diff --git a/assets_external/readme.txt b/assets_external/readme.txt
new file mode 100644
index 0000000..41f8f70
--- /dev/null
+++ b/assets_external/readme.txt
@@ -0,0 +1,6 @@
+This directory is something of a placeholder for assets too large to
+be committed into git along with everything else. It might eventually
+be absorbed into something like git-lfs, or it might move to some
+other format.
+
+For now: don't commit things here into git; just sync them manually.
diff --git a/posts/2016-10-04-pi-pan-tilt-2.md b/posts/2016-10-04-pi-pan-tilt-2.md
index 35440e7..80e0af1 100644
--- a/posts/2016-10-04-pi-pan-tilt-2.md
+++ b/posts/2016-10-04-pi-pan-tilt-2.md
@@ -11,11 +11,11 @@ little further on this might have seen that I made an apparatus that
 captures a series of images from fairly precise positions, and then
 completely discards that position information, hands the images off to
 [Hugin][] and [PanoTools][], and has them crunch numbers for awhile to
-derive *the very same position information* for each image.
+calculate *the very same position information* for each image.
 
-That's a slight oversimplification - they also derive lens parameters,
-they derive other position parameters that I ignore, and the position
-information will deviate because:
+That's a slight oversimplification - they also calculate lens
+parameters, they calculate other position parameters that I ignore,
+and the position information will deviate because:
 
 - Stepper motors can stall, and these steppers may have some
   hysteresis in the gears.
@@ -35,15 +35,14 @@ help them along, so we may as well use the information.
 
 Also, these optimizations depend on having enough good data to average
 out to a good answer. Said data comes from matches between features
-in overlapping images (say, using something like [SIFT][] and
-[RANSAC][]). Even if we've left plenty of overlap in the images we've
-shot, some parts of scenes can simply lack features (like corners)
-that work well for this. We may end up with images for which
-optimization can't really improve the estimated position, and here a
-guess based on where we think the stepper motors were is much better
-than nothing.
-
-(TODO: Stick a photo here to explain features? Link to my CV text?)
+in overlapping images, say, using something like [SIFT][] and
+[RANSAC][]. Even if we've left plenty of overlap in the images we've
+shot, some parts of scenes can simply lack features like corners that
+work well for this (see chapter 4 of
+[Computer Vision: Algorithms and Applications][szeliski] if you're
+really curious). We may end up with images for which optimization
+can't really improve the estimated position, and here a guess based on
+where we think the stepper motors were is much better than nothing.
 
 If we look at the [PTO file format][pto] (which Hugin & PanoTools
 use), it has pitch, yaw, and roll for each image. Pitch and yaw are
@@ -60,18 +59,61 @@ grid in which I shot images. The only real conversion needed is to
 convert steps to degrees, which for these steppers means using 360 /
 64 / 63.63895 = about 0.0884, according to [this][steps].
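+
+As a concrete illustration, here is a small sketch of turning
+per-image step counts into the pitch and yaw fields of PTO `i` lines.
+It's hypothetical - not the script I actually ran - and the image
+dimensions, FOV, and filenames below are made up:
+
+```haskell
+-- Hypothetical sketch: convert stepper step counts to degrees and
+-- emit PTO "i" lines. The w/h/v/r values and filenames are
+-- placeholders, not the ones from my rig.
+degPerStep :: Double
+degPerStep = 360 / 64 / 63.63895   -- ~0.0884 degrees per step
+
+ptoImageLine :: (Int, Int, FilePath) -> String
+ptoImageLine (yawSteps, pitchSteps, fname) = unwords
+  [ "i", "w2592", "h1944", "f0", "v50", "r0"
+  , "p" ++ show (fromIntegral pitchSteps * degPerStep)
+  , "y" ++ show (fromIntegral yawSteps * degPerStep)
+  , "n\"" ++ fname ++ "\""
+  ]
+
+main :: IO ()
+main = mapM_ (putStrLn . ptoImageLine)
+  [ (0,   0, "img00.jpg")
+  , (170, 0, "img01.jpg")   -- ~15 degrees of yaw at ~0.0884 deg/step
+  ]
+```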
 
-With no refining, tweaking, or optimization, here is how this looks in
-Hugin:
+With no refining, tweaking, or optimization - only the per-image
+stepper motor positions and my guess at the lens's FOV - here is how
+this looks in Hugin's fast preview:
 
-(supply screenshot here)
+[![Hive13](../images/2016-10-04-pi-pan-tilt-2/hugin-steppers-only.jpg){width=100%}](../images/2016-10-04-pi-pan-tilt-2/hugin-steppers-only.jpg)
+
+*(This is a test run that I did inside [Hive13][], by the way. I
+used the CS-mount [ArduCam][] and its included lens. Shots were in a
+14 x 4 grid and about 15 degrees apart. People and objects were
+moving around inside the space at the time, which may account for some
+weirdness...)*
+
+Though it certainly has gaps and seams, it's surprisingly coherent.
+The curved-line distortion on the right side of Hugin's GUI is due to
+the [projection][]; even perfect optics and perfect positioning
+information couldn't correct it. Do you recall learning in school
+that it's impossible to flatten the globe of the world onto a
+two-dimensional map without distortion? This is exactly the same
+problem - which is likely why Hugin's GUI maps all the pictures onto a
+globe on the left. That's another topic completely though...
+
+Of course, Hugin pretty much automates the process of finding control
+points, matching them, and then finding optimal positions for each
+image, so that is what I did next. We can also look at these
+positions directly in Hugin's GUI. The image below contains two
+screenshots - on the left, the image positions from the stepper
+motors, and on the right, the optimized positions that Hugin
+calculated:
+
+[![Hugin comparison](../assets_external/2016-10-04-pi-pan-tilt-2/hugin-comparison.png){width=100%}](../assets_external/2016-10-04-pi-pan-tilt-2/hugin-comparison.png)
+
+They sort of match up, though pitch deviates a bit. I believe that's
+because I shifted the pitch of the entire thing to straighten it out
+(or perhaps Hugin did this automatically to center it), but I haven't
+examined this in detail yet.
+
+A full-resolution JPEG of the result (after automated stitching,
+exposure fusion, lens correction, and so on) is linked below:
+
+[![Hive13 full](../assets_external/2016-10-04-pi-pan-tilt-2/hive13-20161004-fused-smaller.jpg){width=100%}](../assets_external/2016-10-04-pi-pan-tilt-2/hive13-20161004-fused.jpg)
+
+It's 91 megapixels. The full TIFF image is 250 MB, so understandably,
+I didn't feel like hosting it, particularly when it's not the
+prettiest photo or the most technically perfect one (it's full of lens
+flare, chromatic aberration, overexposure, noise, and the occasional
+stitching artifact).
+
+However, you can look up close and see how well the details came
+through - which I find pretty impressive for cheap optics and a cheap
+sensor.
 
 TODO:
 
-- We're using Panotools and our apparatus together; they can
-  cross-check each other.
-- Conversion for steppers; axes that Hugin/Panotools use and what I
-  use
-- dcraw conversion?
+- This was done completely with a raw workflow, blah blah blah
+- How did I wire the steppers, vs. how does Hugin see things?
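+
+As a small appendix to the comparison above: rather than eyeballing
+screenshots, one could compare the numbers directly. Here is another
+hedged sketch - again hypothetical, not part of my actual workflow -
+that pulls the optimized p (pitch) and y (yaw) values back out of a
+.pto file. The parsing is deliberately simplistic and the filename is
+made up:
+
+```haskell
+-- Hypothetical sketch: extract pitch/yaw from the "i" lines of a PTO
+-- file. Only handles plain numeric fields like "p1.5" or "y-12.3".
+import Data.Maybe (listToMaybe)
+
+-- Find the first word starting with the given field letter and read
+-- the rest of it as a number.
+field :: Char -> String -> Maybe Double
+field c line = listToMaybe
+  [ v | (h:rest) <- words line, h == c, (v, "") <- reads rest ]
+
+-- Collect (pitch, yaw) from every image line in the file's text.
+pitchYaw :: String -> [(Double, Double)]
+pitchYaw pto =
+  [ (p, y) | l <- lines pto
+           , take 2 l == "i "
+           , Just p <- [field 'p' l]
+           , Just y <- [field 'y' l] ]
+
+main :: IO ()
+main = do
+  s <- readFile "hive13.pto"   -- made-up filename
+  mapM_ print (pitchYaw s)
+```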
 
 [ArduCam]: http://www.arducam.com/camera-modules/raspberrypi-camera/
 [forum-raw-images]: https://www.raspberrypi.org/forums/viewtopic.php?p=357138
@@ -84,4 +126,7 @@
 [steps]: https://arduino-info.wikispaces.com/SmallSteppers?responseToken=04cbc07820c67b78b09c414cd09efa23f
 [SIFT]: https://en.wikipedia.org/wiki/Scale-invariant_feature_transform
 [RANSAC]: https://en.wikipedia.org/wiki/RANSAC
+[hive13]: http://hive13.org/
+[projection]: http://wiki.panotools.org/Projections
+[szeliski]: http://szeliski.org/Book/
 [pto]: ???
diff --git a/site.hs b/site.hs
index b834cc4..cb68517 100644
--- a/site.hs
+++ b/site.hs
@@ -35,6 +35,10 @@ main = hakyll $ do
     match "assets/**" $ do
         route idRoute
         compile copyFileCompiler
+
+    match "assets_external/**" $ do
+        route idRoute
+        compile copyFileCompiler
 
     match "images/favicons/*" $ do
         route $ customRoute $ takeFileName . toFilePath