diff --git a/posts/2016-10-04-pi-pan-tilt-2.md b/posts/2016-10-04-pi-pan-tilt-2.md index 97dae7f..80a3531 100644 --- a/posts/2016-10-04-pi-pan-tilt-2.md +++ b/posts/2016-10-04-pi-pan-tilt-2.md @@ -139,8 +139,8 @@ However, you can look up close and see how well the details came through - which I find quite impressive for cheap optics and a cheap sensor. -Further posts will follow on some other details, and hopefully other -images! +[Part 3](./2016-10-12-pi-pan-tilt-3.html) delves into the image +processing workflow. [ArduCam]: http://www.arducam.com/camera-modules/raspberrypi-camera/ [forum-raw-images]: https://www.raspberrypi.org/forums/viewtopic.php?p=357138 diff --git a/posts/2016-10-12-pi-pan-tilt-3.md b/posts/2016-10-12-pi-pan-tilt-3.md index 90d5ce1..1c6106d 100644 --- a/posts/2016-10-12-pi-pan-tilt-3.md +++ b/posts/2016-10-12-pi-pan-tilt-3.md @@ -8,7 +8,8 @@ tags: photography, electronics, raspberrypi This is the third part in this series, continuing on from [part 1][part1] and [part 2][]. The last post was about integrating the hardware with Hugin and PanoTools. This one is similarly -technical and without as many pretty pictures, so be forewarned. +technical, and without any pretty pictures (really, it has no concern +at all for aesthetics), so be forewarned. Thus far (aside from my first stitched image) I've been using a raw workflow where possible. That is, all images arrive from the camera @@ -156,26 +157,54 @@ These were done with 16-bit TIFFs rather than 8-bit ones: In theory, the 16-bit ones should be retaining two extra bits of data from the 10-bit sensor data, and thus two extra stops of dynamic range, that the 8-bit image cannot keep. I can't see the slightest -difference myself. Perhaps those two bits are just well below the -noise floor; perhaps if I used a brighter scene, it would be more -apparent. +difference myself. Perhaps those two bits are below the noise floor; +perhaps if I used a brighter scene, it would be more apparent. 
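The two-extra-bits claim is easy to sanity-check with synthetic data.  This is just numpy arithmetic over every possible 10-bit value, not anything from my actual pipeline:

```python
import numpy as np

# All 1024 possible 10-bit sensor values.
sensor = np.arange(1024, dtype=np.uint16)

# Scaled into a 16-bit container: every 10-bit level stays distinct.
as_16bit = sensor << 6

# Truncated into an 8-bit container: the bottom two bits are dropped,
# collapsing 1024 levels down to 256.
as_8bit = (sensor >> 2).astype(np.uint8)
```

Whether those two discarded bits represent real signal or just noise is exactly the question the comparison above is probing.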
 Regardless, starting from raw sensor data rather than the JPEG image
 gets some additional dynamic range.  That's hardly surprising - JPEG
 isn't really known for its faithful reproduction of darker parts of an
-image.  Regardless, bracketing exposures and merging them later (as
-Hugin will do) is probably a better idea than pretending the sensor is
-going to give 10 bits of resolution.
+image.
 
 Here's another comparison, this time a 1:1 crop from the center of an
-image (shot with [this lens][12-40mm], whose Amazon price mysteriously
-is now $146 instead of the $23 I actually paid).  Click for a lossless
-PNG view, as JPEG might eat some of the finer details.
+image (shot at 40mm with [this lens][12-40mm], whose Amazon price
+mysteriously is now $146 instead of the $23 I actually paid).  Click
+the preview for a lossless PNG view, as JPEG might eat some of the
+finer details, or [here][leaves-full] for the full JPEG file
+(including raw, if you want to look around).
 
 [![JPEG & raw comparison](../assets_external/2016-10-12-pi-pan-tilt-3/leaves_test_preview.jpg){width=100%}](../assets_external/2016-10-12-pi-pan-tilt-3/leaves_test.png)
 
+The JPEG image seems to have some aggressive denoising that cuts into
+sharper detail somewhat, as denoising algorithms tend to do.  Of
+course, there's another option: shoot many images from the same point
+and then average them.  That's only applicable in a static scene with
+some sort of rig to hold things in place, which is convenient, since
+that's what I'm making...
+
+[![Shot setup](../assets_external/2016-10-12-pi-pan-tilt-3/IMG_20161016_141826_small.jpg){width=100%}](../assets_external/2016-10-12-pi-pan-tilt-3/IMG_20161016_141826_small.jpg)
+
+I used that (messy) test setup to produce the comparison below of a
+JPEG image, a single raw image, 4 raw images averaged, and 16 raw
+images averaged.  These are again 1:1 crops from the center to show
+noise and detail.
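As an aside before the results: the averaging step itself is nothing exotic.  A minimal numpy sketch, assuming the frames have already been captured and decoded into same-shape arrays, looks like this:

```python
import numpy as np

def average_frames(frames):
    """Average identically exposed frames to suppress random noise.

    Accumulating in float avoids the overflow that summing many
    8- or 16-bit integer frames would cause.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)
```

Since uncorrelated noise falls off as 1/sqrt(N), averaging 16 frames should be worth roughly a 4x (two-stop) noise reduction over a single frame.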
+
+[![JPEG, raw, and averaging](../assets_external/2016-10-12-pi-pan-tilt-3/penguin_compare.jpg){width=100%}](../assets_external/2016-10-12-pi-pan-tilt-3/penguin_compare.png)
+
+Click for the lossless version, and take a look at the finer details.
+4X averaging has clearly reduced the noise relative to the single raw
+image, and has arguably done better than the JPEG image on noise
+while retaining sharper detail; 16X definitely has.
+
+Averaging might get us the full 10 bits of dynamic range by cleaning
+up the noise.  However, if we're able to shoot enough images at
+exactly the same exposure to average them, then we could also shoot
+them at different exposures (i.e. [bracketing][]), merge them into an
+HDR image (or [fuse them][exposure fusion]), and get well outside of
+that limited dynamic range while still getting much of that same
+averaging effect.
+
 I'll cover the remaining two steps I noted - Hugin & PanoTools
-stitching, and postprocessing - in the next post.
+stitching and HDR merging, and postprocessing - in the next post.
 
 [part1]: ./2016-09-25-pi-pan-tilt-1.html
 [part2]: ./2016-10-04-pi-pan-tilt-2.html
@@ -197,3 +226,6 @@ stitching, and postprocessing - in the next post.
 [picamera-raw]: https://picamera.readthedocs.io/en/release-1.10/recipes2.html#bayer-data
 [picamera]: https://www.raspberrypi.org/documentation/usage/camera/python/README.md
 [12-40mm]: https://www.amazon.com/StarDot-Vari-Focal-Camera-Lens-Black/dp/B00IPR1YSC
+[leaves-full]: ../assets_external/2016-10-12-pi-pan-tilt-3/leaves_test_full.jpg
+[exposure fusion]: https://en.wikipedia.org/wiki/Exposure_Fusion
+[bracketing]: https://en.wikipedia.org/wiki/Bracketing
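As a postscript, the fusion idea can be sketched in a few lines of numpy.  This is my own toy simplification - real tools (enfuse, which Hugin uses, weights on contrast and saturation too, and blends on a multi-resolution pyramid) do considerably more - but it shows the core per-pixel weighting:

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Naive exposure fusion over a bracketed stack.

    `images` are float arrays scaled to [0, 1].  Pixels near
    mid-gray (0.5) get high weight; clipped pixels near 0 or 1
    get almost none, so each region of the result is drawn mostly
    from the exposure that captured it best.
    """
    stack = np.stack([img.astype(np.float64) for img in images])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0)
    return (weights * stack).sum(axis=0)
```

Even this per-pixel version shows why a blown-out region in one exposure doesn't drag down the fused result: its weight there is nearly zero, so the better-exposed frame dominates.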