Progressed in pi-pan-tilt post part 3; added example images

This commit is contained in:
Chris Hodapp 2016-10-15 22:40:01 -04:00
parent 92b4bb6fc6
commit a179e11165
4 changed files with 100 additions and 38 deletions

Binary image files not shown (three images added: 79 KiB, 80 KiB, 58 KiB).


@ -1,15 +1,14 @@
---
title: Pi pan-tilt for huge images, part 3: ArduCam & raw images
author: Chris Hodapp
date: October 12, 2016
tags: photography, electronics, raspberrypi
---
This is the third part in this series, continuing on from
[part 1][part1] and [part 2][part2]. The last post was about
integrating the hardware with Hugin and PanoTools. This one is
similarly technical and without as many pretty pictures, so be
forewarned.

Thus far (aside from my first stitched image) I've been using a raw
workflow where possible. That is, all images arrive from the camera
@ -23,7 +22,7 @@ format. To list out some typical steps in this:
  [OpenEXR][] file (for [high dynamic range][hdr]).
- Import into something like [darktable][] for postprocessing.

I deal mostly with the first two here.
# Acquiring Images

@ -40,35 +39,41 @@ that is within double the price and interfaces directly with a
computer of some kind (USB webcams and the like), I think you'll find
it quite impressive:

- It has versions in three lens mounts: CS, C, and M12. CS-mount and
  C-mount lenses are plentiful from their existing use in security
  cameras, generally inexpensive, and generally good enough quality
  (and for a bit extra, ones are available with
  electrically-controllable apertures and focus). M12 lenses (or
  "board lenses") are... plentiful and inexpensive, at least. I'll
  probably go into more detail on optics in a later post.
- 10-bit raw Bayer data straight off the sensor is available (see
  [raspistill][] and its `--raw` option, or how
  [picamera][picamera-raw] does it). Thus, we can bypass all of the
  automatic brightness, sharpness, saturation, contrast, and
  whitebalance corrections, which are great for snapshots and video
  but really annoying for composite images.
- Likewise via [raspistill][], we may directly set the ISO speed and
  the shutter time in microseconds, bypassing all automatic exposure
  control.
- It has a variety of features pertaining to video, none of which I
  care about for this application. Go look in [picamera][] for the
  details.

I'm mostly using the CS-mount version, which came with a lens that is
surprisingly sharp. If anyone can tell me how to do better for $30
(perhaps with those GoPro knockoffs that are emerging?), please tell
me.
Reading raw images from the Raspberry Pi cameras is a little more
convoluted, and I suspect that this is just how the CSI-2 pathway for
imaging works on the Raspberry Pi. In short: with `--raw`, raspistill
produces a JPEG file which contains a normal, lossy image, followed by
a binary dump of the raw sensor data; not as metadata, not as JPEG
data, just... dumped after the JPEG data. *(Where I refer to "JPEG
image" here, I'm referring to actual JPEG-encoded image data, not the
binary dump stuck inside something that is coincidentally a JPEG
file.)*
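Getting the two parts back apart is then just a byte-level split. A sketch in Python; the `BRCM` marker for the start of the raw section is an assumption taken from published notes on this format (e.g. the Pi-specific dcraw ports), and may differ across sensors and firmware:

```python
def split_raw_capture(blob: bytes):
    """Split a --raw capture into (jpeg_bytes, raw_dump or None).

    Assumes the appended raw section starts with an ASCII 'BRCM' header.
    """
    marker = blob.rfind(b"BRCM")
    if marker == -1:
        return blob, None      # no raw section appended
    return blob[:marker], blob[marker:]

# Synthetic demonstration, not a real capture:
fake = b"\xff\xd8 jpeg payload \xff\xd9" + b"BRCM" + b"\x00" * 8
jpeg, raw = split_raw_capture(fake)
```

Searching from the end (`rfind`) avoids tripping over a stray `BRCM` inside the JPEG payload itself.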
Most of my image captures were with something like:

    raspistill --raw -t 1 -w 640 -h 480 -ss 1000 -ISO 100 -o filename.jpg
@ -79,6 +84,21 @@ saving only a much-reduced JPEG as a thumbnail of the raw data, rather
than wasting the disk space and I/O on larger JPEG data than I'll use.
`-ss 1000` is for a 1000 microsecond exposure (thus 1 millisecond),
and `-ISO 100` is for ISO 100 speed (the lowest this sensor will do).
Note that we may also remove the `-ss` option and instead pass `-set`
to get lines like:
    mmal: Exposure now 10970, analog gain 256/256, digital gain 256/256
    mmal: AWB R=330/256, B=337/256
That 10970 is the shutter speed, again in microseconds, according to
the camera's metering. Analog and digital gain relate to ISO, but
only somewhat indirectly; setting ISO will result in changes to both,
and from what I've read, they both equal 1 if the ISO speed is 100.
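If one wants to script against that metering output, the lines are easy enough to scrape. A small sketch; the line format is copied from the output above, while the helper name and dictionary keys are made up for illustration:

```python
import re

# Matches lines like:
#   mmal: Exposure now 10970, analog gain 256/256, digital gain 256/256
EXPOSURE_RE = re.compile(
    r"mmal: Exposure now (\d+), "
    r"analog gain (\d+)/(\d+), digital gain (\d+)/(\d+)"
)

def parse_exposure(line):
    """Return metered exposure (microseconds) and gains, or None."""
    m = EXPOSURE_RE.search(line)
    if m is None:
        return None
    exp_us, ag_num, ag_den, dg_num, dg_den = map(int, m.groups())
    return {
        "exposure_us": exp_us,
        "analog_gain": ag_num / ag_den,
        "digital_gain": dg_num / dg_den,
    }
```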
I just switched my image captures to use [picamera][] rather than
`raspistill`. They both are fairly thin wrappers on top of the
hardware; the only real difference is that picamera exposes things via
a Python API rather than a commandline tool.
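Once a Bayer dump is in hand, by whichever route, getting at the pixel values is mostly bit-twiddling. A sketch of unpacking the 10-bit data, assuming the 4-pixels-per-5-bytes packing the picamera raw-Bayer recipe describes (4 high bytes, then one byte carrying each pixel's low 2 bits); other sensors or firmware may pack differently:

```python
import numpy as np

def unpack_10bit(packed: np.ndarray) -> np.ndarray:
    """Unpack 10-bit pixel values from uint8 data (length a multiple of 5)."""
    groups = packed.reshape(-1, 5).astype(np.uint16)
    pixels = groups[:, :4] << 2          # high 8 bits of each of 4 pixels
    low = groups[:, 4]                   # low 2 bits of all four pixels
    for i in range(4):
        pixels[:, i] |= (low >> ((3 - i) * 2)) & 0b11
    return pixels.reshape(-1)

# Synthetic example: a 5-byte group packing the values 0, 1023, 512, 3.
packed = np.array([0, 255, 128, 0, 0b00110011], dtype=np.uint8)
```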
# Converting Raw Images
@ -97,26 +117,65 @@ dcraw to read the resultant raw images, which I fixed with a
[trivial patch][dcraw-pr]. However, that board had other problems, so
I'm no longer using it. (TODO: Explain those problems.)
My conversion step is something like:

    dcraw -T -W *.jpg

`-T` writes a TIFF and passes through metadata; `-W` tells dcraw to
leave the brightness alone. I found out the hard way that leaving this
out would lead to some images with mangled exposures. From here,
dcraw produces a `.tiff` for each `.jpg`. We can, if we wish, use all
of that 10-bit range by using `-6` to make a 16-bit TIFF rather than
an 8-bit one. In my own tests, though, it makes no difference
whatsoever because of the sensor's noisiness.
We can also rotate the image at this step, but I prefer to instead add
this as an initial roll value of -90, 90, or 180 degrees when creating
the PTO file. This keeps the lens parameters correct if, for
instance, we have already computed a distortion model of a lens.
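For reference, that initial roll ends up as the `r` parameter on the image (`i`) lines of the PTO file. A hand-written sketch of one such line, with hypothetical width, height, field of view, and filename:

```
i w2592 h1944 f0 v65 r90 p0 y0 n"img_000.tif"
```

Here `r90` is the 90-degree roll; `p` and `y` are pitch and yaw.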
To give an example of the little bit of extra headroom that raw images
provide, I took 9 example shots of the same scene, underexposed by
roughly 1.0 EV down to 9.0 EV. The first grid is the full-resolution
JPEG image of these shots, normalized - in effect, trying to
re-expose them properly:
[![](../images/2016-10-12-pi-pan-tilt-3/tile_jpg.jpg){width=100%}](../images/2016-10-12-pi-pan-tilt-3/tile_jpg.jpg)
The grid below contains the raw sensor data, turned into 8-bit TIFFs
and then again normalized. It's going to look different from the JPEG
due to the lack of whitebalance adjustment, denoising, brightness,
contrast, and so on.
[![](../images/2016-10-12-pi-pan-tilt-3/tile_8bit.jpg){width=100%}](../images/2016-10-12-pi-pan-tilt-3/tile_8bit.jpg)
These were done with 16-bit TIFFs rather than 8-bit ones:
[![](../images/2016-10-12-pi-pan-tilt-3/tile_16bit.jpg){width=100%}](../images/2016-10-12-pi-pan-tilt-3/tile_16bit.jpg)
In theory, the 16-bit ones should be retaining two extra bits of data
from the 10-bit sensor data, and thus two extra stops of dynamic
range, that the 8-bit image cannot keep. I can't see the slightest
difference myself. Perhaps those two bits are just well below the
noise floor; perhaps if I used a brighter scene, it would be more
apparent.
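A quick simulation supports the noise-floor guess: quantizing a noisy 10-bit signal down to 8 bits barely changes the overall error, because the quantization step sits well under the noise. The noise level here (8 LSB) is an assumed figure for illustration, not a measurement of this sensor:

```python
import numpy as np

rng = np.random.default_rng(42)
scene = rng.integers(0, 1024, size=200_000).astype(float)  # "true" 10-bit values
# Add sensor noise of a few LSB (assumed sigma, purely illustrative):
noisy = np.clip(scene + rng.normal(0.0, 8.0, size=scene.shape), 0, 1023)

err_full = noisy - scene                              # keep all 10 bits
quantized = np.clip(np.round(noisy / 4), 0, 255) * 4  # keep only 8 bits
err_8bit = quantized - scene

rms_full = np.sqrt(np.mean(err_full ** 2))
rms_8bit = np.sqrt(np.mean(err_8bit ** 2))
print(f"RMS error keeping 10 bits: {rms_full:.2f} LSB; "
      f"keeping 8 bits: {rms_8bit:.2f} LSB")
```

With noise at this level the two RMS figures come out nearly identical, which would match not being able to see any difference between the 8-bit and 16-bit TIFFs.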
Regardless, starting from raw sensor data rather than the JPEG image
gets some additional dynamic range. That's hardly surprising; JPEG
isn't really known for its faithful reproduction of the darker parts
of an image. Even so, bracketing exposures and merging them later (as
Hugin will do) is probably a better idea than pretending the sensor is
going to give 10 bits of resolution.
Here's another comparison, this time a 1:1 crop from the center of an
image (shot with [this lens][12-40mm], whose Amazon price mysteriously
is now $146 instead of the $23 I actually paid). Click for a lossless
PNG view, as JPEG might eat some of the finer details.
[![JPEG & raw comparison](../assets_external/2016-10-12-pi-pan-tilt-3/leaves_test_preview.jpg){width=100%}](../assets_external/2016-10-12-pi-pan-tilt-3/leaves_test.png)
I'll cover the remaining two steps I noted - Hugin & PanoTools
stitching, and postprocessing - in the next post.
[part1]: ./2016-09-25-pi-pan-tilt-1.html
[part2]: ./2016-10-04-pi-pan-tilt-2.html
@ -135,3 +194,6 @@ I'm no longer using it. (TODO: Explain those problems.)
[forum2]: https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=92562
[dcraw-6by9]: https://github.com/6by9/RPiTest/tree/master/dcraw
[dcraw-pr]: https://github.com/6by9/RPiTest/pull/1
[picamera-raw]: https://picamera.readthedocs.io/en/release-1.10/recipes2.html#bayer-data
[picamera]: https://www.raspberrypi.org/documentation/usage/camera/python/README.md
[12-40mm]: https://www.amazon.com/StarDot-Vari-Focal-Camera-Lens-Black/dp/B00IPR1YSC