Moving various things around and fixing some links

Chris Hodapp
2020-04-29 19:18:58 -04:00
parent bb2cba781b
commit e3506f9f4c
2030 changed files with 5995 additions and 87 deletions


@@ -6,6 +6,8 @@ tags:
- processing
---
+{{< load-photoswipe >}}
I first dabbled with
[Diffusion-Limited Aggregation](http://en.wikipedia.org/wiki/Diffusion-limited_aggregation)
algorithms some 5 years back when I read about them in a book (later
@@ -19,7 +21,7 @@ like this:
<!-- TODO: Originally:
[![Don't ask for the source code to this](../images/dla2c.png){width=50%}](../images/dla2c.png)\
-->
-![Diffusion Limited Aggregation](./dla2c.png "Don't ask for the source code to this")
+{{< figure resource="dla2c.png" title="Diffusion Limited Aggregation" caption="Don't ask for the source code to this">}}
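The change in this hunk (and in most of the hunks below) is mechanical: a Markdown image becomes a Hugo `figure` shortcode, with the alt text as `title` and the quoted hover text as `caption`. As an illustration only, a sketch of that substitution inferred from the diff itself — the `page="images"` variants in later files are not handled, and none of these names come from the actual migration script (if one existed):

```python
import re

# Pattern and field mapping inferred from the diff: alt text -> title,
# quoted hover text -> caption. Purely illustrative.
IMG = re.compile(r'!\[(?P<title>[^\]]*)\]'
                 r'\(\./(?P<file>[^ )]+)(?: "(?P<caption>[^"]*)")?\)')

def to_figure(line: str) -> str:
    """Rewrite markdown image syntax into a Hugo figure shortcode."""
    def repl(m):
        out = '{{< figure resource="%s" title="%s"' % (m['file'], m['title'])
        if m['caption'] is not None:
            out += ' caption="%s"' % m['caption']
        return out + '>}}'
    return IMG.sub(repl, line)
```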
After about 3 or 4 failed attempts to optimize this program to not
take days to generate images, I finally rewrote it reasonably
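The post's own DLA program isn't shown (and the caption says not to ask for it). As a stand-in, a minimal grid-based sketch of the algorithm — a seed cell in the center, random walkers launched from the border that freeze on contact with the cluster. All names and parameters here are hypothetical, not the author's code:

```python
import random

def dla(size=31, particles=40, seed=1):
    """Minimal diffusion-limited aggregation on a square grid: each
    walker wanders until any 4-neighbor is already stuck, then freezes."""
    random.seed(seed)
    stuck = {(size // 2, size // 2)}            # seed cell in the center
    for _ in range(particles):
        # Launch from a random cell on a random edge of the grid.
        x, y = random.randrange(size), random.choice([0, size - 1])
        if random.random() < 0.5:
            x, y = y, x
        while not any(n in stuck
                      for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))):
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x = min(max(x + dx, 0), size - 1)   # clamp the walk to the grid
            y = min(max(y + dy, 0), size - 1)
        stuck.add((x, y))
    return stuck
```

The naive version is slow for exactly the reason the post describes: most of the runtime is walkers wandering far from the cluster, which is what the usual optimizations (kill circles, launch radii) attack.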


@@ -7,6 +7,8 @@ tags:
- blender
---
+{{< load-photoswipe >}}
This is about the tenth time I've tried to learn
[Blender](http://www.blender.org/). Judging by the notes I've
accumulated so far, I've been at it this time for about a month and a
@@ -60,9 +62,9 @@ too-many-completely-different-versions of Acidity I wrote.
[![This was made directly from some equations. I don't know how I'd do this in Blender.](../images/20110118-sketch_mj2011016e.jpg){width=100%}](../images/20110118-sketch_mj2011016e.jpg)
-->
-![Hive13 bezier splines](./hive13-bezier03.png "What I learned Bezier splines on, and didn't learn enough about texturing.")
+{{< figure resource="hive13-bezier03.png" title="Hive13 bezier splines" caption="What I learned Bezier splines on, and didn't learn enough about texturing.">}}
-![Processing sketch](./20110118-sketch_mj2011016e.jpg "This was made directly from some equations. I don't know how I'd do this in Blender.")
+{{< figure resource="20110118-sketch_mj2011016e.jpg" title="Processing sketch" caption="This was made directly from some equations. I don't know how I'd do this in Blender.">}}
[POV-Ray](http://www.povray.org) was the last program that I
3D-rendered extensively in (this was mostly 2004-2005, as my
@@ -98,6 +100,6 @@ all the precision that I would have had in POV-Ray, but I built them
in probably 1/10 the time. That's the case for the two
work-in-progress Blender images here:
-![20110131-mj20110114b](./20110131-mj20110114b.jpg "This needs a name and a better background")
+{{< figure resource="20110131-mj20110114b.jpg" title="20110131-mj20110114b" caption="This needs a name and a better background">}}
-![20110205-mj20110202-starburst2](./20110205-mj20110202-starburst2.jpg "This needs a name and a better background.")
+{{< figure resource="20110205-mj20110202-starburst2.jpg" title="20110205-mj20110202-starburst2" caption="This needs a name and a better background.">}}


@@ -8,6 +8,8 @@ tags:
- Technobabble
---
+{{< load-photoswipe >}}
After finally deciding to look around for some projects on GitHub, I
found a number of very interesting ones in a matter of minutes.
@@ -26,7 +28,7 @@ probably about 30 minutes to put together the code to generate the
usual gaudy test algorithm I try when bootstrapping from a new
environment:
-![Standard trippy image](./acidity-standard.png)
+{{< figure resource="acidity-standard.png" title="Standard trippy image">}}
(Yeah, it's gaudy. But when you see it animated, it's amazingly trippy
and mesmerizing.)
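The post doesn't include the test algorithm itself. As an illustration of the kind of image being described, here is a classic sine-sum "plasma" of the sort often used as a first-render test — entirely a stand-in, not the author's code; every name and constant is invented:

```python
import math

def plasma(w, h, t=0.0):
    """Sum a few overlapping sine waves per pixel; animating t is what
    produces the shifting, trippy effect when displayed as colors."""
    img = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            v = (math.sin(x / 8.0 + t)
                 + math.sin(y / 6.0)
                 + math.sin((x + y) / 12.0)
                 + math.sin(math.hypot(x - w / 2, y - h / 2) / 5.0 + t))
            img[y][x] = (v + 4.0) / 8.0   # map sum in [-4, 4] to [0, 1]
    return img
```

Mapping the [0, 1] values through a cycling palette per frame gives the animated version.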


@@ -8,6 +8,8 @@ tags:
- Technobabble
---
+{{< load-photoswipe >}}
My [last post](./2011-08-27-isolated-pixel-pushing.html) mentioned a
program called [Context Free](http://www.contextfreeart.org/) that I
came across via the [Syntopia](http://blog.hvidtfeldts.net/) blog as
@@ -29,14 +31,14 @@ I downloaded the program, started it, and their welcome image (with
the relatively short source code right beside it) greeted me, rendered
on-the-spot:
-![welcome.png](./welcome.png)
+{{< figure resource="welcome.png" title="welcome.png">}}
The program was very easy to work with. Their quick reference card was
terse but only needed a handful of examples and a few pages of
documentation to fill in the gaps. After about 15 minutes, I'd put
together this:
-![spiral-first-20110823.png](./spiral-first-20110823.png)
+{{< figure resource="spiral-first-20110823.png" title="spiral-first-20110823.png">}}
Sure, it's mathematical and simple, but I think being able to put it
together in 15 minutes in a general program (i.e. not a silly ad-hoc
@@ -69,9 +71,9 @@ rule SQUARE1 {
I worked with it some more the next day and had some things like this:
-![tree3-abg.png](./tree3-abg.png)
+{{< figure resource="tree3-abg.png" title="tree3-abg.png">}}
-![tree4-lul.png](./tree4-lul.png)
+{{< figure resource="tree4-lul.png" title="tree4-lul.png">}}
I'm not sure what it is. It looks sort of like a tree made of
lightning. Some Hive13 people said it looks like a lockpick from


@@ -8,6 +8,8 @@ tags:
- image_compression
---
+{{< load-photoswipe >}}
*(This is a modified version of what I wrote up at work when I saw
that progressive JPEGs could be nearly a drop-in replacement that
offered some additional functionality and ran some tests on this.)*
@@ -356,13 +358,15 @@ Examples
Here are all 10 scans from a standard progressive JPEG, separated out with the example code:
-![Scan 1](./cropphoto1.png)
-![Scan 2](./cropphoto2.png)
-![Scan 3](./cropphoto3.png)
-![Scan 4](./cropphoto4.png)
-![Scan 5](./cropphoto5.png)
-![Scan 6](./cropphoto6.png)
-![Scan 7](./cropphoto7.png)
-![Scan 8](./cropphoto8.png)
-![Scan 9](./cropphoto9.png)
-![Scan 10](./cropphoto10.png)
+{{< gallery >}}
+{{< figure resource="cropphoto1.png" title="Scan 1">}}
+{{< figure resource="cropphoto2.png" title="Scan 2">}}
+{{< figure resource="cropphoto3.png" title="Scan 3">}}
+{{< figure resource="cropphoto4.png" title="Scan 4">}}
+{{< figure resource="cropphoto5.png" title="Scan 5">}}
+{{< figure resource="cropphoto6.png" title="Scan 6">}}
+{{< figure resource="cropphoto7.png" title="Scan 7">}}
+{{< figure resource="cropphoto8.png" title="Scan 8">}}
+{{< figure resource="cropphoto9.png" title="Scan 9">}}
+{{< figure resource="cropphoto10.png" title="Scan 10">}}
+{{< /gallery >}}
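The "example code" that separated these scans isn't reproduced in this hunk, but the underlying idea is simple. Within a JPEG's entropy-coded data, every 0xFF byte is either stuffed (followed by 0x00) or a restart marker, so a raw `FF DA` pair only occurs as a real Start-of-Scan marker; truncating the file just before scan N+1 and appending an End-of-Image marker yields a decodable partial image. A hedged sketch of that approach (all names are invented, and real tooling should use a proper JPEG parser):

```python
SOS = b"\xff\xda"  # Start of Scan marker
EOI = b"\xff\xd9"  # End of Image marker

def split_scans(jpeg: bytes):
    """Return one decodable JPEG per scan: the file truncated just
    before each subsequent SOS marker, with an EOI appended."""
    offsets, pos = [], jpeg.find(SOS)
    while pos != -1:
        offsets.append(pos)
        pos = jpeg.find(SOS, pos + 2)
    ends = offsets[1:]   # each partial file ends where the next scan starts
    return [jpeg[:end] + EOI for end in ends] + [jpeg]
```

Feeding each returned buffer to a normal JPEG decoder renders the image as it would appear after that many scans had arrived.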

Binary file not shown (before: 1.3 MiB).

Binary file not shown (before: 3.7 MiB).


@@ -8,6 +8,8 @@ tags:
- raspberrypi
---
+{{< load-photoswipe >}}
Earlier this year I was turning around ideas in my head - perhaps
inspired by Dr. Essa's excellent class,
[CS6475: Computational Photography][cs6475] - about the possibility of
@@ -42,7 +44,7 @@ I eventually had something mostly out of laser-cut plywood, hardware
store parts, and [cheap steppers][steppers]. It looks something like
this, mounted on a small tripod:
-![png](./IMG_20160912_144539.jpg)
+{{< figure page="images" resource="2016-09-25-pi-pan-tilt-1/IMG_20160912_144539.jpg" >}}
I am able to move the steppers thanks to [Matt's code][raspi-spy] and
capture images with [raspistill][]. The arrangement here provides two
@@ -64,7 +66,7 @@ picked up a [25mm M12 lens][25mm-lens] - still an angle of view of
about 10 degrees on this sensor - and set it up in the park for a test
run:
-![](./IMG_20160918_160857.jpg "My shot's not slanted, the ground is")
+{{< figure page="images" resource="2016-09-25-pi-pan-tilt-1/IMG_20160918_160857.jpg" caption="My shot's not slanted, the ground is">}}
(*Later note*: I didn't actually use the 25mm lens on that shot. I
used a 4mm (or something) lens that looks pretty much the same, and
@@ -99,7 +101,7 @@ sign.
The first results look decent, but fuzzy, as $10 optics are prone to
produce:
-[![](./zwIJpFn.jpg)](./zwIJpFn.jpg)
+{{< figure page="images" resource="2016-09-25-pi-pan-tilt-1/zwIJpFn.jpg" >}}
Follow along to [part 2](./2016-10-04-pi-pan-tilt-2.html).


@@ -8,6 +8,8 @@ tags:
- raspberrypi
---
+{{< load-photoswipe >}}
In my [last post](./2016-09-25-pi-pan-tilt-1.html) I introduced some
of the project I've been working on. This post is a little more
technical; if you don't care, and just want to see a 91 megapixel
@@ -77,7 +79,7 @@ With no refining, tweaking, or optimization, only the per-image
stepper motor positions and my guess at the lens's FOV, here is how
this looks in Hugin's fast preview:
-[![Hive13](../images/2016-10-04-pi-pan-tilt-2/hugin-steppers-only.jpg){width=100%}](../images/2016-10-04-pi-pan-tilt-2/hugin-steppers-only.jpg)
+{{< figure page="images" resource="2016-10-04-pi-pan-tilt-2/hugin-steppers-only.jpg" caption="Hive13" >}}
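Those "per-image stepper motor positions" translate to yaw/pitch angles almost directly. A hedged sketch of that conversion — the steps-per-revolution value is an assumption (a geared 28BYJ-48-style motor), not something stated in the post:

```python
# Hypothetical conversion from stepper step counts to the yaw/pitch
# angles fed to the panorama tool. STEPS_PER_REV depends entirely on
# the real motor and gearing; 4096 here is only an assumption.
STEPS_PER_REV = 4096

def steps_to_deg(steps: int) -> float:
    """Convert a step count to degrees of rotation."""
    return 360.0 * steps / STEPS_PER_REV

def positions_to_angles(shots):
    """Map (pan_steps, tilt_steps) per shot to (yaw_deg, pitch_deg)."""
    return [(steps_to_deg(p), steps_to_deg(t)) for p, t in shots]
```

With the lens's field of view as the only other input, those angles are enough to seed each image's position before the optimizer refines them.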
*(This is a test run that I did inside of [Hive13][], by the way. I
used the CS-mount [ArduCam][] and its included lens. Shots were in a
@@ -102,7 +104,7 @@ screenshots - on the left, the image positions from the stepper
motors, and on the right, the optimized positions that Hugin
calculated:
-[![Hugin comparison](../assets_external/2016-10-04-pi-pan-tilt-2/hugin-comparison.png){width=100%}](../assets_external/2016-10-04-pi-pan-tilt-2/hugin-comparison.png)
+{{< figure page="images" resource="2016-10-04-pi-pan-tilt-2/hugin-comparison.png" caption="Hugin comparison" >}}
They sort of match up, though pitch deviates a bit. I believe that's
because I shifted the pitch of the entire thing to straighten it out,
@@ -120,18 +122,23 @@ A full-resolution JPEG of the result after automated stitching,
exposure fusion, lens correction, and so on, is below in this handy
zoomable viewer using [OpenSeadragon][]:
{{< rawhtml >}}
<div id="openseadragon1" style="width: 100%; height: 600px;"></div>
-<script src="../js/openseadragon/openseadragon.min.js"></script>
+<script src="/js/openseadragon/openseadragon.min.js"></script>
<script type="text/javascript">
var viewer = OpenSeadragon({
id: "openseadragon1",
-prefixUrl: "../js/openseadragon/images/",
-tileSources: "../assets_external/2016-10-04-pi-pan-tilt-2/2016-10-04-hive13.dzi"
+prefixUrl: "/js/openseadragon/images/",
+tileSources: "../../images/2016-10-04-pi-pan-tilt-2/2016-10-04-hive13.dzi"
});
</script>
{{< /rawhtml >}}
+<!-- TODO: Can I get these references right somehow? I should be
+getting at the link as a Page Resource -->
It's 91.5 megapixels; if the above viewer doesn't work right, a
-[full-resolution JPEG](../assets_external/2016-10-04-pi-pan-tilt-2/2016-10-04-hive13.jpg)
+[full-resolution JPEG](../../images/2016-10-04-pi-pan-tilt-2/2016-10-04-hive13.jpg)
is available too. The full TIFF image is 500 MB, so understandably, I
didn't feel like hosting it, particularly when it's not the prettiest
photo or the most technically-perfect one (it's full of lens flare,


@@ -8,6 +8,8 @@ tags:
- raspberrypi
---
+{{< load-photoswipe >}}
This is the third part in this series, continuing on from
[part 1][part1] and [part 2][part2]. The last post was about
integrating the hardware with Hugin and PanoTools. This one is
@@ -143,18 +145,18 @@ provide, I took 9 example shots of the same scene, ranging from about
full-resolution JPEG image of these shots, normalized - in effect,
trying to re-expose them properly:
-[![](../images/2016-10-12-pi-pan-tilt-3/tile_jpg.jpg){width=100%}](../images/2016-10-12-pi-pan-tilt-3/tile_jpg.jpg)
+{{< figure page="images" resource="2016-10-12-pi-pan-tilt-3/tile_jpg.jpg" >}}
The below contains the raw sensor data, turned to 8-bit TIFF and then
again normalized. It's going to look different than the JPEG due to
the lack of whitebalance adjustment, denoising, brightness, contrast,
and so on.
-[![](../images/2016-10-12-pi-pan-tilt-3/tile_8bit.jpg){width=100%}](../images/2016-10-12-pi-pan-tilt-3/tile_8bit.jpg)
+{{< figure page="images" resource="2016-10-12-pi-pan-tilt-3/tile_8bit.jpg" >}}
These were done with 16-bit TIFFs rather than 8-bit ones:
-[![](../images/2016-10-12-pi-pan-tilt-3/tile_16bit.jpg){width=100%}](../images/2016-10-12-pi-pan-tilt-3/tile_16bit.jpg)
+{{< figure page="images" resource="2016-10-12-pi-pan-tilt-3/tile_16bit.jpg" >}}
In theory, the 16-bit ones should be retaining two extra bits of data
from the 10-bit sensor data, and thus two extra stops of dynamic
@@ -174,7 +176,7 @@ I actually paid). Click the preview for a lossless PNG view, as JPEG
might eat some of the finer details, or [here][leaves-full] for the
full JPEG file (including raw, if you want to look around).
-[![JPEG & raw comparison](../assets_external/2016-10-12-pi-pan-tilt-3/leaves_test_preview.jpg){width=100%}](../assets_external/2016-10-12-pi-pan-tilt-3/leaves_test.png)
+{{< figure page="images" resource="2016-10-12-pi-pan-tilt-3/leaves_test.png" caption="JPEG & raw comparison" >}}
The JPEG image seems to have some aggressive denoising that cuts into
sharper detail somewhat, as denoising algorithms tend to do. Of
@@ -183,14 +185,14 @@ the same point, and then average them. That's only applicable in a
static scene with some sort of rig to hold things in place, which is
convenient, since that's what I'm making...
-[![Shot setup](../assets_external/2016-10-12-pi-pan-tilt-3/IMG_20161016_141826_small.jpg){width=100%}](../assets_external/2016-10-12-pi-pan-tilt-3/IMG_20161016_141826_small.jpg)
+{{< figure page="images" resource="2016-10-12-pi-pan-tilt-3/IMG_20161016_141826.jpg" >}}
I used that (messy) test setup to produce the below comparison between
a JPEG image, a single raw image, 4 raw images averaged, and 16 raw
images averaged. These are again 1:1 crops from the center to show
noise and detail.
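The averaging step itself is just a per-pixel mean over the aligned frames; uncorrelated noise should fall by roughly the square root of the frame count. A numpy sketch, with the array shapes and the simulated check entirely assumed rather than taken from the post:

```python
import numpy as np

def average_frames(frames):
    """Per-pixel mean of N aligned exposures. For uncorrelated noise,
    the standard deviation drops by roughly sqrt(N)."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulated check: noisy copies of a flat scene average toward it.
rng = np.random.default_rng(0)
scene = np.full((8, 8), 100.0)
frames = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(16)]
avg = average_frames(frames)
```

This is why the approach only applies to a static scene on a rigid mount: the frames must already be aligned for a plain mean to sharpen rather than smear.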
[![JPEG, raw, and averaging](../assets_external/2016-10-12-pi-pan-tilt-3/penguin_compare.jpg){width=100%}](../assets_external/2016-10-12-pi-pan-tilt-3/penguin_compare.png)
{{< figure page="images" resource="2016-10-12-pi-pan-tilt-3/penguin_compare.png" caption="JPEG, raw, and averaging">}}
Click for the lossless version, and take a look around finer details.
4X averaging has clearly reduced the noise from the un-averaged raw