Begin slow migration to Hugo...

hugo_blag/content/posts/2009-10-15-fun-with-nx-stuff.md

---
title: Fun with NX stuff
date: October 15, 2009
author: Chris Hodapp
tags:
- Technobabble
---

So, I was trying out various NX servers because I'd had very good luck
with NX in the past and generally found it faster than VNC, RDP, or
X11 over SSH. My options appeared to be:

- NoMachine's server
  ([here](http://www.nomachine.com/select-package.php?os=linux&id=1)),
  which is free-as-in-beer but supports only 2 simultaneous sessions.
- [FreeNX](http://freenx.berlios.de/), made from the components that
  NoMachine GPLed. It's open source, but apparently is a total mess and
  notoriously hard to set up. However, it doesn't limit you to two
  sessions, as far as I know.
- [neatX](http://code.google.com/p/neatx/), implemented from scratch
  in Python/bash/C by Google for some internal project because
  apparently FreeNX was just too much of a mess. Like FreeNX, it lacks
  the two-session limitation; however, it doesn't handle VNC or RDP,
  only X11.

NoMachine's server was a cinch to set up (at least on Fedora). The
only thing I remember having to do is put my local hostname (idiotbox)
in `/etc/hosts`. Performance was very good (though I haven't tried
RDP or VNC over a slower link yet - only a LAN with VirtualBox's
built-in RDP server).

neatX was a bit tougher to set up, primarily because the documentation
I saw was very sparse. This
[blog post](http://people.binf.ku.dk/~hanne/b2evolution/blogs/index.php/2009/09/01/neatx-is-the-new-black)
was helpful. It advised making sure you could log in
with SSH manually before checking anything else, which gave me a
starting point for my problems.
I took these notes on how I made it work:

- Install all of the dependencies it says. ALL OF THEM!
- Follow the other instructions in `INSTALL`.
- Go to `/usr/local/lib/neatx` and run `./nxserver-login`. If it
  looks like this, you're probably good:

  ```bash
  [hodapp@idiotbox neatx]$ ./nxserver-login
  HELLO NXSERVER - Version 3.3.0 - GPL
  NX> 105
  ```

  If not, you may need to install some dependencies or check the paths
  of some things. If it complains about not being able to import
  `neatx.app`, add something like this to the top of `nxserver-login`
  (replacing that path with your own if needed, of course):

  ```python
  import sys
  sys.path.append("/usr/local/lib/python2.6/site-packages")
  ```

- Set up password-less login for user `nx` using something like
  `ssh-keygen -t rsa`, putting the private & public keys someplace
  easy to find. Check that this works properly from another host
  (i.e. put the public key in the server's `authorized_keys` file in
  `~nx/.ssh`, copy the private key to the client, and use `ssh -i
  blahblahprivatekey nx@server` there to log in). It should look
  something like this:

  ```bash
  chris@momentum:~$ ssh -i nx.key nx@10.1.1.40
  Last login: Sun Oct 11 13:11:49 2009 from 10.1.1.20
  HELLO NXSERVER - Version 3.3.0 - GPL
  NX> 105
  ```

  If it asks for a password, something's wrong.

If it terminates the connection immediately, SSH is probably okay, but
something server-side with neatX is still messed up. The SSH logs can
sometimes tell you something.
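
That key setup can be sketched like this (the paths and the `nx@server`
hostname here are placeholders, not from my actual setup):

```bash
# Generate a passphrase-less RSA keypair for the nx login
rm -f /tmp/nx.key /tmp/nx.key.pub
ssh-keygen -q -t rsa -N "" -f /tmp/nx.key

# On the server: append the public key for user nx
# cat /tmp/nx.key.pub >> ~nx/.ssh/authorized_keys

# On the client: copy /tmp/nx.key over, then test the login
# ssh -i /tmp/nx.key nx@server
```
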
Once I'd done all this, neatX worked properly. However, I had some
issues with it - for instance, sometimes the entire session quit
accepting mouse clicks, certain windows quit accepting keyboard input,
or things would turn very sluggish at random. But for the most part it
worked well.

After setting up the SSH stuff, the FreeNX server worked okay from
Fedora's packages after some minor hackery (i.e. setting the login
shell for user `nx` to `/usr/libexec/nx/nxserver`). I haven't yet had a
chance to test it over a slow link, whether with X11 or RDP or VNC,
but it worked in a LAN just fine. Someone in the IRC channel on
FreeNode assures me that it runs flawlessly over a 256-kilobit link.

Then, for some reason I really don't remember, I decided I wanted to
run all three servers at once on the same computer. As far as I know,
all of the NX clients log in to the server initially by passing a
private key for user `nx`. The server then runs the login shell
set in `/etc/passwd` for `nx` - so I guess that shell determines
which NX server handles the session.

So, amidst a large pile of bad ideas, I finally came up with this
workable idea for making the servers coexist: I would set the login
shell to a wrapper script which would choose the NX server to then
run. The only data I could think of that the NX client could pass to
the server were the port number and the private key, and this wrapper
script would somehow have to get this data.

Utilizing the port number would probably involve hacking around with
custom firewall rules or starting multiple SSH servers, so I opted to
avoid this method. It turns out that if you set `LogLevel` to `VERBOSE`
in `sshd_config` (at least in my version), it'll log lines like this
after every login from the NX client:

```
Oct 14 18:11:33 idiotbox sshd[15681]: Found matching DSA key: fd:e9:5d:24:59:3c:3c:35:c5:29:74:ef:6d:92:3c:e4
```

You can get that key fingerprint with `ssh-keygen -lf foo.pub`, where
`foo.pub` is the public key.
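
For example (note that newer OpenSSH versions print a SHA256
fingerprint by default; there you'd add `-E md5` to get the
colon-separated form that appears in the log):

```bash
# Make a throwaway key and print the fingerprint of its public half
rm -f /tmp/demo.key /tmp/demo.key.pub
ssh-keygen -q -t rsa -N "" -f /tmp/demo.key
ssh-keygen -lf /tmp/demo.key.pub
```
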
So I generated 3 keys (one each for neatX, NoMachine's server, and
FreeNX), added them all to `authorized_keys`, found the fingerprints,
and ended up with a script that was something like this:

```bash
#!/bin/sh
FINGERPRINT=$(grep "Found matching RSA key" /var/log/secure |
    tail -n 1 | egrep -o "(..:){15}..")
if [ "$FINGERPRINT" = "26:dd:67:82:c1:2d:cc:c0:c6:13:ac:d4:49:0e:79:a3" ]; then
    SERVER="/usr/local/lib/neatx/nxserver-login-wrapper"
elif [ "$FINGERPRINT" = "35:fb:bd:45:c5:71:91:ce:d6:d9:7f:0b:dc:84:f4:b3" ]; then
    SERVER="/usr/NX/bin/nxserver"
elif [ "$FINGERPRINT" = "b5:d7:a5:18:0d:c4:fa:18:19:58:20:00:1d:3b:3c:84" ]; then
    SERVER="/usr/libexec/nx/nxserver"
fi
$SERVER
```
I saved this someplace, set it executable, and set the login shell for
`nx` in `/etc/passwd` to point to it. Make sure the home directory
points someplace sensible too, as the install scripts for some NX
servers are liable to point it somewhere else. But as far as I can
tell, the only thing they use the home directory for is the `.ssh`
directory, and all the other data they save is in locations that do
not conflict. So I copied the three public keys to the client and
manually did `ssh -i blah.key nx@whatever` with each key.

```bash
chris@momentum:~$ ssh -i freenx-key nx@10.1.1.40
HELLO NXSERVER - Version 3.2.0-74-SVN OS (GPL, using backend: 3.3.0)
NX> 105
chris@momentum:~$ ssh -i neatx-key nx@10.1.1.40
HELLO NXSERVER - Version 3.3.0 - GPL
NX> 105
chris@momentum:~$ ssh -i nomachine-key nx@10.1.1.40
HELLO NXSERVER - Version 3.4.0-8 - LFE
NX> 105
```

The different version in each reply was a good sign, so I tried the
same keys in the client, and stuff indeed worked (at least according
to my totally non-rigorous testing). Time will tell whether or not I
completely overlooked some important details or interference.

---
title: "Processing: DLA, quadtrees"
date: July 4, 2010
author: Chris Hodapp
tags:
- processing
---

I first dabbled with
[Diffusion-Limited Aggregation](http://en.wikipedia.org/wiki/Diffusion-limited_aggregation)
algorithms some 5 years back when I read about them in a book (later
note: that book was
[Nexus: Small Worlds and the Groundbreaking Theory of Networks](http://www.amazon.com/Nexus-Worlds-Groundbreaking-Science-Networks/dp/0393324427?ie=UTF8&*Version*=1&*entries*=0)). The
version I wrote was monumentally slow because it was a crappy
implementation, in a language ill-suited to heavy computation
(i.e. Python), but it worked well enough to create some good results
like this:

[{width=50%}](../images/dla2c.png)\

After about 3 or 4 failed attempts to optimize this program so it
didn't take days to generate images, I finally rewrote it reasonably
successfully in [Processing](http://processing.org/), which I've taken
a great liking to recently. I say "reasonably successfully" because it
still has some bugs and because I can't seem to tune it to produce
lightning-like images like the one above, just much denser
ones. Annoyingly, I did not keep any notes about how I made that
image, so I have only a vague idea. It was from the summer of 2005, in
which I coded eleventy billion really cool little generative art
programs but took very sparse notes about how I made them.
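
For anyone curious, the core idea fits in a few lines of Python. This
is a minimal sketch of on-grid DLA (not my original program, and the
parameters are arbitrary): walkers are released from the edge of the
grid and freeze as soon as they touch the growing cluster.

```python
import random

def dla(size=41, particles=80, seed=1):
    """Grow a diffusion-limited aggregate on a size x size grid."""
    random.seed(seed)
    grid = [[False] * size for _ in range(size)]
    grid[size // 2][size // 2] = True  # seed particle at the center
    for _ in range(particles):
        # Release a walker from a random cell on the grid's edge.
        if random.random() < 0.5:
            x, y = random.choice((0, size - 1)), random.randrange(size)
        else:
            x, y = random.randrange(size), random.choice((0, size - 1))
        while True:
            # Freeze the walker if any 4-neighbor is part of the cluster.
            neighbors = ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
            if any(grid[nx][ny] for nx, ny in neighbors
                   if 0 <= nx < size and 0 <= ny < size):
                grid[x][y] = True
                break
            # Otherwise take one random step, clamped to the grid.
            dx, dy = random.choice(((-1, 0), (1, 0), (0, -1), (0, 1)))
            x = min(max(x + dx, 0), size - 1)
            y = min(max(y + dy, 0), size - 1)
    return grid

if __name__ == "__main__":
    for row in dla():
        print("".join("#" if cell else "." for cell in row))
```

Printing the grid as ASCII already shows the familiar branchy blob;
the usual speed trick for bigger grids is releasing walkers from a
circle just outside the cluster instead of the far edge.
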
It was only a few hours of coding total. Part of why I like Processing
is the triviality of adding interactivity to something, which I did
repeatedly in order to test that the various building blocks of the
DLA implementation were working properly.

The actual DLA applet is at
[http://openprocessing.org/visuals/?visualID=10799](http://openprocessing.org/visuals/?visualID=10799). Click
around inside it; right-click to reset it. The various building blocks
that were put together to make this are
[here](http://openprocessing.org/visuals/?visualID=10794),
[here](http://openprocessing.org/visuals/?visualID=10795),
[here](http://openprocessing.org/visuals/?visualID=10796),
[here](http://openprocessing.org/visuals/?visualID=10797), and
[here](http://openprocessing.org/visuals/?visualID=10798).

These are at OpenProcessing mostly because I don't know how to embed a
Processing applet in Wordpress; perhaps it's better that I don't,
since this one is a CPU hog. (*Later note:* I wonder if I can just
host these examples inline using
[Processing.js](http://processingjs.org/)...)

This blog also has an entire gallery of generative art with Processing
that I think is great:
[http://myecurve.wordpress.com/](http://myecurve.wordpress.com/)

---
title: Blender from a recovering POV-Ray user
date: February 7, 2011
author: Chris Hodapp
tags:
- CG
- blender
---

This is about the tenth time I've tried to learn
[Blender](http://www.blender.org/). Judging by the notes I've
accumulated so far, I've been at it this time for about a month and a
half. From what I remember, what spurred me to try this time was
either known Blender guru Craig from [Hive13](http://www.hive13.org/)
mentioning
[Voodoo Camera Tracker](http://www.digilab.uni-hannover.de/docs/manual.html)
(which can output to a Blender-readable format), or my search for
something that would make it easier to do the 2D visualizations and
algorithmic art I always end up doing (and I'd heard Blender had some
crazy node-based texturing system...).

Having a goal for what I want to render has been working out much
better than just trying to learn the program and hoping the
inspiration falls into place (which is what all of my previous
attempts appear to have involved). This really has nothing to do with
Blender specifically; it applies to anything suitably complex and
powerful. In the past few years I have had this dumb tendency to try
to learn all of the little details of a system without first having a
motivation to use them, despite this being completely at odds with
nearly all the things I consider myself to have learned well. I'm
seeing pretty clearly how backwards that approach is, for me at least.

I took a lot of notes early on in which I tried to map out a lot of
its features at a very high level, but most of this simply didn't
matter - what mattered mostly fell into place when I actually tried to
make something in Blender. However, knowing some of the fundamental
limitations and capabilities did help.

The interface is quirky for sure, but I am finding it pretty intuitive
after some practice. Most of my issues came from the big UI overhaul
after 2.4: I'm currently using 2.55/2.56, but many of the tutorials
refer to the old version, and even official documentation for 2.5 is
sometimes nonexistent - but can I really complain? They note pretty
clearly that it is still in beta.

However, I'm starting to make sense of it. Visions and concepts that I
previously felt I had no idea how to even approach in Blender are
suddenly starting to feel easy, or at least straightforward (more
specifically: many things became trivial once I knew my way around
Bezier splines). This is good, because I've got pages and pages of
ideas waiting to be made. Some look like they'll be more suited to
[Processing](http://processing.org/) (like the 2nd image below) or
[OpenFrameworks](http://www.openframeworks.cc/) or one of the
too-many-completely-different versions of Acidity I wrote.

[{width=100%}](../images/hive13-bezier03.png)

[{width=100%}](../images/20110118-sketch_mj2011016e.jpg)

[POV-Ray](http://www.povray.org) was the last program I 3D-rendered
extensively in (this was mostly 2004-2005, as my much-neglected
[DeviantArt](http://mershell.deviantart.com/) shows, and it probably
stress-tested the Athlon64 in the first new machine I built more than
any other program did). It's about as different from Blender as
possible. POV-Ray makes it easy to do clean, elegant, mathematical
things, many of which would be either impossible or extremely ugly in
Blender. It's a raytracer; it deals with neat, clean analytic
surfaces, and tons of other things come for free (speed is not one of
them). However, I never really found a modeler for POV-Ray that could
integrate well with the full spectrum of features the language
offered, and a lot of things just felt really kludgey. Seeing almost
no progress made on the program, and being too lazy to look into
[MegaPOV](http://megapov.inetart.net/), I decided to give up on it at
some point. My attempts to learn something that implemented RenderMan
resulted mostly in me seeing how ingeniously optimized and streamlined
RenderMan is, and not actually making anything in it.

Blender feels really "impure" in comparison. It deals with ugly things
like triangle meshes and scanline rendering... ugly things that make
many tasks vastly more efficient to accomplish. I'm quickly finding
better replacements for a lot of the techniques I relied on with
POV-Ray. For instance, for many repetitive or recursive structures, I
would rely on some simple looping or recursion in POV-Ray (as its
scene language is Turing-complete); this worked fairly well, but it
also meant that no modeler I tried would be able to grok the scene. In
Blender, I discovered the Array modifier; while it's much simpler, it
is still very powerful. On top of this, I still have the interactivity
of the modeler. I've built some things interactively with all the
precision that I would have had in POV-Ray, but I built them in
probably 1/10 the time. That's the case for the two work-in-progress
Blender images here:

[{width=100%}](../images/20110131-mj20110114b.jpg)

[{width=100%}](../images/20110205-mj20110202-starburst2.jpg)

---
title: I can never win that context back
date: June 10, 2011
author: Chris Hodapp
tags:
- Journal
- rant
---

I stumbled upon this:
[http://www.soyoucode.com/2011/coding-giant-under-microscope-farbrausch](http://www.soyoucode.com/2011/coding-giant-under-microscope-farbrausch)
. . . and promptly fell in love with the demos there from Farbrausch:

[.the .product](http://www.youtube.com/watch?v=3ydAHt78v2M)

[.debris](http://www.youtube.com/watch?v=rBNZ9JiFCKU)

[.kkrieger](http://www.youtube.com/watch?v=3aV1kzS5FtA)

[Magellan](http://www.youtube.com/watch?v=00SdDZyWSEs)

That melding of music and animated 3D graphics grabs hold of me like
nothing else. I don't really know why.

The fact that it's done in such a small space (e.g. 64 KB for the
first one) makes it more impressive, of course. Maybe it's a sad
reflection on just how formulaic the things I like are, if they can be
encoded that small (although that ignores just how much is present in
addition, in the CPU and the GPU and the OS and the drivers and in the
design of the computer), but I don't much care - formulas encode
patterns of sorts, and we're pattern-matching machines.

But even leaving aside the huge programming feat of making all this
fit in such a small space, I still find it really impressive.

It's been a goal for a while to make something on that scale
(highly-compressed demo or not, I don't much care). I've just not made
much progress toward accomplishing that. My early attempts at Acidity
were motivated by the same feelings that draw me to things like this.

(Obligatory
[Second Reality](http://www.youtube.com/watch?v=8G_aUxbbqWU) as
well. Maybe I am putting myself too much in the context it came from -
i.e. 1993 and rather slow DOS machines - but I still think it's damn
impressive. Incidentally, this is also one of the only demos I've run
on real hardware, since apparently the only fast machine I have that
runs Windows is my work computer.)

hugo_blag/content/posts/2011-06-13-openframeworks-try-1.md

---
title: OpenFrameworks, try 1...
date: June 13, 2011
author: Chris Hodapp
tags:
- Technobabble
- rant
---

My attempts at doing things with
[OpenFrameworks](http://openframeworks.cc/) on MacOS X have been
mildly disastrous. This is a bit of a shame, because I was really
starting to like OpenFrameworks, and it was not tough to pick up after
being familiar with [Processing](https://processing.org/).

I'm pretty new to XCode, but it's the "official" environment for
OpenFrameworks on OS X, so it's the first thing I tried. The first few
attempts at things (whether built-in examples or my own code) went
just fine, but today I started trying some things that were a little
more complex - i.e. saving the last 30 frames from the camera and
using them for some filtering operations. My code probably had some
mistakes in it, I'm sure, and that's to be expected. The part where
things became incredibly stupid was somewhere around when the mistakes
caused the combination of XCode, GDB, and OpenFrameworks to hose the
system in various ways.

First, it was the Dock taking between 15 and 30 seconds to respond
just so I could force-quit the application. Then it was the debugger
taking several seconds to do 100 iterations of a loop that had nothing
more than an array member assignment inside of it (and it had
640x480x3 = 921,600 iterations total) if I tried to set breakpoints,
thus basically making interactive debugging impossible. The debugging
was already a pain in the ass; I had reduced some code down to
something like this:

```cpp
int size = cam_width * cam_height * 3;
for(int i = 0; i < frame_count; ++i) {
    unsigned char * blah = new unsigned char[size];
    for(int j = 0; j < size; ++j) blah[j] = 0;
}
```

...after a nearly identical `memset` call was smashing the stack and
setting `frame_count` to a value in the billions, so I was really
getting quite frazzled at this.

Running it a few minutes ago without breakpoints enabled led to a
bunch of extreme sluggishness, then flickering and flashing on the
monitor, and I was not able to interact with anything in the GUI
(which was the 3rd or 4th time this had happened today, counting all
the Code::Blocks nonsense below). I SSHed in from another machine and
killed XCode, but the monitor just continued to show the same image,
and it appeared that the GUI was completely unresponsive except for
the mouse cursor. I had to hold the power button to reboot, and saw
this in the Console but nothing else clear before it:

```
6/13/11 1:11:19 AM [0x0-0x24024].com.google.Chrome[295] [463:24587:11560062687119:ERROR:gpu_watchdog_thread.cc(236)] The GPU process hung. Terminating after 10000 ms.
```

A little before trying XCode for a 2nd time, I had also attempted to
set up Code::Blocks, since it's OpenFrameworks' "official" IDE for
Linux and Windows and XCode was clearly having problems. First I
painstakingly built it from an SVN copy and finally got it to run (I
had to disable the FileManager and NassiShneiderman plugins, which
would not build, and make sure it was building for the same
architecture wxWidgets was built for). As soon as I tried to quit it,
the Dock became totally unresponsive, then Finder itself followed,
along with the menu bar for the whole system. I was not able to SSH
in. Despite the system otherwise seeming mostly responsive, I had to
hard reset. I found a few things in the console:

```
6/12/11 9:43:54 PM com.apple.launchd[1] (com.apple.coreservicesd[66]) Job appears to have crashed: Segmentation fault
6/12/11 9:43:54 PM com.apple.audio.coreaudiod[163] coreaudiod: CarbonCore.framework: coreservicesd process died; attempting to reconnect but future use may result in erroneous behavior
6/12/11 9:43:55 PM com.apple.ReportCrash.Root[18181] 2011-06-12 21:43:55.011 ReportCrash[18181:2803] Saved crash report for coreservicesd[66] version ??? (???) to /Library/Logs/DiagnosticReports/coreservicesd_2011-06-12-214355_localhost.crash
6/12/11 9:44:26 PM com.apple.Dock.agent[173] Sun Jun 12 21:44:26 hodapple2.local Dock[173] Error: kCGErrorIllegalArgument: CGSSetWindowTransformsAtPlacement: Singular matrix at index 2: [0.000 0.000 0.000 0.000]
```

It started up properly after a reset, but I couldn't do anything
useful with it: despite there being a script that was supposed to take
care of this while building the bundle, the application was not able
to see any of its plugins, which included the compiler plugin. I tried
a binary OS X release, which had a functioning set of plugins but was
missing other dependencies set in the projects, which were
Linux-specific. I could probably put together a working configuration
if I worked at Code::Blocks a bit, but I have not tried yet.

This is all incredibly annoying. There is no reason a user process
should be capable of taking down the whole system like this,
especially inside of a debugger, yet apparently it's pretty trivial to
make this happen. I've written more than enough horrible code in
various different environments (CUDA-GDB on a Tesla C1060, perhaps?)
to know what to expect. I guess I can try developing on Linux instead,
and/or using Processing. I know it's not quite the same, but at least
I've never had a Processing sketch hose the whole system.

*Later addition (2011-06-20, but not written here until November because I'd buried the notes somewhere):*

I attempted to make an OpenFrameworks project build with Qt Creator
(which of course uses
[QMake](http://doc.qt.nokia.com/latest/qmake-manual.html)).
OpenFrameworks relies on QuickTime, and as it happens, QuickTime is
32-bit only. If you take a look at some of the headers, the majority
of it is just #ifdef'ed away if you try to build 64-bit, and this
completely breaks the OpenFrameworks build.

Ordinarily, this would not be an issue, as I would just do a 32-bit
build of everything else too. However, QMake refuses to do a 32-bit
build on OS X for some unknown reason (and, yes, I talked to some Qt
devs about this). It'll gladly do it on most other platforms, but not
on OS X. Now, GCC has no problems building 32-bit, but this does no
good when QMake keeps adding `-arch x86_64` no matter what. I
attempted all sorts of options, such as `CONFIG += x86`, `CONFIG -=
x86_64`, `QMAKE_CXXFLAGS -= -arch x86_64`, `QMAKE_CXXFLAGS += -m32`,
and `QMAKE_CXXFLAGS += -arch i386`... but none of them to any avail.

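
For the record, this is roughly what those attempts looked like in the
`.pro` file - a sketch of the failed attempts, not a working fix:

```
# Attempts to force a 32-bit build on OS X - none of these stopped
# QMake from appending -arch x86_64 for me
CONFIG += x86
CONFIG -= x86_64
QMAKE_CXXFLAGS -= -arch x86_64
QMAKE_CXXFLAGS += -m32
QMAKE_LFLAGS += -m32
```
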
---
title: My experiences with Apache Axis2/C
date: July 15, 2011
tags:
- Project
- rant
- Technobabble
author: Chris Hodapp
---

(This is an abridged version of a report I did at my job; I might post
a copy of it once I remove anything that might be considered
proprietary.)

I was tasked at my job with looking at ways of doing web services in
our main application (which for an upcoming delivery is to be
separated out into client and server portions). Said application is
written primarily in C++, so naturally our first look was into
frameworks written for C or C++, so that we would not need to bother
with language bindings, foreign function interfaces, porting, new
runtimes, or anything of the sort.

Our search led us to
[Apache Axis2/C](http://axis.apache.org/axis2/c/core/). We'd examined
this last year at a basic level and found that it looked suitable. Its
primary intended purpose was as the framework that the client and
server communicated over in order to transfer our various DTOs; that
it worked over SOAP and handled most HTTP details (so it appeared) was
a bonus.

I discovered, after investing considerable effort, that we were quite
wrong about Axis2/C. I'll enumerate a partial list of issues here:

- **Lack of support:** There was a distinct lack of good information
  online. I could find no real record of anyone using this framework
  in production anywhere. Mailing lists and message forums seemed
  nonexistent. I found a number of articles that were often pretty
  well-written, but almost invariably by WSO2 employees.
- **Largely stagnant development:** The last update was in 2009. In
  and of itself this is not a critical issue, but combined with its
  extensive list of unsolved bugs and a very dense, undocumented code
  base, it is unacceptable.
- **Lack of documentation:** Some documentation is online, but the
  vast majority of the extensive API lacks any documentation, whether
  a formal reference or a set of examples. The most troubling aspect
  of this is that not even the developers of Axis2/C seemed to
  comprehend its memory management (and indeed our own tests showed
  some extensive memory leaks).
- **Large set of outstanding bugs:** When I encountered the
  bug-tracking website for Axis2/C (which I seem to have lost the link
  for), I discovered a multitude of troubling bugs. Most of them
  pertain to unfixed memory leaks (for code that will be running
  inside of a web server, this is really not good). On top of this, a
  2-year-old unfixed bug had broken the functionality for binary MTOM
  transfers if you had enabled libcurl support, and this feature was
  rather essential to the application.
- **Necessity of repetitive code:** It lacked any production-ready
  means to automatically generate code for turning native C/C++
  objects to and from SOAP. While it had WSDL2C, this still left
  considerable repetitive work for the programmer (in many cases
  causing more work rather than less), and its generated code was very
  ambiguous as to its memory-management habits.
- **Limited web-server support:** Axis2/C provided modules only for
  working with three web servers: Apache HTTPD, Microsoft IIS, and
  their built-in test server, *axis2_http_server*. Our intended target
  was Microsoft IIS, and the support for IIS was considerably less
  developed than the support for Apache HTTPD. To be honest, though,
  most of my woes here came from Microsoft - and the somewhat pathetic
  functionality for logging and configuration that IIS has. I'm sorry
  for anyone who loves IIS, but I should not be required to *manually
  search through a dump of Windows system calls* to determine that the
  reason IIS is silently failing is that I gave a 64-bit pool a 32-bit
  DLL, or that said DLL has unmet dependencies. Whether it's Axis2/C's
  fault or IIS's fault that the ISAPI DLL managed to either take IIS
  down or leave it in an indeterminate state no less than a hundred
  times doesn't much matter to me. *(However, on the upside, I did
  learn that
  [Process Monitor](http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx)
  from Sysinternals can be very useful in cases where you otherwise
  have no real source of diagnostic information. This was not the
  first time I had to dump system calls to diagnose an Axis2/C
  problem.)*
- **Poor performance:** Even the examples provided in the Axis2/C
  source code itself had a tendency to fail to work properly.
  - Their MTOM transfer example failed to work at all with Microsoft
    IIS and had horrid performance with Apache HTTPD.
  - On top of this, the default configuration of Apache Axis2/C opens
    up a new TCP connection for every single request that is
    initiated. Each TCP connection, of course, occupies a port on the
    client side. On Windows, something like 240 seconds (by default)
    must pass after that connection closes before the port may be
    reused; on Linux, it's 60 seconds. There are 16384 ports available
    for this purpose. The practical result of this: *a client with the
    default configuration of Axis2/C cannot sustain more than 68
    requests per second on Windows or 273 requests per second on
    Linux.* If you exceed that rate, it will simply start failing. How
    did I eventually figure this out? By reading documentation
    carefully? By looking at an API reference? By looking at comments
    in the source code? No, *by looking at a packet dump in
    Wireshark,* which pointed out to me the steadily increasing port
    numbers and flagged that ports were being reused unexpectedly. I
    later found out that I needed to compile Axis2/C with libcurl
    support, and then it would use a persistent HTTP connection (and
    also completely break MTOM support because of that unfixed bug I
    mentioned). None of this was documented anywhere, unless a cryptic
    mailing-list message from years ago counts.
|
||||
|
||||
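Those two rate limits follow from simple division. As a sanity check of my own (not anything Axis2/C-specific), using the port-reuse delays mentioned above:

```python
# Back-of-the-envelope check of the port-exhaustion figures: with P
# client ports available and a wait of W seconds before a closed port
# may be reused, one-connection-per-request traffic can sustain at
# most P / W requests per second.
EPHEMERAL_PORTS = 16384

for os_name, reuse_wait_s in [("Windows", 240), ("Linux", 60)]:
    max_rate = EPHEMERAL_PORTS // reuse_wait_s
    print(f"{os_name}: at most {max_rate} requests/second")
```

This reproduces the 68 requests/second (Windows) and 273 requests/second (Linux) ceilings.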
So, I'm sorry, esteemed employees of [WSO2](http://wso2.org/), but to
claim that Apache Axis2/C is enterprise-ready is a horrid mockery of
the term.

This concluded about 2 weeks of work on the matter. In approximately 6
hours (and I'll add that my starting point was knowing nothing about
the Java technologies), I had a nearly identical version using Java
web services (JAX-WS in particular) that performed on the order of
twice as fast and with none of the memory-leak or stability issues.

P.S. Is it unique to Windows-related forums that the pattern of
support frequently goes like this?

- Me: This software is messed up. It's not behaving as it should.
- Them: It's not messed up; it works for me. You are just too dumb to
  use it. Try pressing this button, and it will work.
- Me: Okay, I pressed it. It's not working.
- Them: Oh. Your software is messed up. You should fix it.

126
hugo_blag/content/posts/2011-08-27-isolated-pixel-pushing.md
Normal file
@@ -0,0 +1,126 @@
---
title: Isolated-pixel-pushing
date: August 27, 2011
author: Chris Hodapp
tags:
- CG
- Project
- Technobabble
---

After finally deciding to look around for some projects on GitHub, I
found a number of very interesting ones in a matter of minutes.

I found
[Fragmentarium](http://syntopia.github.com/Fragmentarium/index.html)
first. This program is like something I tried for years and years to
write, but just never got around to putting in any real finished
form. It can act as a simple testbench for GLSL fragment shaders,
which I'd already realized could be used to do exactly what I was
doing more slowly in [Processing](http://processing.org/), much more
slowly in Python (stuff like
[this](http://mershell.deviantart.com/gallery/#/dckzex) if we want to
dig up things from 6 years ago), much more clunkily in C and
[OpenFrameworks](http://www.openframeworks.cc/), and so on. It took me
probably about 30 minutes to put together the code to generate the
usual gaudy test pattern I try when bootstrapping in a new
environment:

[![](../images/acidity-standard.png){width=100%}](../images/acidity-standard.png)

(Yeah, it's gaudy. But when you see it animated, it's amazingly trippy
and mesmerizing.)

The use I'm talking about (and that I've reimplemented a dozen times)
was just writing functions that map the 2D plane to some colorspace,
often with some spatial continuity. Typically I'll have some other
parameters in there that I'll bind to a time variable or some user
control to animate things. So far I don't know any particular term
that encompasses functions like this, but I know people have used them
in different forms for a long while. They're the basis of procedural
texturing (as pioneered in
[An image synthesizer](http://portal.acm.org/citation.cfm?id=325247)
by Ken Perlin) as implemented in countless different forms like
[Nvidia Cg](http://developer.nvidia.com/cg-toolkit), GLSL, probably
RenderMan Shading Language, RTSL, POV-Ray's extensive texturing, and
Blender's node texturing system (which I'm sure took after a dozen
other similar systems).
[Adobe Pixel Bender](http://www.adobe.com/devnet/pixelbender.html),
which the Fragmentarium page introduced to me for the first time, does
something pretty similar but to different ends. Some systems such as
[Vvvv](http://www.vvvv.org/) and
[Quartz Composer](http://developer.apple.com/graphicsimaging/quartz/quartzcomposer.html)
probably permit some similar operations; I don't know for sure.

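A minimal sketch of the kind of function I mean, in plain Python (my own illustration, not tied to any of the tools above): every output pixel is a pure function of its normalized coordinates plus a time parameter, with no dependence on its neighbors.

```python
import math

def pixel_field(width, height, t):
    # Each pixel's color is a pure function of its normalized (x, y)
    # coordinate and the time parameter t - no pixel reads any other.
    frame = []
    for py in range(height):
        row = []
        for px in range(width):
            x, y = px / width, py / height
            r = 0.5 + 0.5 * math.sin(10.0 * x + t)
            g = 0.5 + 0.5 * math.sin(12.0 * y - t)
            b = 0.5 + 0.5 * math.sin(8.0 * (x + y) + 2.0 * t)
            row.append((r, g, b))
        frame.append(row)
    return frame

frame = pixel_field(64, 64, t=0.0)  # animate by sweeping t over time
print(len(frame), len(frame[0]))    # 64 64
```

Because every pixel is independent, exactly this shape of computation is what a GLSL fragment shader evaluates in parallel on the GPU.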
The benefits of representing a texture (or whatever image) as an
algorithm rather than a raster image are pretty well-known: it's a
much smaller representation, it scales pretty well to 3 or more
dimensions (particularly with noise functions like Perlin noise or
simplex noise), it can have a near-unlimited level of detail, it makes
things like seams and antialiasing much less of an issue, and it is
almost the ideal case for parallel computation, for which modern
graphics hardware has built-in support (e.g. GLSL, Cg, and to some
extent OpenCL). The drawback is that you usually have to find some way
to represent this as a function in which each pixel or texel (or
voxel?) is computed in isolation from all the others. This might be
clumsy, it might be horrendously slow, or it might not have any good
representation in this form.

Also, once it's an algorithm, you can *parametrize it*. If you can
make it render near realtime, then animation and realtime user control
follow almost for free, but even without that, you still have a lot of
flexibility when you can change parameters.

The only thing different (and debatably so) that I'm doing is trying
to make compositions with just the functions themselves, rather than
using them as a means to a different end like video processing effects
or texturing in a 3D scene. It also fascinated me to see these same
functions animated in realtime.

However, the author of Fragmentarium (Mikael Hvidtfeldt Christensen)
is doing much more interesting things with the program (i.e. rendering
3D fractals with distance estimation) than I would ever have
considered doing. It makes sense why: his work emerged more from the
context of fractals and ray tracers on the GPU, like
[Amazing Boxplorer](http://sourceforge.net/projects/boxplorer/), and
fractals tend to make for very interesting results.

His [Syntopia Blog](http://blog.hvidtfeldts.net/) has some fascinating
material and beautiful renders on it. His posts on
[Distance Estimated 3D Fractals](http://blog.hvidtfeldts.net/index.php/2011/08/distance-estimated-3d-fractals-iii-folding-space/)
were particularly interesting to me - in part because this was the
first time I had encountered the technique of distance estimation for
rendering a scene. He gave a good introduction with lots of other
material to refer to.

Distance estimation blows my mind a little when I try to understand
it. I have a decent high-level understanding of ray tracing, but this
is not ray tracing - it's ray marching. It lets complexity be emergent
rather than needing an explicit representation as a scanline renderer
or ray tracer might require (while ray tracers will gladly take a
functional representation of many geometric primitives, I have
encountered very few cases where something like a complex fractal or
an isosurface could be rendered without first approximating it as a
mesh or some other shape, sometimes at great cost). Part 1 of Mikael's
series on Distance Estimated 3D Fractals links to
[these slides](http://www.iquilezles.org/www/material/nvscene2008/rwwtt.pdf)
which show a 4K demo built piece-by-piece using distance estimation to
render a pretty complex scene.

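The core loop of the technique is small enough to sketch. This is my own minimal illustration of sphere tracing (ray marching driven by a distance estimator), with a plain unit sphere standing in for the scene; real renderers substitute a fractal distance estimator and add shading:

```python
import math

def scene_de(x, y, z):
    # Distance estimator for the scene: here, signed distance to a
    # unit sphere at the origin. Fractal renderers swap in a fancier
    # estimator without changing the marching loop at all.
    return math.sqrt(x * x + y * y + z * z) - 1.0

def march(ox, oy, oz, dx, dy, dz, max_steps=128, eps=1e-4, max_dist=100.0):
    # Sphere tracing: the distance estimate is a safe step size, so we
    # can jump that far along the ray without passing through anything.
    t = 0.0
    for _ in range(max_steps):
        d = scene_de(ox + t * dx, oy + t * dy, oz + t * dz)
        if d < eps:
            return t      # hit: parameter distance along the ray
        t += d
        if t > max_dist:
            break
    return None           # miss

# A ray from (0, 0, -3) aimed straight at the sphere hits at distance 2.
print(march(0.0, 0.0, -3.0, 0.0, 0.0, 1.0))
```

The complexity of the image is entirely emergent from `scene_de`; the marcher never needs a mesh or any explicit surface representation.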
*(Later addition:
[This link](http://www.mazapan.se/news/2010/07/15/gpu-ray-marching-with-distance-fields/)
covers ray marching for some less fractalian uses. "Hypertexture" by
Ken Perlin gives some useful information too, more technical in
nature; finding this paper is up to you. Consult your favorite
university?)*

He has another rather different program called
[Structure Synth](http://structuresynth.sourceforge.net/) which he
made following the same "design grammar" approach of
[Context Free](http://www.contextfreeart.org/). I haven't used
Structure Synth yet, because Context Free was also new to me and I was
first spending some time learning to use that. I'll cover this in
another post.

*(Even later note: With [Shadertoy](https://www.shadertoy.com/) some
folks have implemented the same in WebGL.)*

162
hugo_blag/content/posts/2011-08-29-context-free.md
Normal file
@@ -0,0 +1,162 @@
---
title: Context Free
date: August 29, 2011
author: Chris Hodapp
tags:
- CG
- Project
- Technobabble
---

My [last post](./2011-08-27-isolated-pixel-pushing.html) mentioned a
program called [Context Free](http://www.contextfreeart.org/) that I
came across via the [Syntopia](http://blog.hvidtfeldts.net/) blog, as
his program [Structure Synth](http://structuresynth.sourceforge.net/)
was modeled after it.

I've heard of
[context-free grammars](http://en.wikipedia.org/wiki/Context-free_grammar)
before, but my understanding of them is pretty vague. This program is
based around them, and the documentation spells out their
[limitations](http://www.contextfreeart.org/mediawiki/index.php/Context_Free_cans_and_cannots);
what I grasped from this is that no entity can have any "awareness" of
the context in which it's drawn, i.e. any part of the rest of the
scene or even where in the scene it is. A perusal of the site's
[gallery](http://www.contextfreeart.org/gallery/) shows how little
those limitations really matter.

I downloaded the program, started it, and their welcome image (with
the relatively short source code right beside it) greeted me, rendered
on-the-spot:

[![](../images/welcome.png){width=100%}](../images/welcome.png)

The program was very easy to work with. Their quick reference card was
terse, but it needed only a handful of examples and a few pages of
documentation to fill in the gaps. After about 15 minutes, I'd put
together this:

[![](../images/spiral-first-20110823.png){width=100%}](../images/spiral-first-20110823.png)

Sure, it's mathematical and simple, but I think being able to put it
together in 15 minutes in a general program (i.e. not a silly ad-hoc
program) that I didn't know how to use shows its potential pretty
well. The source is this:

```bash
startshape MAIN
background { b -1 }
rule MAIN {
    TRAIL { }
}
rule TRAIL {
    20 * { r 11 a -0.6 s 0.8 } COLORED { }
}
rule COLORED {
    BASE { b 0.75 sat 0.1 }
}
rule BASE {
    SQUARE1 { }
    SQUARE1 { r 90 }
    SQUARE1 { r 180 }
    SQUARE1 { r 270 }
}
rule SQUARE1 {
    SQUARE { }
    SQUARE1 { h 2 sat 0.3 x 0.93 y 0.93 r 10 s 0.93 }
}
```

I worked with it some more the next day and had some things like this:

[![](../images/tree3-abg.png){width=100%}](../images/tree3-abg.png)

[![](../images/tree4-lul.png){width=100%}](../images/tree4-lul.png)

I'm not sure what it is. It looks sort of like a tree made of
lightning. Some Hive13 people said it looks like a lockpick from
hell. The source is some variant of this:

```bash
startshape MAIN
background { b -1 }
rule MAIN {
    BRANCH { r 180 }
}
rule BRANCH 0.25 {
    box { }
    BRANCH { y -1 s 0.9 }
}
rule BRANCH 0.25 {
    box { }
    BRANCH { y -1 s 0.3 }
    BRANCH { y -1 s 0.7 r 52 }
}
rule BRANCH 0.25 {
    box { }
    BRANCH { y -1 s 0.3 }
    BRANCH { y -1 s 0.7 r -55 }
}
path box {
    LINEREL { x 0 y -1 }
    STROKE { p roundcap b 1 }
}
```

The program is very elegant in its simplicity. At the same time, it's
a really powerful program. Translating something written in Context
Free into another programming language would in most cases not be
difficult at all - you need just a handful of 2D drawing primitives, a
couple of basic operations for color space and geometry, and the
ability to recurse (and to stop recursing when it's pointless). But
that representation, though it might be capable of a lot of things
that Context Free can't do on its own, probably would be a lot
clumsier.

This is basically what some of my OpenFrameworks sketches were doing
in a much less disciplined way (although with the benefit of animation
and GPU-accelerated primitives), but I didn't realize that what I was
doing could be expressed so easily and so compactly in a context-free
grammar.

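To make that translation claim concrete, here is a rough sketch of my own (not how Context Free itself is implemented) of the branching grammar above as plain recursive code: each rule becomes a function body, each adjustment a small transform, and recursion stops once shapes get too small to matter. Context Free normalizes the three 0.25 weights, so they behave as equal thirds here.

```python
import math
import random

def branch(segments, x, y, angle, scale, min_scale=0.01):
    # Stop recursing when further shapes would be invisibly small.
    if scale < min_scale:
        return
    segments.append((x, y, angle, scale))  # stand-in for drawing one box
    # Advance the "pen" one segment in the current direction.
    nx = x + scale * math.sin(math.radians(angle))
    ny = y - scale * math.cos(math.radians(angle))
    r = random.random()
    if r < 1.0 / 3.0:                      # straight continuation
        branch(segments, nx, ny, angle, scale * 0.9)
    elif r < 2.0 / 3.0:                    # stub plus fork right
        branch(segments, nx, ny, angle, scale * 0.3)
        branch(segments, nx, ny, angle + 52, scale * 0.7)
    else:                                  # stub plus fork left
        branch(segments, nx, ny, angle, scale * 0.3)
        branch(segments, nx, ny, angle - 55, scale * 0.7)

random.seed(1)
segs = []
branch(segs, 0.0, 0.0, 0.0, 1.0)
print(len(segs), "segments")
```

It works, but compare it to the grammar: the imperative version buries the three rules and the stopping condition in control flow that Context Free expresses declaratively.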
It's appealing, though, in the same way as the functions discussed in
the last post (i.e. those for procedural texturing). It's a similarly
compact representation of an image - this time, a vector image rather
than a spatially continuous image, which has some benefits of its
own. It's an algorithm - so now it can be parametrized. (Want to see
one reason why parametrized vector things are awesome? Look at
[Magic Box](http://magic-box.org/).) And once it's parametrized,
animation and realtime user control are not far away, provided you can
render quickly enough.

*(And as
[\@codersandy](http://twitter.com/#!/codersandy/statuses/108180159194079232)
observed after reading this, [POV-Ray](http://www.povray.org/) is in
much the same category too. I'm not sure if he meant it in the same
way I do, but POV-Ray is a fully Turing-complete language and it
permits you to generate your whole scene procedurally if you wish,
which is great - but Context Free is indeed far simpler than this,
besides only being 2D. It will be interesting to see how Structure
Synth compares, given that it generates 3D scenes and has a built-in
raytracer.)*

My next step is probably to play around with
[Structure Synth](http://structuresynth.sourceforge.net/) (and like
Fragmentarium it's built with Qt, a library I actually am familiar
with). I also might try to create a JavaScript implementation of
Context Free and conquer my total ignorance of all things
JavaScript. Perhaps a realtime OpenFrameworks version is in the works
too, considering this is a wheel I already tried to reinvent once (and
badly) in OpenFrameworks.

Also in the queue to look at:

* [NodeBox](http://nodebox.net/code/index.php/Home), "a Mac OS X
  application that lets you create 2D visuals (static, animated or
  interactive) using Python programming code..."
* [jsfiddle](http://jsfiddle.net/), a sort of JavaScript/HTML/CSS
  sandbox for testing. (anarkavre showed me a neat sketch he put
  together [here](http://jsfiddle.net/anarkavre/qVVuD/))
* [Paper.js](http://paperjs.org/), "an open source vector graphics
  scripting framework that runs on top of the HTML5 Canvas."
* Reading [Generative Art](http://www.manning.com/pearson/) by Matt
  Pearson, which I just picked up on a whim.

@@ -0,0 +1,286 @@
---
title: "QMake hackery: Dependencies & external preprocessing"
date: November 13, 2011
author: Chris Hodapp
tags:
- Project
- Technobabble
---

* TODO: Put the code here into a Gist?

[Qt Creator](http://qt-project.org/wiki/Category:Tools::QtCreator) is
a favorite IDE of mine for when I have to deal with miserably large
C++ projects. At my job I ported the Visual Studio build of one such
large project over to Qt Creator so that builds and development could
be done on OS X and Linux, and in the process I learned a good deal
about [QMake](http://doc.qt.nokia.com/latest/qmake-manual.html) and
how to make it do some unexpected things.

While I find Qt Creator to be a vastly cleaner, lighter IDE than
Visual Studio, and find QMake to be a far more straightforward build
system for the majority of things than Visual Studio's build system,
some things the build needed were very tricky to set up in QMake. The
two main shortcomings I ran into were:

* Managing dependencies between projects, as building the application
  in question involved building 40-50 separate subprojects as
  libraries, many of which depended on each other.
* Having external build events, as the application also had to call an
  external tool (no, not `moc`, this is different) to generate some
  source files and headers from a series of templates.

QMake, as it happens, has some commands that actually make the project
files Turing-complete, albeit in a rather ugly way. The `eval`
command is the main source of this, and I made heavy use of it.

First is the dependency management system. It's a little large, but
I'm including it inline here.

```bash
# This file is meant to be included in from other project files, but it needs
# a particular context:
# (1) Make sure that the variable TEMPLATE is set to: subdirs, lib, or app.
#     Your project file really should be doing this anyway.
# (2) Set DEPENDS to a list of dependencies that must be linked in.
# (3) Set DEPENDS_NOLINK to a list of dependencies from which headers are
#     needed, but which are not linked in. (Doesn't matter for 'subdirs'
#     template)
# (4) Make sure BASEDIR is set.
#
# This script may modify SUBDIRS, INCLUDEPATH, and LIBS. It should always add,
# not replace.
# It will halt execution if BASEDIR or TEMPLATE are not set, or if DEPENDS or
# DEPENDS_NOLINK reference something not defined in the table.
#
# Order does matter in DEPENDS for the "subdirs" template. Items which come
# first should satisfy dependencies for items that come later.
# You'll often see:
#     include ($$(BASEDIR)/qmakeDefault.pri)
# which includes this file automatically.
#
# -CMH 2011-06

# ----------------------------------------------------------------------------
# Messages and sanity checks
# ----------------------------------------------------------------------------
message("Included Dependencies.pro!")
message("Dependencies: " $$DEPENDS)
message("Dependencies (INCLUDEPATH only): " $$DEPENDS_NOLINK)
#message("TEMPLATE is: " $$TEMPLATE)

isEmpty(BASEDIR) {
    error("BASEDIR variable is empty here. Make sure it is set!")
}
isEmpty(TEMPLATE) {
    error("TEMPLATE variable is empty here. Make sure it is set!")
}

# ----------------------------------------------------------------------------
# Table of project locations
# ----------------------------------------------------------------------------

# Some common locations, here only to shorten descriptions in the _PROJ table.
_PROJECT1 = $$BASEDIR/SomeProject
_PROJECT2 = $$BASEDIR/SomeOtherProject
_DEPENDENCY = $$BASEDIR/SomeDependency

# Table of project file locations
# (Include paths are also generated based off of these)
_PROJ.FooLib = $$_PROJECT1/Libs/FooLib
_PROJ.BarLib = $$_PROJECT1/Libs/BarLib
_PROJ.OtherStuff = $$_PROJECT2/Libs/OtherStuff
_PROJ.MoreStuff = $$_PROJECT2/Libs/MoreStuff
_PROJ.ExternalLib = $$BASEDIR/SomeLibrary

# ----------------------------------------------------------------------------
# Iterate over dependencies and update variables, as appropriate for the given
# template type
# ----------------------------------------------------------------------------

# _valid is a flag telling whether TEMPLATE has matched anything yet
_valid = false

contains(TEMPLATE, "subdirs") {
    for(dependency, DEPENDS) {
        # Look for an item like: _PROJ.(dependency)

        # Disclaimer: I wrote this and it works. I have no idea precisely
        # why it works. However, I repeat the pattern several times.
        eval(_dep = $$"_PROJ.$${dependency}")
        isEmpty(_dep) {
            error("Unknown dependency " $${dependency} "!")
        }

        # If that looks okay, then update SUBDIRS.
        eval(SUBDIRS += $$"_PROJ.$${dependency}")
    }
    message("Setting SUBDIRS=" $$SUBDIRS)
    _valid = true
}

contains(TEMPLATE, "app") | contains(TEMPLATE, "lib") {
    # Loop over every dependency listed in DEPENDS.
    for(dependency, DEPENDS) {
        # Look for an item like: _PROJ.(dependency)
        eval(_dep = $$"_PROJ.$${dependency}")
        isEmpty(_dep) {
            error("Unknown dependency " $${dependency} "!")
        }

        # If that looks okay, then update both INCLUDEPATH and LIBS.
        eval(INCLUDEPATH += $$"_PROJ.$${dependency}"/include)
        eval(LIBS += -l$${dependency}$${LIBSUFFIX})
    }
    for(dependency, DEPENDS_NOLINK) {
        # Look for an item like: _PROJ.(dependency)
        eval(_dep = $$"_PROJ.$${dependency}")
        isEmpty(_dep) {
            error("Unknown dependency " $${dependency} "!")
        }

        # If that looks okay, then update INCLUDEPATH.
        eval(INCLUDEPATH += $$"_PROJ.$${dependency}"/include)
    }
    #message("Setting INCLUDEPATH=" $$INCLUDEPATH)
    #message("Setting LIBS=" $$LIBS)
    _valid = true
}

# If no template type has matched, throw an error.
contains(_valid, "false") {
    error("Don't recognize template type: " $${TEMPLATE})
}
```

It's been sanitized heavily to remove all sorts of details from the
huge project it was taken from. Mostly, you need to add your dependent
projects into the "Table of project locations" section, and perhaps
make another file that sets up the necessary variables mentioned at
the top. Then set the `DEPENDS` variable to a list of project names,
and include this QMake file from all of your individual projects (it
may be necessary to include it pretty close to the top of the file).

In general, in this large application, each sub-project had two
project files:

* One with `TEMPLATE = lib` (a few were `app` instead). This is the
  project file that is included as a dependency from any project that
  has `TEMPLATE = subdirs`, and this project file makes use of the
  QMake monstrosity above to set up the include and library paths for
  any dependencies.
* One with `TEMPLATE = subdirs`. The same QMake monstrosity is used
  here to pull in the project files (of the first sort) of
  dependencies so that they are built in the first place, and to
  permit you to build the sub-project standalone if needed.

Both are needed if you want to be able to build sub-projects
independently without having to take care of dependencies
individually.

The next project below sort of shows the use of that QMake monstrosity
above, though in a semi-useless sanitized form. Its purpose is to show
another system, but I'll explain that below it.

```bash
QT -= gui
QT -= core
TEMPLATE = lib

## Include our qmake defaults
DEPENDS = FooLib BarLib
include ($$(BASEDIR)/qmakeDefault.pri)

TARGET = Project$${LIBSUFFIX}
LIBS += -llua5.1 -lrt -lLua$${LIBSUFFIX}
DEFINES += PROJECT_EXPORTS

INCLUDEPATH += /usr/include/lua5.1 \
    ./include

HEADERS += include/SomeHeader.h \
    include/SomeOtherHeader.h

SOURCES += source/SomeClass.cpp \
    source/SomeOtherClass.cpp

# The rest of this is done with custom build steps:
GENERATOR_INPUTS = templates/TemplateFile.ext \
    templates/OtherTemplate.ext

gen.input = GENERATOR_INPUTS
gen.commands = $${DESTDIR}/generator -i $${QMAKE_FILE_IN}
# -s source$(InputName).cpp -h include$(InputName).h

# Set the destination of the source and header files.
SOURCE_DIR = "source/"
HEADER_DIR = "include/"
# What prefix and suffix to replace with paths and .h/.cpp, respectively.
TEMPLATE_PREFIX = "templates/"
TEMPLATE_EXTN = ".ext"

#
# Warning: Here be black magic.
#
# We need to use QMAKE_EXTRA_COMPILERS but its functionality does not give us
# an easy way to explicitly specify the names of multiple output files with a
# single QMAKE_EXTRA_COMPILERS entry. So, we get around this by making one
# entry for each input template (the .ext files).
# The part where this gets tricky is that each entry requires a unique
# variable name, so we must create these variables dynamically, which would
# be impossible in QMake ordinarily since it does only a single eval pass.
# Luckily, QMake has an eval(...) command which explicitly performs an eval
# pass on a string. We repeatedly use constructs like this:
#     CONTENTS = "Some string data"
#     VARNAME = "STRING"
#     eval($$VARNAME = $$CONTENTS)
# These let us dynamically define variables. For sanity, I've tried to use a
# suffix of _VARNAME on any variable which contains the name of another
# variable.
#

# Iterate over every filename in GENERATOR_INPUTS
for(templatefile, GENERATOR_INPUTS) {
    # Generate the name of the header file.
    H1 = $$replace(templatefile, $$TEMPLATE_PREFIX, $$HEADER_DIR)
    HEADER = $$replace(H1, $$TEMPLATE_EXTN, ".h")
    # Generate the name of the source file.
    S1 = $$replace(templatefile, $$TEMPLATE_PREFIX, $$SOURCE_DIR)
    SOURCE = $$replace(S1, $$TEMPLATE_EXTN, ".cpp")
    # Generate a unique variable name to populate & pass to
    # QMAKE_EXTRA_COMPILERS, by stripping path characters from the filename.
    QEC_VARNAME = $$replace(templatefile, ".", "")
    QEC_VARNAME = $$replace(QEC_VARNAME, "/", "")
    VARNAME = $$replace(QEC_VARNAME, "\\", "")
    # Append _INPUT to generate another variable name for the input filename
    INPUT_VARNAME = $${VARNAME}_INPUT
    eval($${INPUT_VARNAME} = $$templatefile)

    # Now generate an entry to pass to QMAKE_EXTRA_COMPILERS.
    eval($${VARNAME}.commands = $${DESTDIR}/generator -i ${QMAKE_FILE_IN} -s ${QMAKE_FILE_OUT} -h $${HEADER})
    eval($${VARNAME}.name = $$VARNAME)
    # ACHTUNG! The 'input' field is the _variable name_ which contains the
    # input filename, not the filename itself. If you put in a filename, or
    # if either of those variables doesn't exist, this will fail, silently,
    # and all attempts at diagnosis will lead you nowhere.
    eval($${VARNAME}.input = $${INPUT_VARNAME})
    eval($${VARNAME}.output = $${SOURCE})
    eval($${VARNAME}.variable_out = SOURCES)

    # Now tell QMake to actually do this step we meticulously built.
    eval(QMAKE_EXTRA_COMPILERS += $$VARNAME)
    # Also add our header files. I doubt it's really necessary, but here it is.
    HEADERS += $${HEADER}
}
```

This one uses a bit more black magic. The entire `GENERATOR_INPUTS`
list is a set of files that are inputs to an external program that is
called to generate some code, which then must be built with the rest
of the project. This uses undocumented QMake features and a couple of
kludges to generate some things dynamically (i.e. the filenames of the
generated code) from a variable-length list. I highly recommend
avoiding it. However, it does work.

These two links proved indispensable in the creation of this:

[QMake Variable Reference](http://qt-project.org/doc/qt-4.8/qmake-variable-reference.html)

[Undocumented qmake](http://www.qtcentre.org/wiki/index.php?title=Undocumented_qmake)

361
hugo_blag/content/posts/2011-11-24-obscure-features-of-jpeg.md
Normal file
@@ -0,0 +1,361 @@
---
title: Obscure features of JPEG
author: Chris Hodapp
date: November 24, 2011
tags:
- Technobabble
- jpeg
- image_compression
---

*(This is a modified version of what I wrote up at work when I saw
that progressive JPEGs could be nearly a drop-in replacement that
offered some additional functionality and ran some tests on this.)*

Introduction
============

The long-established JPEG standard contains a considerable number of
features that are seldom used and sometimes virtually unknown. This is
all in spite of the widespread use of JPEG and the fact that every
JPEG decoder I tested was compatible with all of the features I will
discuss, probably because [IJG libjpeg](http://www.ijg.org/) (or
[this](http://www.freedesktop.org/wiki/Software/libjpeg)) runs
basically everywhere.

Progressive JPEG
================

One of the better-known features, though still obscure, is that of
progressive JPEGs. Progressive JPEGs contain the data in a different
order than more standard (sequential) JPEGs, enabling the JPEG decoder
to produce a full-sized image from just the beginning portion of a
file (at a reduced detail level) and then refine those details as more
of the file is available.

This was originally made for web usage over slow connections. While it
is rarely used, most modern browsers support this incremental display
and refinement of the image, and even those applications that do not
attempt it are still able to read the full image.

Interestingly, since the only real difference between a progressive
|
||||
JPEG and a sequential one is that the coefficients come in a different
|
||||
order, the conversion between progressive and sequential is
|
||||
lossless. Various lossless compression steps are applied to these
|
||||
coefficients and as this reordering may permit a more efficient
|
||||
encoding, a progressive JPEG often is smaller than a sequential JPEG
|
||||
expressing an identical image.

One command I've used pretty frequently before posting a large photo online is:

```bash
jpegtran -optimize -progressive -copy all input.jpg > output.jpg
```

This losslessly converts *input.jpg* to a progressive version and
optimizes it as well. (*jpegtran* can do some other things losslessly
too - flipping, cropping, rotating, transposing, converting to
greyscale.)

Multi-scan JPEG
===============

More obscure still is that progressive JPEG is a particular case of
something more general: a **multi-scan JPEG**.

Standard JPEGs are single-scan sequential: all of the data is stored
top-to-bottom, with all of the color components and coefficients
together and in full. This includes, per **MCU** (minimum coded unit,
an 8x8 pixel square or some small multiple of it), 64 coefficients
for each of the 3 color components (typically Y,Cb,Cr). The
coefficients come from an 8x8 DCT transform, but they are stored
in a zigzag order that preserves locality with regard to spatial
frequency, as this permits more efficient encoding. The first
coefficient (0) is referred to as the DC coefficient; the others
(1-63) are AC coefficients.

Multi-scan JPEG permits this information to be packed in a fairly
arbitrary way (though with some restrictions). While information is
still stored top-to-bottom, it permits only some of the data in
each MCU to be given, with the intention that later scans will
provide the other parts of this data (hence the name multi-scan). More
specifically:

* The three color components (Y for grayscale, and Cb/Cr for color) may be split up between scans.
* The 64 coefficients in each component may be split up. *(Two
  restrictions apply here for any given scan: the DC coefficient must
  always precede the AC coefficients, and if only AC coefficients are
  sent, then they may only be for one single color component.)*
* Some bits of the coefficients may be split up. *(This, too, is
  subject to a restriction, not for a given scan but for the entire
  image: you must specify some of the DC bits. AC bits are all
  optional. Information on how many bits are actually used here is
  almost nonexistent.)*

In other words:

* You may leave color information out, to be added later.
* You may let spatial detail be only a low-frequency approximation, to
  be refined later with higher-frequency coefficients. (As far as I
  can tell, you cannot consistently reduce grayscale detail beyond the
  8x8 pixel MCU while still recovering that detail in later scans.)
* You may leave grayscale and color values at a lower precision
  (i.e. coarsely quantized) to have more precision added later.
* You may do all of the above in almost any order and in almost any
  number of steps.
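
As one hypothetical illustration of the first two points (splitting components and coefficients across scans, with no bit splitting), a scan script in libjpeg's format might send all of the DC data first and then each component's AC data in full; the scan-script syntax itself is explained below.

```bash
# DC coefficients for all three components, at full precision:
0,1,2: 0-0, 0, 0 ;
# Then the full AC data, one component per scan:
0: 1-63, 0, 0 ;
1: 1-63, 0, 0 ;
2: 1-63, 0, 0 ;
```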

Your libjpeg distribution probably contains something called
**wizard.txt** someplace (say, `/usr/share/docs/libjpeg8a` or
`/usr/share/doc/libjpeg-progs`); I don't know if an online copy is
readily available, but mine is
[here](../images/obscure_jpeg_features/libjpeg-wizard.txt). I'll
leave the detailed explanation of a scan script to the "Multiple Scan /
Progression Control" section of that document, but note that:

* Each non-commented line corresponds to one scan.
* The first section, prior to the colon, specifies which plane to
  send: Y (0), Cb (1), or Cr (2).
* The two fields immediately after the colon give the first and last
  indices of the coefficients from that plane that should be in the
  scan. Those indices run from 0 to 63 in zigzag order; 0 = DC, 1-63 =
  AC in increasing frequency.
* The two fields immediately after those specify which bits of those
  coefficients this scan contains.

According to that document, the standard script for a progressive JPEG is this:

```bash
# Initial DC scan for Y,Cb,Cr (lowest bit not sent)
0,1,2: 0-0, 0, 1 ;
# First AC scan: send first 5 Y AC coefficients, minus 2 lowest bits:
0: 1-5, 0, 2 ;
# Send all Cr,Cb AC coefficients, minus lowest bit:
# (chroma data is usually too small to be worth subdividing further;
# but note we send Cr first since eye is least sensitive to Cb)
2: 1-63, 0, 1 ;
1: 1-63, 0, 1 ;
# Send remaining Y AC coefficients, minus 2 lowest bits:
0: 6-63, 0, 2 ;
# Send next-to-lowest bit of all Y AC coefficients:
0: 1-63, 2, 1 ;
# At this point we've sent all but the lowest bit of all coefficients.
# Send lowest bit of DC coefficients
0,1,2: 0-0, 1, 0 ;
# Send lowest bit of AC coefficients
2: 1-63, 1, 0 ;
1: 1-63, 1, 0 ;
# Y AC lowest bit scan is last; it's usually the largest scan
0: 1-63, 1, 0 ;
```

And for a standard, sequential JPEG it is:

```bash
0 1 2: 0 63 0 0;
```

In
[this image](../images/obscure_jpeg_features/20100713-0107-interleave.jpg)
I used a custom scan script that sent all of the Y data, then all Cb,
then all Cr. That scan script was just this:

```bash
0;
1;
2;
```

While not every browser may do this right, most browsers will render
the greyscale as it comes in, then add color to it one plane at a
time. The effect is more obvious over a slower connection; I purposely
left the image fairly large so that the transfer would be slower. You'll
note as well that the greyscale arrives much more slowly than the
color.

Code & Utilities
================

The **cjpeg** tool from libjpeg will (among other things) create a
JPEG using a custom scan script. Combined with ImageMagick, I used a
command like:

```bash
convert input.png ppm:- | cjpeg -quality 95 -optimize -scans scan_script > output.jpg
```

Or, if the input is already a JPEG, `jpegtran` will do the same
thing losslessly (as it's merely reordering coefficients):

```bash
jpegtran -scans scan_script input.jpg > output.jpg
```

libjpeg has some interesting features as well. Rather than decoding an
entire full-resolution JPEG and then scaling it down (a common use
case when generating thumbnails), you may configure the decoder so
that it simply does the reduction for you while decoding. This takes
less time and uses less memory than decompressing the full version
and resampling afterward.

The C code below (or [here](../images/obscure_jpeg_features) or this
[gist](https://gist.github.com/9220146)), based loosely on `example.c`
from libjpeg, will split up a multi-scan JPEG into a series of
numbered PPM files, each one containing a scan. Look for
`cinfo.scale_num` (circa lines 67, 68) to use the fast scaling
features mentioned in the last paragraph, and note that the code only
processes as much of the input JPEG as it needs for the next scan. (It
needs nothing special to build besides a functioning libjpeg; `gcc -ljpeg -o
jpeg_split.o jpeg_split.c` works for me.)

```c
// jpeg_split.c: Write each scan from a multi-scan/progressive JPEG.
// This is based loosely on example.c from libjpeg, and should require only
// libjpeg as a dependency (e.g. gcc -ljpeg -o jpeg_split.o jpeg_split.c).
#include <stdio.h>
#include <jerror.h>
#include "jpeglib.h"
#include <setjmp.h>
#include <string.h>

void read_scan(struct jpeg_decompress_struct * cinfo,
               JSAMPARRAY buffer,
               char * base_output);
int read_JPEG_file(char * filename, int scanNumber, char * base_output);

int main(int argc, char **argv) {
    if (argc < 3) {
        printf("Usage: %s <Input JPEG> <Output base name>\n", argv[0]);
        printf("This reads in the progressive/multi-scan JPEG given and writes out each scan\n");
        printf("to a separate PPM file, named with the scan number.\n");
        return 1;
    }

    char * fname = argv[1];
    char * out_base = argv[2];
    read_JPEG_file(fname, 1, out_base);
    return 0;
}

struct error_mgr {
    struct jpeg_error_mgr pub;
    jmp_buf setjmp_buffer;
};

METHODDEF(void) error_exit (j_common_ptr cinfo) {
    struct error_mgr * err = (struct error_mgr *) cinfo->err;
    (*cinfo->err->output_message) (cinfo);
    longjmp(err->setjmp_buffer, 1);
}

int read_JPEG_file(char * filename, int scanNumber, char * base_output) {
    struct jpeg_decompress_struct cinfo;
    struct error_mgr jerr;
    FILE * infile;     /* source file */
    JSAMPARRAY buffer; /* Output row buffer */
    int row_stride;    /* physical row width in output buffer */

    if ((infile = fopen(filename, "rb")) == NULL) {
        fprintf(stderr, "can't open %s\n", filename);
        return 0;
    }

    // Set up the normal JPEG error routines, then override error_exit.
    cinfo.err = jpeg_std_error(&jerr.pub);
    jerr.pub.error_exit = error_exit;
    // Establish the setjmp return context for error_exit to use:
    if (setjmp(jerr.setjmp_buffer)) {
        jpeg_destroy_decompress(&cinfo);
        fclose(infile);
        return 0;
    }
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, infile);
    (void) jpeg_read_header(&cinfo, TRUE);

    // Set some decompression parameters

    // Incremental reading requires this flag:
    cinfo.buffered_image = TRUE;
    // To perform fast scaling in the output, set these:
    cinfo.scale_num = 1;
    cinfo.scale_denom = 1;

    // Decompression begins...
    (void) jpeg_start_decompress(&cinfo);

    printf("JPEG is %s-scan\n", jpeg_has_multiple_scans(&cinfo) ? "multi" : "single");
    printf("Outputting %ux%u\n", cinfo.output_width, cinfo.output_height);

    // row_stride = JSAMPLEs per row in output buffer
    row_stride = cinfo.output_width * cinfo.output_components;
    // Make a one-row-high sample array that will go away when done with image
    buffer = (*cinfo.mem->alloc_sarray)
        ((j_common_ptr) &cinfo, JPOOL_IMAGE, row_stride, 1);

    // Start actually handling image data!
    while (!jpeg_input_complete(&cinfo)) {
        read_scan(&cinfo, buffer, base_output);
    }

    // Clean up.
    (void) jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    fclose(infile);

    if (jerr.pub.num_warnings) {
        printf("libjpeg indicates %ld warnings\n", jerr.pub.num_warnings);
    }
    return 1;
}

void read_scan(struct jpeg_decompress_struct * cinfo,
               JSAMPARRAY buffer,
               char * base_output)
{
    char out_name[1024];
    FILE * outfile = NULL;
    int scan_num = 0;

    scan_num = cinfo->input_scan_number;
    jpeg_start_output(cinfo, scan_num);

    // Read up to the next scan.
    int status;
    do {
        status = jpeg_consume_input(cinfo);
    } while (status != JPEG_REACHED_SOS && status != JPEG_REACHED_EOI);

    // Construct a filename & write the PPM image header.
    snprintf(out_name, 1024, "%s%i.ppm", base_output, scan_num);
    if ((outfile = fopen(out_name, "wb")) == NULL) {
        fprintf(stderr, "Can't open %s for writing!\n", out_name);
        return;
    }
    fprintf(outfile, "P6\n%u %u\n255\n", cinfo->output_width, cinfo->output_height);

    // Read each scanline into 'buffer' and write it to the PPM.
    // (Note that libjpeg updates cinfo->output_scanline automatically)
    while (cinfo->output_scanline < cinfo->output_height) {
        jpeg_read_scanlines(cinfo, buffer, 1);
        fwrite(buffer[0], cinfo->output_components, cinfo->output_width, outfile);
    }

    jpeg_finish_output(cinfo);
    fclose(outfile);
}
```

Examples
========

Here are all 10 scans from a standard progressive JPEG, separated out with the example code:











67
hugo_blag/content/posts/2012-08-16-some-thoughts.md
Normal file
@@ -0,0 +1,67 @@

---
layout: post
title: Thoughts on tools, design, and feedback loops
status: publish
type: post
published: true
tags:
- rant
- Technobabble
---

I just watched [Inventing on Principle](https://vimeo.com/36579366) from Bret Victor and found this entire talk incredibly interesting. Chris Granger's [post](http://www.chris-granger.com/2012/04/12/light-table---a-new-ide-concept/) on Light Table led me to this, and shortly after, I found the redesigned [Khan Academy CS course](http://ejohn.org/blog/introducing-khan-cs) which this inspired. Bret touched on something that basically anyone who's attempted to design anything has implicitly understood: **This feedback loop is the most essential part of the process.**

I reflected on this and on my own experiences, and decided on a few things:

**(1) Making that feedback loop fast enough can dramatically change the design process, not just speed it up proportionally.**

I feel that Bret's video demonstrates this wonderfully. It matches up with something I've believed for a while: that a slower, more delay-prone process becoming fast enough to be interactive can change the entire way a user relates to it. The change, for me at least, can be as dramatic as between filling out paperwork and having a face-to-face conversation. This metamorphosis is where I see a tool become an extension of the mind.

[Toplap](http://toplap.org/index.php?title=Main_Page) probably has something to say on this. They link to a \[short\] live coding documentary, [Show Us Your Screens](https://vimeo.com/20241649). I rather like their quote: **"Live coding is not about tools. [Algorithms are thoughts. Chainsaws are tools.](https://vimeo.com/9790850) That's why algorithms are sometimes harder to notice than chainsaws."**

Live coding perhaps hits many of Bret's points from the angle of musical performance meeting programming. Since he spoke directly of improvisation, I'd say he was well aware of this connection.

**(2) These dynamic, interactive, high-level tools don't waste computer resources - they trade them.**

They trade them for being dynamic, interactive, and high-level, and this very often means that they trade ever-increasing computer resources to earn some ever-limited human resources like time, comprehension, and attention.

I don't look at them as being resource-inefficient. I look at them as being the wrong tool for those situations where I have no spare computer resources to trade. Frankly, those situations are exceedingly rare. (And my degree is in electrical engineering. Most coding I've done when acting as an EE guy, I've done with the implicit assumption that no other type of situation existed.) Even if I eventually have to produce something for such a situation - say, to target a microcontroller - I still have ever-increasing computer resources at my disposal, and I can utilize these to great benefit for some prototyping.

Limited computer resources restrict an implementation. Limited human resources, like time and attention and comprehension, do the same...

**(3) The choice of tools defines what ideas are expressible.**

Any Turing-complete language can express a given algorithm, pretty much by definition. However, since this expression can vary greatly in length and in conciseness, this is really only of theoretical interest if you, a human, have only finite time on earth to make this expression and only so many usable hours per day. (This is close to a point Paul Graham is [quite](http://paulgraham.com/langdes.html) [fond](http://paulgraham.com/power.html) of [making](http://paulgraham.com/avg.html).)

This same principle goes for all other sorts of expressions and interactions and interfaces, non-Turing-complete included, anytime different tools are capable of producing the same result given enough work. (I can use a text editor to generate music by making PCM samples by hand. I can use a program to make an algorithm to do the same. I can use a program such as Ableton Live to do the same. These all can produce sound, but some of them are a path of insurmountable complexity depending on what sort of sound I want.)

In a strict way, the choice of tools defines the minimum size of an expression of an idea, and how comprehensible and difficult this expression is. Once this expression hits a certain level of complexity, a couple of paths emerge: it may as well be impossible to implement, or it may cease to be about the idea and instead be an implementation of a set of ad-hoc tools to eventually implement that idea. ([Greenspun's tenth rule](https://en.wikipedia.org/wiki/Greenspun%27s_Tenth_Rule), dated as it is, indicates plenty of other people have observed this.)

In a less strict way, the choice of tools also guides how a person expresses an idea; not like a fence, but more like a wind. It guides how that person thinks.

The boundaries that restrict **time** and **effort** also draw the lines that divide ideas into **possible** and **impossible**. Tools can move those lines. The right tools solve the irrelevant problems, and guide the user into solving relevant problems instead.

Of course, finding the relevant problems can be tricky...

**(4) When exploring, you are going to re-implement ideas. Get over it.**

(I suppose [Mythical Man Month](http://c2.com/cgi/wiki?PlanToThrowOneAway) laid claim to something similar decades ago.)

Turning an idea plus a bad implementation into a good implementation, on the whole, is far easier than turning just an idea into any implementation (and pages upon pages of design documentation rarely push it past 'just an idea'). It's not an excuse to willingly make bad design decisions - it's an acknowledgement that a tangible form of an idea does far more to clarify and refine those design decisions than any amount of verbal descriptions and diagrams and discussions. Even if that prototype is scrapped in its entirety, the insight and experience it gives are not.

The flip side of this is: **Ideas are fluid, and this is good**. Combined with the second point, it's more along the lines of: **Ideas are fluid, provided they already have something to flow from.**

A high-level expression with the right set of primitives is a description that translates very readily to other forms. The key here is not what language or tool it is, but that it supports the right vocabulary to express the implementation concisely. **Supports** doesn't mean that it has all the needed high-level constructs - just that it is sufficiently flexible and concise to build them readily. (If you 'hide' higher-level structure inside lower-level details, you've added extra complexity. If you abuse higher-level constructs that hide simpler relationships, you've done the same. More on that in another post...)

My beloved C language, for instance, gives some freedom to build a lot of constructs, but mainly those constructs that still map closely to assembly language and to hardware. C++ tries a little harder, but I feel like those constructs quickly hit the point of appalling, fragile ugliness. Languages like Lisp, Scheme, Clojure, Scala, and probably Haskell (I don't know yet, I haven't attempted to master it) are fairly well unmatched in the flexibility they give you. However, in light of Bret's video, the way these are all meant to be programmed still can fall quite short.

I love [Context Free](http://www.contextfreeart.org/) as well. I like it because its relative speed combined with some marvelous simplicity gives me the ability to quickly put together complex fractalian/mathematical/algorithmic images. Normal behavior when I work with this program is to generate several hundred images in the course of an hour, refining each one from the last. Another big reason it appeals to me is that, due to its simplicity, I could fairly easily take the Context Free description of any of these images and turn it into some other algorithmic representation (such as a recursive function call to draw some primitives, written in something like [Processing](http://www.processing.org/) or [openFrameworks](http://www.openframeworks.cc/) or HTML5 Canvas or OpenGL).

*Later note, circa 2017:* Tobbe Gyllebring (@drunkcod)
in
[The Double Edged Sword of Faster Feedback](https://medium.com/@drunkcod/the-double-edged-sword-of-faster-feedback-1052bf360e7e#.c7o9fsuch) makes
some excellent points that I completely missed and that are very
relevant to everything here. On the overreliance on fast feedback
loops to the exclusion of more deliberate design and analysis, he
says, "Running an experiment requires you to have a theory. This is
not science. It's a farce," which I rather like.

61
hugo_blag/content/posts/2014-02-06-hello-world.md
Normal file
@@ -0,0 +1,61 @@

---
title: Hello, World (from Jekyll) (then from Hakyll)
author: Chris Hodapp
date: June 4, 2016
---

I started this post in February 2014. Actually, I might have started
it in July 2013 (while sitting in a Bruegger's Bagels on the same day
that I met up with two people from Urbanalta in what would later
become my full-time job, to be precise). I really don't remember.

Here goes another migration of my sparse content from the past 8
(er... 10) years. This time, I'm giving up my Wordpress instance that
I've migrated around 3 or 4 times (from wordpress.com, then Dreamhost,
then Linode, then tortois.es), and completely failed to migrate this
time (I neglected to back up Wordpress' MySQL tables). I still have
an old XML backup, but it's such a crufty mess at this point that I'd
rather start fresh and import some old content.

Wordpress is a fine platform and it produces some beautiful results.
However, I feel like it is very heavy and complex for what I need, and
I have gotten myself into many train-wrecks and rabbit-holes
trying to manage aspects of its layout and behavior and media
handling.

My nose is already buried in Emacs for most else that I write. It's
the editor I work most quickly in. I'm already somewhat familiar with
git. So, I am giving [Jekyll](http://jekyllrb.com/) a try *(later
note: now using [Hakyll](https://jaspervdj.be/hakyll/) instead.)*.
Having a static site pre-generated from Markdown just seems like it
would fit my workflow better, and not require me to switch to a
web-based editor. I'm going to have to learn some HTML and CSS
anyway.

(I phrase this as if it were a brilliant flash of insight on my part.
No, it's something I started in July and then procrastinated on until
now, when my Wordpress has been down for months.)

*(And then procrastinated another 2 years for good measure.)*

A vaguely relevant
[issue](https://github.com/joyent/smartos-live/issues/275) just
steered me to the existence of
[TRAMP](https://www.gnu.org/software/tramp/), which allows me to edit
remote files in Emacs. I just did *C-x C-f*
`/ssh:username@whatever.com:/home/username` from a stock Emacs
installation, and now I'm happily editing this Markdown file, which is
on my VPS, from my local Emacs. For some reason, I find this
incredibly awesome, even though things like remote X, NX, RDP, and
sshfs have been around for quite some time now. (When stuff starts
screwing up, M-x tramp-cleanup-all-connections seems to help a bit.)

I collect lots of notes and I enjoy writing and explaining, so why
don't I maintain a blog where I actually post more often than once
every 18 months? I don't really have a good answer. I just know that
this crosses my mind about once a week. But okay, Steve Yegge, you
get
[your wish](https://sites.google.com/site/steveyegge2/you-should-write-blogs),
but only because I found
[what you wrote](https://sites.google.com/site/steveyegge2/tour-de-babel#TOC-C-)
about C++ to be both funny and appropriate.

@@ -0,0 +1,241 @@

---
title: "Catalogue of My Stupidity: My Haskell 'GenericStruct' Nonsense"
author: Chris Hodapp
date: June 23, 2015
tags:
- stupidity
- Technobabble
---

*(A note: I took these notes during my time at Urbanalta, intending
them to be a private reference to myself on how to learn from some
mistakes. I've tried to scrub the proprietary bits out and leave the
general things behind. I do reference some other notes that probably
will still stay private.)*

# Background

Some background on this: these are notes on a small Haskell module
I wrote at Urbanalta which I called `GenericStruct`. Most of this post
is very Haskell-heavy and perhaps more suited to [HaskellEmbedded][],
as it's a very niche usage even within Haskell.

I talk about this much more extensively in my handwritten work notes,
circa 2015-05-05 to 2015-05-20, and in a source file
`GenericStruct.hs`. Neither of those is online (and trust me that
you don't want to try to understand my scratch notes anyway), but a
cleaner summary is in the [Appendix](#appendix).

The short version is that I needed a way to express the format of a
packed data structure, similar to a C struct in some ways, but without
any padding between fields, and more explicit about the exact size in
bits of each field. I wanted this format to also be able to carry some
documentation with it because it was meant to express data
formats for Bluetooth Low Energy, and so I needed to present this
format in a human-readable way and possibly a more general
machine-readable way (such as JSON). This was a similar design goal
to my Python creation, AnnotatedStruct, from nearly 2 years ago, but
here I wanted the benefits of static typing when accessing these
data structures.

What complicated matters somewhat is that these data structures,
rather than being used directly in Haskell to store things, were to be
used with [Ivory][] to model the proper C code for reading and
writing them.

What I eventually came up with used Haskell records with
specially-crafted data types inside them, and then [GHC.Generics][] to
iterate over these data types and inject some context information
into them (a field accessor in Haskell by itself cannot carry any
information about 'where' in the record it is, whether in absolute
terms or relative to any other field). Context information here meant
things like a bit offset, a size, and a type representation.

This was a little complicated to implement but, overall, not
particularly daunting. The [GHC.Generics][] examples included generic
serialization, which is a very similar problem in many ways, and I
followed from this example and a JSON example.
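
To make the shape of this concrete: here is a minimal sketch in the same spirit. (This is a reconstruction, not the actual `GenericStruct` code; the field types `U8`/`U16` and the `BitSize` class are hypothetical.) A type class with a GHC.Generics default walks a record's fields and sums up a packed size in bits, the simplest kind of context information described above.

```haskell
{-# LANGUAGE DeriveGeneric, DefaultSignatures, TypeOperators #-}
import GHC.Generics

-- Hypothetical field types that know their packed width in bits:
newtype U8  = U8 Int
newtype U16 = U16 Int

class BitSize a where
  bitSize :: a -> Int
  default bitSize :: (Generic a, GBitSize (Rep a)) => a -> Int
  bitSize = gBitSize . from

instance BitSize U8  where bitSize _ = 8
instance BitSize U16 where bitSize _ = 16

-- Generic machinery: walk the representation, summing field widths.
class GBitSize f where
  gBitSize :: f p -> Int

instance GBitSize U1 where gBitSize _ = 0
instance (GBitSize a, GBitSize b) => GBitSize (a :*: b) where
  gBitSize (x :*: y) = gBitSize x + gBitSize y
instance GBitSize f => GBitSize (M1 i c f) where
  gBitSize (M1 x) = gBitSize x
instance BitSize a => GBitSize (K1 i a) where
  gBitSize (K1 x) = bitSize x

-- A packed format is then just a record; its size falls out generically.
data Header = Header { flags :: U16, version :: U8, len :: U8 }
  deriving Generic
instance BitSize Header  -- uses the generic default

main :: IO ()
main = print (bitSize (Header (U16 0) (U8 1) (U8 4)))
```

The real module injected richer context (bit offsets, type representations, documentation) in the same traversal, which is where the trouble below began.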

(Another note from some prior work: do not attempt to do record access
with `Data.Data` from base. It can get some meta-information, like the
constructor itself, but only in a sufficiently generic way that you
may call nothing record-specific on it.)

# Problems

The problem that I ran into fairly quickly (but not quickly enough) is
that what I had created had no good way to let me nest data formats
inside each other. For instance, I had a 16-bit value which I used in
several places, and that 16-bit value was treated in many places as 16
individual bitfields, with each bit representing a specific Boolean
parameter unto itself. In other places, treating it as simply a
single 16-bit integer was more meaningful - and because it was used in
several places, operations like copying it from one place to another
became meaningful.

I had no good way to express this. I could not define that format in
one place and then put it inside each struct that used it. I thought
initially that implementing this would be a matter of just making
certain structures recursive, and I was partly right, but I ran into
such complication in the type system that I felt it was not worth
it to proceed further.

What I wrote yesterday when I ran into these serious snags was:

- At this stage of complexity, I sort of wish I'd opted for
  [Template Haskell][] instead. It would have absorbed the change
  much better. [GHC.Generics][] required me to sort of bend the type
  system. The problem there is that it had only so far to bend, while
  with Template Haskell the whole Haskell language (more or less)
  would be at my disposal, not just some slightly-pliable parts of its
  type system. (Perhaps this is why Ivory does what it does.)
- Idris may have handled this better too by virtue of its dependent
  types, and for similar reasons.
- `johnw` (Freenode IRC `#haskell` denizen &
  [Galois Inc.](https://galois.com/) employee) mentioned a
  possibly-viable approach based around an applicative expression of
  data formats, not requiring things like Template Haskell or
  possibly Generics. (See IRC logs from 2015-06-04; this was via PM.)
- Another person mentioned that this sounded like a job for
  [Lenses][lens], particularly their
  [Iso](https://hackage.haskell.org/package/lens/docs/Control-Lens-Iso.html)
  (isomorphism) type, which offers different 'projections' of data.
|
||||
|
||||
# What I did right
|
||||
|
||||
I properly implemented a nice structure with GHC.Generics over top of
|
||||
Haskell records, and kept it fairly compact and strictly-typed. I
|
||||
started making use of it right away, and this meant errors would
|
||||
readily show up (generally as type errors at compile time) as I made
|
||||
changes.
|
||||
|
||||
I kept the code clean and well-documented, and this helped me out
|
||||
substantially with writing the code, understanding it, and then
|
||||
understanding that much of it shouldn't have been written.
|
||||
|
||||
I think that, overall, it was a good idea for me to treat the data
format as a specification that could be turned into C code, into a
JSON description, and (eventually) into a human-readable description.

# What I did wrong, and should have done instead

Overall: I tried very hard to solve the very unique, very specific
problem. This blocked my view of the real, more general problem.
Despite active attempts to discern that general problem, I was fixated
on specifics. Despite my preachy guideline elsewhere in my notes
that, "Your problem is not a unique snowflake - someone else has
studied it," I assumed my problem was a relatively unique snowflake.

When I expressed the problem to other people, a number of them told me
that this sounded like a job for the [Lens][lens] library in Haskell.
On top of this, I had used lenses before. While I had not used them
enough to know for certain that Lens was the best solution here, I had
used them enough to know that they were a likely first place to look.
But I ignored this experience, and I ignored what other people told
me.

I suspect I ignored this because I was focusing too much on the
specifics of the issue. This led me to believe that the problem was
sufficiently unique and different that lenses were not an approach I
should even look at.

Lenses might not be the proper approach, but I am almost certain that
examining them would have helped me.

I missed something very crucial: that I would need to nest data
formats and share definitions between them. This should have been
obvious to me: this is a functional language, and composition (which
is what this is) is essential to abstraction and reuse.

I assumed that my solution would have to be tied to Haskell records.
This was not a given. Further, I knew of three methods which created
similar structures but did not rely on records: lenses, Ivory structs,
and Ivory bitdata. Records were an irrelevancy (even to the specifics
I was fixated on), yet I tightly coupled my solution to them. Records
are not meant to compose, while some other structures are.

# Short, general summary

*(i.e. the part where I get really preachy about vague things)*

- Foresee what else your problem may need to encompass. Perhaps it
  only looks like a unique problem because you put too much weight on
  the specifics, and you've missed the ways it resembles existing,
  well-studied problems - perhaps even ones you are familiar with.

- Perhaps you haven't missed anything notable. Still, knowing what
  else it may need to encompass makes for better solutions, and may
  prevent you from making design decisions early on which
  fundamentally limit it in ways that matter later.

- Unsurprisingly, ekmett probably solved your problem already. (Or
  perhaps acowley did with [Vinyl][], or perhaps [compdata][] solves
  it...)

# Appendix {#appendix}

My aim was to solve a few problems:

- Outputting a concrete representation of an entire type (for the
  sake of inserting into JSON specs, for instance),
- Creating a correspondence between native Haskell types and Ivory's
  specific types (which ended up not being so necessary),
- Packing and unpacking a struct value to and from memory
  automatically (via Ivory),
- Unpacking and packing individual fields of a struct (also via
  Ivory),
- Doing the above with the benefit of strict, static typing (i.e. not
  relying on strings to access a field),
- Handling all of this with (in Ivory) an in-memory representation
  with no padding or alignment concerns,
- Having a single specification of a type, including human-readable
  descriptions.

What I saw as the largest problem is that accessors for Haskell
records have no accessible information on which field they access, or
where that field is relative to anything else in the data structure.
Thus, if I access a field of a record, I can have no information there
about 'where' in the record it is unless I put that information there
somehow. The pieces of information that I seemed to need in the field
were the field's overall index in the record, and the field's overall
memory offset.

A simpler form of generics, Data.Data, allowed me to solve the first
problem easily and produce a list of something like (TypeRep, name,
size, position). However, I ran into problems when trying to find a
way to insert context into the record somehow. The central issue is
that Data.Data provides no way to do anything other than generic
operations on a field, and those generic operations are fairly
limited. I could find no way to make something like a typeclass and
use typeclass methods to update those fields.

GHC.Generics, on the other hand, made this fairly trivial. I could
solve the first problem (albeit in a more complicated way), and I
eventually turned the other problems into the need to take the
generic record type itself (in this case, in the form of a Proxy to
it) and, given certain constraints on it, create a generic
constructor for this type.

This proved to be fairly easy. Most of GHC.Generics will as readily
traverse a Proxy of a type as a value of the type, given some changes
which mostly amount to a lot of use of fmap. The 'to' function in
GHC.Generics (after I had cleared up some conceptual confusion) simply
took the representation and removed the Proxy (or pushed it
elsewhere), until it hit a certain innermost point at which one
created an abstract representation of the constructor call itself, but
this time with the proper data.

Most of the rest was just modifying the above to allow me to propagate
context information such as index and memory offset, and dealing with
the confusion of type families (which I ended up needing much less of
than I initially thought).

[GHC.Generics]: https://hackage.haskell.org/package/base/docs/GHC-Generics.html
[Ivory]: https://hackage.haskell.org/package/ivory
[Vinyl]: https://hackage.haskell.org/package/vinyl
[compdata]: https://hackage.haskell.org/package/compdata
[lens]: https://hackage.haskell.org/package/lens
[Template Haskell]: https://wiki.haskell.org/Template_Haskell
[HaskellEmbedded]: https://haskellembedded.github.io/

12
hugo_blag/content/posts/2016-09-23-ion-crosspost.md
Normal file
@@ -0,0 +1,12 @@
---
title: Post at HaskellEmbedded - Introducing Ion
author: Chris Hodapp
date: September 23, 2016
tags:
- haskell
- haskellembedded
---

Just a quick note: I finally released my Ion library (it was long
overdue), and wrote a post about it over at
[HaskellEmbedded](https://haskellembedded.github.io/posts/2016-09-23-introducing-ion.html).

117
hugo_blag/content/posts/2016-09-25-pi-pan-tilt-1.md
Normal file
@@ -0,0 +1,117 @@
---
title: "Pi pan-tilt for huge images, part 1: introduction"
author: Chris Hodapp
date: September 23, 2016
tags:
- photography
- electronics
- raspberrypi
---

Earlier this year I was turning around ideas in my head - perhaps
inspired by Dr. Essa's excellent class,
[CS6475: Computational Photography][cs6475] - about the possibility of
making an inexpensive, relatively turn-key rig for creating very
high-detail photographs, ideally in HDR, and taking advantage of
algorithms, automation, and redundancy to work with cheap optics and
cheap sensors. What I had in mind had a pretty commonly-seen starting
point for making panoramas - something like a telephoto lens mounted
on a pan-tilt gimbal, and software behind it responsible for shooting
the right pattern of photographs, handling correct exposures,
capturing all the data, and stitching it.

My aim wasn't so much to produce panoramas as it was to produce very
high-detail images, of which panoramas are one type. I'd like it to
be possible for narrow angles of view too.

Most of my thoughts landed at the same inevitable view that this would
require lots of custom hardware and electronics, and perhaps on top of
that a mobile app to handle all of the heavy computations.

Interestingly, this whole time I had several Raspberry Pis, an
[ArduCam][] board, work history that familiarized me with some of the
cheaper M12 & CS mount lenses of the telephoto variety, and access to
a [hackerspace][hive13] with laser cutters and CNCs. Eventually, I
realized the rather obvious idea that the Pi and ArduCam would
probably do exactly what I needed.

A few other designs (like [this][makezine] and [this][scraptopower])
offered some inspiration, and after iterating on a design a few times
I eventually had something mostly out of laser-cut plywood, hardware
store parts, and [cheap steppers][steppers]. It looks something like
this, mounted on a small tripod:

[![](../images/2016-09-25-pi-pan-tilt-1/IMG_20160912_144539.jpg){width=100%}](../images/2016-09-25-pi-pan-tilt-1/IMG_20160912_144539.jpg)

I am able to move the steppers thanks to [Matt's code][raspi-spy] and
capture images with [raspistill][]. The arrangement here provides two
axes, pitch and yaw (or, pan and tilt). I put together some code to
move the steppers in a 2D grid pattern of a certain size and number of
points. (Side note: raspistill can
[capture 10-bit raw Bayer data][forum-raw-images] with the `--raw`
option, which is very nice. I'm not doing this yet, however.)
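
The grid sweep amounts to something like the following sketch. The serpentine ordering and the `move` callback are my own assumptions here (the actual stepper driving comes from Matt's code), and the raspistill flags are illustrative:

```python
import subprocess

def grid_positions(cols, rows, pan_steps, tilt_steps):
    """Absolute (pan, tilt) stepper positions for a serpentine sweep:
    left-to-right on even rows, right-to-left on odd rows, so the pan
    axis never rewinds across a whole row between shots."""
    positions = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            positions.append((c * pan_steps, r * tilt_steps))
    return positions

def capture_grid(cols, rows, pan_steps, tilt_steps, move):
    # 'move' is a stand-in for whatever drives the steppers to an
    # absolute (pan, tilt) position.
    for i, (pan, tilt) in enumerate(grid_positions(cols, rows,
                                                   pan_steps, tilt_steps)):
        move(pan, tilt)
        subprocess.run(["raspistill", "--raw", "-t", "1",
                        "-o", "grid_%03d.jpg" % i], check=True)
```

Replacing the `subprocess.run` call with a short delay gives the dry-run pattern shown in the video below.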

Here's a video of it moving in such a pattern (to speed things along,
image capture was replaced by a 1/2 second delay at each point):

<iframe width="560" height="315"
src="https://www.youtube.com/embed/jO3SBandiUs" frameborder="0"
allowfullscreen></iframe>

It's still rather rough to use, but it worked well enough that I
picked up a [25mm M12 lens][25mm-lens] - still an angle of view of
about 10 degrees on this sensor - and set it up in the park for a test
run:

[![](../images/2016-09-25-pi-pan-tilt-1/IMG_20160918_160857.jpg){width=100%}](../images/2016-09-25-pi-pan-tilt-1/IMG_20160918_160857.jpg)

(*Later note*: I didn't actually use the 25mm lens on that shot. I
used a 4mm (or something) lens that looks pretty much the same, and
didn't realize it until later. It's a wonder that Hugin was able to
stitch the shots at all.)

The laptop is mainly there so that I can SSH into the Pi to control
things and to use [RPi-Cam-Web-Interface][] to focus the lens. The
red cord is just Cat 6 connecting their NICs together; the Pi is
running off of battery here. If I had a wireless adapter on hand (or
just a Raspberry Pi 3) I could probably have just set up a WiFi
hotspot from the Pi and done all this from a phone.

I collected 40 or 50 images as the stepper moved through the grid.
While I fixed the exposure and ISO values with raspistill, I didn't
attempt any bracketing for HDR, and I left white balance at whatever
the camera module felt like doing, which almost certainly varied from
picture to picture. Automatic white balance won't matter when I start
using the raw Bayer data, but for the first attempt at stitching, I
used only the JPEGs, which already had white balance applied.

I stitched everything in Hugin on my desktop PC. I would like to
eventually make stitching possible just on the Raspberry Pi, which
isn't *that* farfetched considering that I stitched my first panoramas
on a box that wasn't much more powerful than a Pi. I also had to get
rid of some of the images because, for whatever reason, Hugin's
optimization was failing when they were present. However, being able
to look at Hugin's computed pitch, yaw, and roll values and see
everything lining up nicely with the motion of the steppers is a good
sign.

The first results look decent, but fuzzy, as $10 optics are prone to
produce:

[![](http://i.imgur.com/zwIJpFn.jpg){width=100%}](http://i.imgur.com/zwIJpFn.jpg)

Follow along to [part 2](./2016-10-04-pi-pan-tilt-2.html).

[cs6475]: https://www.omscs.gatech.edu/cs-6475-computational-photography
[ArduCam]: http://www.arducam.com/camera-modules/raspberrypi-camera/
[hive13]: http://hive13.org/
[makezine]: http://makezine.com/projects/high-resolution-panorama-photography-rig/
[scraptopower]: http://www.scraptopower.co.uk/Raspberry-Pi/raspberry-pi-diy-pan-tilt-plans
[steppers]: https://www.amazon.com/Elegoo-28BYJ-48-ULN2003-Stepper-Arduino/dp/B01CP18J4A
[raspi-spy]: http://www.raspberrypi-spy.co.uk/2012/07/stepper-motor-control-in-python/
[forum-raw-images]: https://www.raspberrypi.org/forums/viewtopic.php?p=357138
[raspistill]: https://www.raspberrypi.org/documentation/raspbian/applications/camera.md
[RPi-Cam-Web-Interface]: http://elinux.org/RPi-Cam-Web-Interface
[25mm-lens]: https://www.amazon.com/gp/product/B00N3ZPTE6
[Hugin]: http://wiki.panotools.org/Hugin

166
hugo_blag/content/posts/2016-10-04-pi-pan-tilt-2.md
Normal file
@@ -0,0 +1,166 @@
---
title: "Pi pan-tilt for huge images, part 2: Hugin & PanoTools integration"
author: Chris Hodapp
date: October 4, 2016
tags:
- photography
- electronics
- raspberrypi
---

In my [last post](./2016-09-25-pi-pan-tilt-1.html) I introduced some
of the project I've been working on. This post is a little more
technical; if you don't care, and just want to see a 91 megapixel
image from inside [Hive13][], skip to the end.

Those of you who thought a little further on the first post might have
seen that I made an apparatus that captures a series of images from
fairly precise positions, and then completely discards that position
information, hands the images off to [Hugin][] and [PanoTools][], and
has them crunch numbers for a while to calculate *the very same
position information* for each image.

That's a slight oversimplification - they also calculate lens
parameters, they calculate other position parameters that I ignore,
and the position information will deviate because:

- Stepper motors can stall, and these steppers may have some
  hysteresis in the gears.
- The pan and tilt axes aren't perfectly perpendicular.
- The camera might have a slight tilt or roll to it due to being built
  that way, due to the sensor being mounted that way, or due to the
  whole assembly being mounted that way.
- The camera's [entrance pupil][] may not lie exactly at the center of
  the two axes, which will cause rotations to also produce shifts in
  position that they must account for.
  ([No, it's not the nodal point. No, it's not the principal point.][npp]
  More on this will follow later. Those shifts in position can also
  cause parallaxing, which is much more annoying to account for. To
  get what I mean, close one eye and look at a stationary item in the
  foreground, and then try to rotate your head without the background
  moving behind it.)

That is, the position information we have is subject to inaccuracies,
and is not sufficient on its own. However, these tools still do a big
numerical optimization, and a starting position that is "close" can
help them along, so we may as well use the information.

Also, these optimizations depend on having enough good data to average
out to a good answer. Said data comes from matches between features
in overlapping images, say, using something like [SIFT][] and
[RANSAC][]. Even if we've left plenty of overlap in the images we've
shot, some parts of scenes can simply lack features like corners that
work well for this (see chapter 4 of
[Computer Vision: Algorithms and Applications][szeliski] if you're
really curious). We may end up with images for which optimization
can't really improve the estimated position, and here a guess based on
where we think the stepper motors were is much better than nothing.

If we look at the [PTO file format][pto] (which Hugin & PanoTools use)
in its i-lines section, it has pitch, yaw, and roll for each image.
Pitch and yaw are precisely the axes in which the steppers move the
camera (recall the pictures of the rig from the last post); the roll
axis is how the camera has been rotated. We need to know the lens's
angle of view too, but as with other parameters it's okay to just
guess and let the optimization fine-tune it. The nominal focal length
probably won't be exact anyhow.

Helpfully, PanoTools provides tools like [pto_gen][] and [pto_var][],
and I use these in my script to generate a basic `.pto` file from the
2D grid in which I shot images. All that's needed is to add up the
steps taken to reach each shot, convert steps to degrees - which for
these steppers means 360 / 64 / 63.63895 = about 0.0884 degrees per
step (according to [this][steps]) - and make sure that positive and
negative degrees correspond to the right direction in each axis.
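
As a sketch of that conversion, here is how cumulative step counts could become `pto_var --set` assignments. The sign convention for pitch is an assumption here - it depends on how the rig is wired and mounted:

```python
DEG_PER_STEP = 360.0 / 64 / 63.63895   # ~0.0884 degrees per step, as above

def pto_var_args(step_positions):
    """Build pto_var '--set' arguments assigning yaw/pitch (in degrees)
    for each image, from its cumulative (pan, tilt) step counts."""
    args = []
    for i, (pan, tilt) in enumerate(step_positions):
        yaw = pan * DEG_PER_STEP
        pitch = -tilt * DEG_PER_STEP   # sign convention: an assumption
        args += ["--set", "y%d=%.4f,p%d=%.4f,r%d=0" % (i, yaw, i, pitch, i)]
    return args
```

These arguments would then be handed to `pto_var`, along with the input and output project files.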

With no refining, tweaking, or optimization, only the per-image
stepper motor positions and my guess at the lens's FOV, here is how
this looks in Hugin's fast preview:

[![](../images/2016-10-04-pi-pan-tilt-2/hugin-steppers-only.jpg){width=100%}](../images/2016-10-04-pi-pan-tilt-2/hugin-steppers-only.jpg)

*(This is a test run that I did inside of [Hive13][], by the way. I
used the CS-mount [ArduCam][] and its included lens. Shots were in a
14 x 4 grid and about 15 degrees apart. People and objects were
moving around inside the space at the time, which may account for some
weirdness...)*

Though it certainly has gaps and seams, it's surprisingly coherent.
The curved-lines distortion in Hugin's GUI on the right is due to the
[projection][], and perfect optics and perfect positioning information
can't correct it. Do you recall learning in school that it's
impossible to put the globe of the world onto a flat two-dimensional
map without distortion? This is exactly the same problem - which is
likely why Hugin's GUI shows all the pictures mapped onto a globe on
the left. That's another topic completely though...

Of course, Hugin pretty much automates the process of finding control
points, matching them, and then finding optimal positions for each
image, so that is what I did next. We can also look at these
positions directly in Hugin's GUI. The image below contains two
screenshots - on the left, the image positions from the stepper
motors, and on the right, the optimized positions that Hugin
calculated:

[![](../assets_external/2016-10-04-pi-pan-tilt-2/hugin-comparison.png){width=100%}](../assets_external/2016-10-04-pi-pan-tilt-2/hugin-comparison.png)

They sort of match up, though pitch deviates a bit. I believe that's
because I shifted the pitch of the entire thing to straighten it out,
or perhaps Hugin did this automatically to center it, but I haven't
examined this in detail yet. (Helpfully, the same process can be used
to [calibrate lenses][hugin_lens] and compute the real focal length at
the same time - which can be particularly necessary for cases like
this where I'm trying to get the most out of cheap optics and when the
Exif tags won't include focal lengths.)

Result
======

A full-resolution JPEG of the result after automated stitching,
exposure fusion, lens correction, and so on, is below in this handy
zoomable viewer using [OpenSeadragon][]:

<div id="openseadragon1" style="width: 100%; height: 600px;"></div>
<script src="../js/openseadragon/openseadragon.min.js"></script>
<script type="text/javascript">
    var viewer = OpenSeadragon({
        id: "openseadragon1",
        prefixUrl: "../js/openseadragon/images/",
        tileSources: "../assets_external/2016-10-04-pi-pan-tilt-2/2016-10-04-hive13.dzi"
    });
</script>

It's 91.5 megapixels; if the above viewer doesn't work right, a
[full-resolution JPEG](../assets_external/2016-10-04-pi-pan-tilt-2/2016-10-04-hive13.jpg)
is available too. The full TIFF image is 500 MB, so understandably, I
didn't feel like hosting it, particularly when it's not the prettiest
photo or the most technically-perfect one (it's full of lens flare,
chromatic aberration, overexposure, sensor noise, and the occasional
stitching artifact).

However, you can look up close and see how well the details came
through - which I find quite impressive for cheap optics and a cheap
sensor.

[Part 3](./2016-10-12-pi-pan-tilt-3.html) delves into the image
processing workflow.

[ArduCam]: http://www.arducam.com/camera-modules/raspberrypi-camera/
[forum-raw-images]: https://www.raspberrypi.org/forums/viewtopic.php?p=357138
[raspistill]: https://www.raspberrypi.org/documentation/raspbian/applications/camera.md
[25mm-lens]: https://www.amazon.com/gp/product/B00N3ZPTE6
[Hugin]: http://wiki.panotools.org/Hugin
[PanoTools]: http://wiki.panotools.org/Main_Page
[entrance pupil]: https://en.wikipedia.org/wiki/Entrance_pupil
[npp]: http://www.janrik.net/PanoPostings/NoParallaxPoint/TheoryOfTheNoParallaxPoint.pdf
[steps]: https://arduino-info.wikispaces.com/SmallSteppers?responseToken=04cbc07820c67b78b09c414cd09efa23f
[SIFT]: https://en.wikipedia.org/wiki/Scale-invariant_feature_transform
[RANSAC]: https://en.wikipedia.org/wiki/RANSAC
[hive13]: http://hive13.org/
[projection]: http://wiki.panotools.org/Projections
[szeliski]: http://szeliski.org/Book/
[pto]: http://hugin.sourceforge.net/docs/manual/PTOptimizer.html
[pto_gen]: http://wiki.panotools.org/Pto_gen
[pto_var]: http://wiki.panotools.org/Pto_var
[hugin_lens]: http://hugin.sourceforge.net/tutorials/calibration/en.shtml
[OpenSeadragon]: https://openseadragon.github.io/

233
hugo_blag/content/posts/2016-10-12-pi-pan-tilt-3.md
Normal file
@@ -0,0 +1,233 @@
---
title: "Pi pan-tilt for huge images, part 3: ArduCam & raw images"
author: Chris Hodapp
date: October 12, 2016
tags:
- photography
- electronics
- raspberrypi
---

This is the third part in this series, continuing on from
[part 1][part1] and [part 2][part2]. The last post was about
integrating the hardware with Hugin and PanoTools. This one is
similarly technical, and without any pretty pictures (really, it has
no concern at all for aesthetics), so be forewarned.

Thus far (aside from my first stitched image) I've been using a raw
workflow where possible. That is, all images arrive from the camera
in a lossless format, and every intermediate step works in a lossless
format. To list some typical steps:

- Acquire raw images from the camera with [raspistill][].
- Convert these to (lossless) TIFFs with [dcraw][].
- Process these into a composite image with [Hugin][] & [PanoTools][],
  producing another lossless TIFF file (for low dynamic range) or an
  [OpenEXR][] file (for [high dynamic range][hdr]).
- Import into something like [darktable][] for postprocessing.

I deal mostly with the first two steps here.

# Acquiring Images

I may have mentioned in the first post that I'm using
[ArduCam's Raspberry Pi camera][ArduCam]. This board uses a
5-megapixel [OmniVision OV5647][ov5647] sensor. (I believe they have
[another][arducam_omx219] that uses the 8-megapixel Sony IMX219, but
I haven't gotten my hands on one yet.)

If you are expecting the quality of sensor that even an old DSLR
provides, this board's tiny, noisy sensor will probably disappoint
you. However, compared with basically every other camera within
double the price that interfaces directly with a computer of some
kind (USB webcams and the like), I think you'll find it quite
impressive:

- It has versions in three lens mounts: CS, C, and M12. CS-mount and
  C-mount lenses are plentiful from their existing use in security
  cameras, generally inexpensive, and generally good enough quality
  (and for a bit extra, ones are available with
  electrically-controllable apertures and focus). M12 lenses (or
  "board lenses") are... plentiful and inexpensive, at least. I'll
  probably go into more detail on optics in a later post.
- 10-bit raw Bayer data straight off the sensor is available (see
  [raspistill][] and its `--raw` option, or how
  [picamera][picamera-raw] does it). Thus, we can bypass all of the
  automatic brightness, sharpness, saturation, contrast, and
  white-balance corrections, which are great for snapshots and video,
  but really annoying for composite images.
- Likewise via [raspistill][], we may directly set the ISO speed and
  the shutter time in microseconds, bypassing all automatic exposure
  control.
- It has a variety of features pertaining to video, none of which I
  care about for this application. Go look in [picamera][] for the
  details.

I'm mostly using the CS-mount version, which came with a lens that is
surprisingly sharp. If anyone knows how to do better for $30 (perhaps
with those GoPro knockoffs that are emerging?), please tell me.

Reading raw images from the Raspberry Pi cameras is a little more
convoluted, and I suspect that this is just how the CSI-2 pathway for
imaging works on the Raspberry Pi. In short: it produces a JPEG file
which contains a normal, lossy image, followed by a binary dump of the
raw sensor data - not as metadata, not as JPEG data, just... dumped
after the JPEG data. *(Where I refer to "JPEG image" here, I'm
referring to actual JPEG-encoded image data, not the binary dump stuck
inside something that is coincidentally a JPEG file.)*

Most of my image captures were with something like:

    raspistill --raw -t 1 -w 640 -h 480 -ss 1000 -ISO 100 -o filename.jpg

That `-t 1` is to remove the standard 5-second timeout; I'm not sure
if I can take it lower. `-w 640 -h 480` (and `-q`, if given) applies
to the JPEG image, while the raw data from `--raw` is always
full-resolution; I'm saving only a much-reduced JPEG as a thumbnail of
the raw data, rather than wasting the disk space and I/O on larger
JPEG data than I'll use. `-ss 1000` is for a 1000-microsecond
exposure (thus 1 millisecond), and `-ISO 100` is for ISO 100 speed
(the lowest this sensor will do). Note that we may also remove the
`-ss` option and instead pass `-set` to get lines like:

    mmal: Exposure now 10970, analog gain 256/256, digital gain 256/256
    mmal: AWB R=330/256, B=337/256

That 10970 is the shutter speed, again in microseconds, according to
the camera's metering. Analog and digital gain relate to ISO, but
only somewhat indirectly; setting ISO will result in changes to both,
and from what I've read, they both equal 1 if the ISO speed is 100.

I just switched my image captures to use [picamera][] rather than
`raspistill`. Both are fairly thin wrappers on top of the hardware;
the only real difference is that picamera exposes things via a Python
API rather than a commandline tool.
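
The picamera equivalent of the raspistill command above looks roughly like this (a sketch, not my exact code; the settle delay before turning `exposure_mode` off is standard picamera practice for freezing exposure, and `bayer=True` is picamera's counterpart to `--raw`):

```python
from time import sleep

def capture_raw(camera, filename, shutter_us=1000, iso=100, settle_s=2):
    """Fixed-exposure JPEG-plus-raw-Bayer capture, mirroring
    'raspistill --raw -w 640 -h 480 -ss 1000 -ISO 100'.
    'camera' is a picamera.PiCamera (or anything shaped like one)."""
    camera.resolution = (640, 480)   # JPEG thumbnail size; raw is always full-res
    camera.iso = iso
    camera.shutter_speed = shutter_us
    sleep(settle_s)                  # let the sensor gains settle...
    camera.exposure_mode = 'off'     # ...then freeze the exposure entirely
    camera.capture(filename, format='jpeg', bayer=True)  # bayer=True ~ --raw
```

On the Pi itself this would be called as `capture_raw(picamera.PiCamera(), 'shot.jpg')`.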

# Converting Raw Images

People have already put considerable work into converting these rather
strange raw image files into something more sane (as the Raspberry Pi
forums document [here][forum1] and [here][forum2]) - like the
[color tests][beale] by John Beale, and 6by9's patches to dcraw, some
of which have made it into Dave Coffin's official [dcraw][].

I've had to use 6by9's version of dcraw, which is at
<https://github.com/6by9/RPiTest/tree/master/dcraw>. As I understand
it, he's trying to get the rest of this included into official dcraw.

On an older-revision ArduCam board, I ran into problems getting 6by9's
dcraw to read the resultant raw images, which I fixed with a
[trivial patch][dcraw-pr]. However, that board had other problems, so
I'm no longer using it. (TODO: Explain those problems.)

My conversion step is something like:

    dcraw -T -W *.jpg

`-T` writes a TIFF and passes through metadata; `-W` tells dcraw to
leave the brightness alone - I found out the hard way that leaving
this out would lead to some images with mangled exposures. From here,
dcraw produces a `.tiff` for each `.jpg`. We can, if we wish, use all
of that 10-bit range by using `-6` to make a 16-bit TIFF rather than
an 8-bit one. In my own tests, though, it makes no difference
whatsoever because of the sensor's noisiness.
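
Wrapped up for a whole directory of shots, that step can be sketched as follows (`dcraw` here stands for 6by9's build; the helper names are mine):

```python
import glob
import subprocess

def dcraw_args(path, sixteen_bit=False):
    """Argument list for one conversion: '-T' for a TIFF with metadata,
    '-W' to leave brightness alone, '-6' optionally for 16-bit output."""
    return ["dcraw", "-T", "-W"] + (["-6"] if sixteen_bit else []) + [path]

def convert_all(pattern="*.jpg", sixteen_bit=False):
    # Produces a .tiff next to each raw-bearing .jpg.
    for path in sorted(glob.glob(pattern)):
        subprocess.run(dcraw_args(path, sixteen_bit), check=True)
```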

We can also rotate the image at this step, but I prefer to instead add
this as an initial roll value of -90, 90, or 180 degrees when creating
the PTO file. This keeps the lens parameters correct if, for
instance, we already have computed a distortion model of a lens.

To give an example of the little bit of extra headroom that raw images
provide, I took 9 example shots of the same scene, ranging from about
-1.0 underexposed down to -9.0 underexposed. The first grid is the
full-resolution JPEG image of these shots, normalized - in effect,
trying to re-expose them properly:

[![](../images/2016-10-12-pi-pan-tilt-3/tile_jpg.jpg){width=100%}](../images/2016-10-12-pi-pan-tilt-3/tile_jpg.jpg)

The grid below contains the raw sensor data, turned into 8-bit TIFFs
and then again normalized. It's going to look different from the JPEG
due to the lack of white-balance adjustment, denoising, brightness,
contrast, and so on.

[![](../images/2016-10-12-pi-pan-tilt-3/tile_8bit.jpg){width=100%}](../images/2016-10-12-pi-pan-tilt-3/tile_8bit.jpg)

These were done with 16-bit TIFFs rather than 8-bit ones:

[![](../images/2016-10-12-pi-pan-tilt-3/tile_16bit.jpg){width=100%}](../images/2016-10-12-pi-pan-tilt-3/tile_16bit.jpg)

In theory, the 16-bit ones should retain two extra bits of data
from the 10-bit sensor data, and thus two extra stops of dynamic
range, that the 8-bit image cannot keep. I can't see the slightest
difference myself. Perhaps those two bits are below the noise floor;
perhaps if I used a brighter scene, it would be more apparent.

Regardless, starting from raw sensor data rather than the JPEG image
gets some additional dynamic range. That's hardly surprising - JPEG
isn't really known for its faithful reproduction of the darker parts
of an image.

Here's another comparison, this time a 1:1 crop from the center of an
image (shot at 40mm with [this lens][12-40mm], whose Amazon price
mysteriously is now $146 instead of the $23 I actually paid). Click
the preview for a lossless PNG view, as JPEG might eat some of the
finer details, or [here][leaves-full] for the full JPEG file
(including raw, if you want to look around).

[{width=100%}](../assets_external/2016-10-12-pi-pan-tilt-3/leaves_test.png)

The JPEG image seems to have some aggressive denoising that cuts into
sharper detail somewhat, as denoising algorithms tend to do. Of
course, another option exists too, which is to shoot many images from
the same point and then average them. That's only applicable to a
static scene with some sort of rig to hold things in place, which is
convenient, since that's what I'm making...

[{width=100%}](../assets_external/2016-10-12-pi-pan-tilt-3/IMG_20161016_141826_small.jpg)

I used that (messy) test setup to produce the below comparison between
a JPEG image, a single raw image, 4 raw images averaged, and 16 raw
images averaged. These are again 1:1 crops from the center to show
noise and detail.

[{width=100%}](../assets_external/2016-10-12-pi-pan-tilt-3/penguin_compare.png)

Click for the lossless version, and take a look around the finer
details. 4X averaging has clearly reduced the noise from the
un-averaged raw image, and possibly has done better than the JPEG
image in that regard while having clearer details. The 16X averaging
definitely has.

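The noise reduction from averaging is easy to sanity-check numerically. Here's a small pure-Python simulation (made-up Gaussian read noise on a single hypothetical pixel, not real sensor data) showing that averaging N frames shrinks the noise's standard deviation by roughly a factor of sqrt(N):

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 200.0   # hypothetical "real" brightness of one pixel
NOISE_SD = 10.0      # hypothetical per-frame read noise

def noisy_frame():
    """One noisy reading of the pixel."""
    return random.gauss(TRUE_VALUE, NOISE_SD)

def averaged_pixel(n_frames):
    """Average the pixel across n_frames exposures."""
    return sum(noisy_frame() for _ in range(n_frames)) / n_frames

# Measure the spread of the averaged result over many trials;
# it should fall off roughly as 1/sqrt(n):
for n in (1, 4, 16):
    samples = [averaged_pixel(n) for _ in range(2000)]
    print(n, round(statistics.stdev(samples), 2))
```

So 16 frames buys about two stops' worth of cleaner shadows, which lines up with what the penguin comparison shows.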
Averaging might get us the full 10 bits of dynamic range by cleaning
up the noise. However, if we're able to shoot enough images at
exactly the same exposure to average them, then we could also shoot
them at different exposures (i.e. [bracketing][]), merge them into an
HDR image (or [fuse them][exposure fusion]), and get well outside of
that limited dynamic range while still having much of that same
averaging effect.

I'll cover the remaining two steps I noted - Hugin & PanoTools
stitching and HDR merging, and postprocessing - in the next post.

[part1]: ./2016-09-25-pi-pan-tilt-1.html
[part2]: ./2016-10-04-pi-pan-tilt-2.html
[raspistill]: https://www.raspberrypi.org/documentation/raspbian/applications/camera.md
[dcraw]: https://www.cybercom.net/~dcoffin/dcraw/
[Hugin]: http://wiki.panotools.org/Hugin
[PanoTools]: http://wiki.panotools.org/Main_Page
[OpenEXR]: http://www.openexr.com/
[hdr]: https://en.wikipedia.org/wiki/High-dynamic-range_imaging
[darktable]: http://www.darktable.org/
[ArduCam]: http://www.arducam.com/camera-modules/raspberrypi-camera/
[ov5647]: http://www.ovt.com/uploads/parts/OV5647.pdf
[arducam_omx219]: http://www.arducam.com/8mp-sony-imx219-camera-raspberry-pi/
[beale]: http://bealecorner.org/best/RPi/
[forum1]: https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=44918
[forum2]: https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=92562
[dcraw-6by9]: https://github.com/6by9/RPiTest/tree/master/dcraw
[dcraw-pr]: https://github.com/6by9/RPiTest/pull/1
[picamera-raw]: https://picamera.readthedocs.io/en/release-1.10/recipes2.html#bayer-data
[picamera]: https://www.raspberrypi.org/documentation/usage/camera/python/README.md
[12-40mm]: https://www.amazon.com/StarDot-Vari-Focal-Camera-Lens-Black/dp/B00IPR1YSC
[leaves-full]: ../assets_external/2016-10-12-pi-pan-tilt-3/leaves_test_full.jpg
[exposure fusion]: https://en.wikipedia.org/wiki/Exposure_Fusion
[bracketing]: https://en.wikipedia.org/wiki/Bracketing
25
hugo_blag/content/posts/2016-12-13-cincyfp-r-crosspost.md
Normal file
@@ -0,0 +1,25 @@
---
title: "CincyFP presentation: R & Feature Transformation"
author: Chris Hodapp
date: December 13, 2016
tags:
- r
- cincyfp
---

Another cross-post (sort of): The slides and notebooks from my
presentation on "R and Feature Learning" to
[CincyFP](https://cincyfp.wordpress.com/2016/11/29/december-meeting-5/)
are online.

Presentation slides are
[here](../assets_external/2016-12-13-cincyfp-r-crosspost/CincyFP_R_slides.slides.html)
and the notebook live-coded in the presentation is at
[Live_demo.ipynb](https://github.com/Hodapp87/cincyfp_presentation_R_2016_12/blob/master/Live_demo.ipynb).

This is on
[my GitHub](https://github.com/hodapp87/cincyfp_presentation_R_2016_12)
as well as
[CincyFP's GitHub](https://github.com/cincy-functional-programmers/cincyfp-presentations/tree/master/2016-12-r-pca).
144
hugo_blag/content/posts/2018-03-09-python-asyncio.org
Normal file
@@ -0,0 +1,144 @@
---
title: Some Python asyncio disambiguation
author: Chris Hodapp
date: March 9, 2018
tags:
- technobabble
---

# TODO: Generators? Is it accurate that prior to all this, coroutines
# were still available, but by themselves they offered no way to
# perform anything in the background?

Recently I needed to work a little more in-depth with Python 3's
[[https://docs.python.org/3/library/asyncio.html][asyncio]]. On the one hand, some people (like me) might scoff at this
because it's just green threads, cooperative threading is a model
that's fresh out of the '90s, and Python /still/ has the [[https://wiki.python.org/moin/GlobalInterpreterLock][GIL]] - and
because Elixir, Erlang, Haskell, [[https://github.com/clojure/core.async/][Clojure]] (also [[http://blog.paralleluniverse.co/2013/05/02/quasar-pulsar/][this]]), [[http://docs.paralleluniverse.co/quasar/][Java/Kotlin]], and
Go all handle async and M:N threading fine, and have for years. The
Python folks have their own set of complaints, like
[[http://lucumr.pocoo.org/2016/10/30/i-dont-understand-asyncio/][I don't understand Python's Asyncio]] and
[[http://jordanorelli.com/post/31533769172/why-i-went-from-python-to-go-and-not-nodejs][Why I went from Python to Go (and not node.js)]].
At least it's in good company [[https://nullprogram.com/blog/2018/05/31/#threads][with Emacs still]].

On the other hand, it's still a useful enough paradigm that it's in
the works for [[https://doc.rust-lang.org/nightly/unstable-book/language-features/generators.html][Rust]] (sort of... it had green threads, which were removed
in favor of a lighter approach) and broadly the [[http://cr.openjdk.java.net/~rpressler/loom/Loom-Proposal.html][JVM]] (sort
of... they're trying to do [[https://en.wikipedia.org/wiki/Fiber_(computer_science)][fibers]], not green threads). [[https://github.com/libuv/libuv][libuv]] brings
something very similar to various languages, including C, and C
already has an asyncio imitator in [[https://github.com/AndreLouisCaron/libgreen][libgreen]]. Speaking of C, did
you know that GLib has some decent support here via things like
[[https://developer.gnome.org/gio/stable/GTask.html][GTask]], [[https://developer.gnome.org/glib/stable/glib-Thread-Pools.html][GThreadPool]], and [[https://developer.gnome.org/glib/stable/glib-Asynchronous-Queues.html][GAsyncQueue]]? I didn't until recently. But I
digress...

asyncio is still preferable to manually writing code in
[[https://en.wikipedia.org/wiki/Continuation-passing_style][continuation-passing style]] (as that's all callbacks are, and last time
I had to write that many callbacks, I hated it enough that I [[https://haskellembedded.github.io/posts/2016-09-23-introducing-ion.html][added
features to my EDSL]] to avoid it), it's still preferable to a lot of
manual arithmetic on timer values to try to schedule things, and it's
still preferable to doing blocking I/O all over the place and trying
to escape it with other processes. Coroutines are also preferable to
yet another object-oriented train-wreck when it comes to handling
things like pipelines. While Python has had coroutines for quite a
while now, asyncio perhaps makes them a little more obvious. [[http://www.dabeaz.com/coroutines/Coroutines.pdf][David
Beazley's slides]] are excellent for explaining its earlier coroutine
support.

I found the [[https://pymotw.com/3/concurrency.html][Concurrency with Processes, Threads, and Coroutines]]
tutorials to be an excellent overview of Python's asyncio, as well as
most ways of handling concurrency in Python, and I highly recommend
them.

However, I still had a few stumbling blocks in understanding, and
below I give some notes I wrote to check my understanding. I put
together a table to try to classify which method to use in different
circumstances. As I use it here, calling "now" means turning control
over to some other code, whereas calling "whenever" means retaining
control but queuing up some code to be run in the background
asynchronously (as much as possible).

|-----------+-----------+-----------------------+-----------------------------------------------|
| Call from | Call to   | When/where            | How                                           |
|-----------+-----------+-----------------------+-----------------------------------------------|
| Either    | Function  | Now, same thread      | Normal function call                          |
| Function  | Coroutine | Now, same thread      | ~.run_*~ in event loop                        |
| Coroutine | Coroutine | Now, same thread      | ~await~                                       |
| Either    | Function  | Whenever, same thread | Event loop ~.call_*()~                        |
| Either    | Coroutine | Whenever, same thread | Event loop ~.create_task()~ or                |
|           |           |                       | ~asyncio.ensure_future()~                     |
| Either    | Function  | Now, another thread   | ~.run_in_executor()~ on ~ThreadPoolExecutor~  |
| Either    | Function  | Now, another process  | ~.run_in_executor()~ on ~ProcessPoolExecutor~ |
|-----------+-----------+-----------------------+-----------------------------------------------|

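As a quick illustration of a few of the rows above (my own toy example; it assumes Python 3.7+ for ~asyncio.run()~):

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

async def work(name):
    # A trivial coroutine standing in for real asynchronous work:
    await asyncio.sleep(0.01)
    return name

def blocking():
    # A plain, blocking function - something the event loop must
    # not call directly without help:
    time.sleep(0.01)
    return "threaded"

async def main():
    loop = asyncio.get_running_loop()
    # Coroutine -> coroutine, now, same thread:
    direct = await work("direct")
    # Either -> coroutine, whenever, same thread:
    task = loop.create_task(work("task"))
    # Either -> function, now, another thread:
    threaded = await loop.run_in_executor(ThreadPoolExecutor(), blocking)
    return direct, await task, threaded

# Function -> coroutine, now: hand control over to the event loop (~.run_*~):
print(asyncio.run(main()))
```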
* Futures & Coroutines

The documentation was also sometimes vague on the relation between
coroutines and futures. My summary of what I figured out is below.

** Python already had generator-based coroutines.

Python now has a language feature it refers to as "coroutines" in
asyncio (and in calls like ~asyncio.iscoroutine()~), but in Python 2.5
it already supported a similar-but-not-entirely-the-same form of
coroutine, and even earlier in a limited form via generators. See [[https://www.python.org/dev/peps/pep-0342/][PEP
342]] and [[http://www.dabeaz.com/coroutines/Coroutines.pdf][Beazley's slides]].

** Coroutines and Futures are *mostly* independent.

It just happens that both allow you to call things asynchronously.
However, you can use coroutines/asyncio without ever touching a
Future. Likewise, you can use a Future without ever touching a
coroutine or asyncio. Note that its ~.result()~ call isn't a
coroutine.

** They can still encapsulate each other.

A coroutine can encapsulate a Future simply by using ~await~ on it.

A Future can encapsulate a coroutine with [[https://docs.python.org/3/library/asyncio-task.html#asyncio.ensure_future][asyncio.ensure\_future()]] or
the event loop's [[https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.create_task][.create\_task()]].

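A toy sketch of both directions (my own example, assuming Python 3.7+): a coroutine awaiting a Future, and a coroutine wrapped into a Task, which is itself a Future:

```python
import asyncio

async def produce(fut):
    # A coroutine that fills in a Future's result:
    await asyncio.sleep(0.01)
    fut.set_result("filled")

async def main():
    loop = asyncio.get_running_loop()
    # Future encapsulating a coroutine: create_task() returns a Task,
    # and Task is a subclass of asyncio.Future.
    task = loop.create_task(asyncio.sleep(0.01, result="task done"))
    assert isinstance(task, asyncio.Future)
    # Coroutine encapsulating a Future: just await it.
    fut = loop.create_future()
    loop.create_task(produce(fut))
    return await fut, await task

print(asyncio.run(main()))
```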
** Futures can implement asynchronicity(?) differently

The ability to make a Future from a coroutine was mentioned above;
that's [[https://docs.python.org/3/library/asyncio-task.html#task][asyncio.Task]], an implementation of [[https://docs.python.org/3/library/asyncio-task.html#future][asyncio.Future]], but it's not
the only way to make a Future.

[[https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Future][concurrent.futures.Future]] provides other, mostly-compatible ways. Its
[[https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor][ThreadPoolExecutor]] provides Futures based on separate threads, and its
[[https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor][ProcessPoolExecutor]] provides Futures based on separate processes.

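For instance (my own minimal example), here are Futures with no asyncio in sight: ~concurrent.futures~ runs a plain function on another thread and hands back a Future whose ~.result()~ blocks until it's done.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # An ordinary synchronous function - no coroutines anywhere:
    return x * x

with ThreadPoolExecutor(max_workers=2) as pool:
    # submit() returns a concurrent.futures.Future per call:
    futures = [pool.submit(square, n) for n in range(4)]
    print([f.result() for f in futures])  # [0, 1, 4, 9]
```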
** Futures are always paired with some running context.

That is, a Future is already "started" - running, or scheduled to run,
or already ran, or something along those lines - and this is why it has
semantics for things like cancellation.

A coroutine by itself is not. The closest analogue is [[https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.Handle][asyncio.Handle]],
which is available only when a coroutine has been scheduled to run.

* Other Event Loops

[[https://pypi.python.org/pypi/Quamash][Quamash]] implements an asyncio event loop inside of Qt, and I used this
on a project. I ran into many issues with this combination. Qt's
juggling of multiple event loops seemed to cause many problems here,
and I still have some unsolved issues in which calls to
~run_until_complete~ cause coroutines to die early with an exception
because the event loop appears to have died. This came up regularly
for me because of how often I would want a Qt slot to queue a task in
the background, and it seems this is an acknowledged [[https://github.com/harvimt/quamash/issues/33][issue]].

There is also [[https://github.com/MagicStack/uvloop][uvloop]]. I presently have no need for extra performance
(nor could I really use it alongside Qt), but it's helpful to know
about.

* Other References

There are a couple pieces of "official" documentation that can be good
references as well:

- [[https://www.python.org/dev/peps/pep-0492/][PEP 492 - Coroutines with async and await syntax]]
- [[https://www.python.org/dev/peps/pep-0525/][PEP 525 - Asynchronous Generators]]
- [[https://www.python.org/dev/peps/pep-3156/][PEP 3156 - Asynchronous IO Support Rebooted: the "asyncio" Module]]

[[https://www.python.org/dev/peps/pep-0342/][PEP 342]] and [[https://www.python.org/dev/peps/pep-0380/][PEP 380]] are relevant too.
1783
hugo_blag/content/posts/2018-04-08-recommender-systems-1.md
Normal file
File diff suppressed because it is too large
104
hugo_blag/content/posts/2018-04-13-opinions-go.org
Normal file
@@ -0,0 +1,104 @@
---
title: "Go programming language: my totally unprompted opinions"
author: Chris Hodapp
date: April 13, 2018
tags:
- technobabble
- go
- golang
---

# TODO: Link to my asyncio post

After I wrote my post on Python and asyncio, I had the opportunity to
work with the [[https://golang.org/][Go]] language for some other projects, and I started
jotting down my opinions on it as I did so.

After using it for a bit, I decided that it's mostly C with
concurrency, seamless [[https://www.slideshare.net/matthewrdale/demystifying-the-go-scheduler][M:N threading]], garbage collection, fast
compilation, namespaces, multiple return values, packages, a mostly
sane build system, no C preprocessor, *minimal* object-oriented
support, interfaces, anonymous functions, and closures. Those aren't
trivialities; they're all rather great things. They're all missing in
C and C++ (for the most part). Their absence creates such common
problems that nearly every "practical" C/C++ project uses a lot of
ad-hoc solutions sitting both inside and outside the language -
libraries, abuse of macros, more extensive code generation, lots of
tooling, and a whole lot of "best practices" slavishly followed - to
try to solve them. (No, I don't want to hear about how this lack of
very basic features is actually a feature. No, I don't want to hear
about how painstakingly fucking around with pointers is the hairshirt
that we all must wear if we wish for our software to achieve a greater
state of piety than is accessible to high-level languages. No, I
don't want to hear about how ~$arbitrary_abstraction_level~ is the
level that *real* programmers work at, any programmer who works above
that level is a loser, and any programmer who works below that level
might as well be building toasters. Shut up.)

I'm a functional programming nerd. I just happen to also have a lot of
experience being knee-deep in C and C++ code. I'm looking at Go from
two perspectives: compared to C, and compared to any other programming
language that might be used to solve similar problems.

It still has ~goto~. This makes the electrical engineer in me happy.
Anyone who tells me I should write in a C-like language without goto
can go take a flying leap.

The concurrency support is excellent when compared to C, and even
compared to something like Python. The ability to seamlessly
transition a block of code between running synchronously and running
asynchronously (by making it into a goroutine) is very helpful, and so
is the fact that Go muxes these goroutines onto system threads more or
less transparently.

Concurrency was made a central aim of this language. If you've not
watched Rob Pike's [[https://blog.golang.org/concurrency-is-not-parallelism][Concurrency is not parallelism]] talk, go do it now.
While I may not be a fan of the style of concurrency that Go uses
(based on [[https://en.wikipedia.org/wiki/Communicating_sequential_processes][CSP]] rather than the more Erlang-ian message passing), this
is still a far superior style to the very popular concurrency paradigm
of Concurrency Is Easy, We'll Just Ignore It Now and Duct-Tape the
Support On Later. [[http://jordanorelli.com/post/31533769172/why-i-went-from-python-to-go-and-not-nodejs][Why I went from Python to Go (and not node.js)]], in
my opinion, is spot-on.

Many packages are available for it, and from all I've seen, they are
sensible packages - not [[https://www.reddit.com/r/programming/comments/4bjss2/an_11_line_npm_package_called_leftpad_with_only/][leftpad]]-style idiocy. I'm sure that if I look
more carefully, a lot of packages mostly exist in order to patch over
limitations in the language - but so far, I've yet to encounter a
single 3rd-party uber-package that is effectively a requirement for
doing any "real" work in the language, and the standard libraries
don't look excessive either.

I don't exactly make it a secret that I am [[http://www.smashcompany.com/technology/object-oriented-programming-is-an-expensive-disaster-which-must-end][not]] [[https://medium.com/@cscalfani/goodbye-object-oriented-programming-a59cda4c0e53#.7t9nj6geg][a fan]] of
[[http://harmful.cat-v.org/software/OO_programming/why_oo_sucks][object-oriented programming]]. I like that Go's support for OOP is
rather minimal: it's just interfaces and some syntactic sugar around
structs.

The syntax and typing are very familiar to anyone who has used C, and
they seem to make it easy for editors/IDEs to integrate with (likely
by design). It all feels very solid.

However, while [[https://blog.golang.org/defer-panic-and-recover][defer, panic, and recover]] are an improvement over C,
I'm less a fan of its opposition to exceptions as a normal
error-handling mechanism. Whatever the case, it was a conscious
design decision, not an oversight; see [[https://davidnix.io/post/error-handling-in-go/][Go's Error Handling is Elegant]]
and Pike's [[https://blog.golang.org/errors-are-values][Errors Are Values]]. The article [[http://250bpm.com/blog:4][Why should I have written
ZeroMQ in C, not C++ (part I)]] also makes some good points on how
exceptions can be problematic in systems programming.

My biggest complaint is that while I tend to prefer strongly-typed,
statically-typed languages (and Go is both), I feel like the type
system is still very limited - particularly in things like the lack
of any parametric polymorphism. I'd probably prefer something more
like [[https://www.rust-lang.org][Rust]]'s. I know this was largely intentional as well: Go was
designed for people who don't want a more powerful type system, but do
want types.

My objections aren't unique. [[https://www.teamten.com/lawrence/writings/why-i-dont-like-go.html][Ten Reasons Why I Don't Like Golang]] and
[[http://yager.io/programming/go.html][Why Go Is Not Good]] have criticisms I can't really disagree with.
(Also, did you know someone put together
https://github.com/ksimka/go-is-not-good?)

All in all, though, Go is a procedural/imperative language with a lot
of good design in language and tooling... which is great, if it's only
procedural/imperative you need.