diff --git a/posts/2009-10-15-fun-with-nx-stuff.md b/posts/2009-10-15-fun-with-nx-stuff.md new file mode 100644 index 0000000..f3ebb37 --- /dev/null +++ b/posts/2009-10-15-fun-with-nx-stuff.md @@ -0,0 +1,89 @@ +--- +layout: post +title: Fun with NX stuff +tags: Technobabble +status: publish +type: post +published: true +meta: {} +--- +So, I was trying out various NX servers because I'd had very good luck with NX in the past and generally found it faster than VNC, RDP, or X11 over SSH. My options appeared to be: +- NoMachine's server ([here](http://www.nomachine.com/select-package.php?os=linux&id=1)), which is free-as-in-beer but supports only 2 simultaneous sessions. +- [FreeNX](http://freenx.berlios.de/) made from the components that NoMachine GPLed. It's open souce, but apparently is a total mess and notoriously hard to set up. However, it doesn't limit you to two sessions, as far as I know. +- [neatX](http://code.google.com/p/neatx/), implemented from scratch in Python/bash/C by Google for some internal project because apparently FreeNX was just too much of a mess. Like FreeNX, it lacks the two-session limitation; however, it doesn't handle VNC or RDP, only X11. + +NoMachine's server was a cinch to set up (at least on Fedora). The only thing I remember having to do is put my local hostname (idiotbox) in **/etc/hosts**. Performance was very good (though I haven't tried RDP or VNC over a slower link yet - only a LAN with VirtualBox's built-in RDP server). + +neatX was a bit tougher to set up, primarily because the documentation I saw was very sparse. This [blog post](http://people.binf.ku.dk/~hanne/b2evolution/blogs/index.php/2009/09/01/neatx-is-the-new-black) was helpful. It advised that you should make sure you could log in with SSH manually before checking anything else, which gave me a starting point for my problems. + +I took these notes on how I made it work: +- Install all of the dependencies it says. ALL OF THEM! +- Follow the other instructions in "INSTALL". +- Go to /usr/local/lib/neatx and run ./nxserver-login + + If it looks like this, you're probably good: + + [hodapp@idiotbox neatx]$ ./nxserver-login + HELLO NXSERVER - Version 3.3.0 - GPL + NX> 105 + + If not, you may need to install some dependencies or check paths of some things. If it complains about not being able to import neatx.app, add something like this to the top of `nxserver-login` (replacing that path with your own if needed, of course): + + {% highlight python %} + import sys + sys.path.append("/usr/local/lib/python2.6/site-packages") + {% endhighlight %} + +- Set up password-less login for user '**nx**' using something like `ssh-keygen -t rsa` and putting the private & public keys someplace easy to find. Check that this works properly from another host (i.e. put the public key in the server's **authorized_keys** file in `~nx/.ssh`, copy the private key to the client, and use `ssh -i blahblahprivatekey nx@server` there to log in. It should look something like this: + + chris@momentum:~$ ssh -i nx.key nx@10.1.1.40 + Last login: Sun Oct 11 13:11:49 2009 from 10.1.1.20 + HELLO NXSERVER - Version 3.3.0 - GPL + NX> 105 + + If it asks for a password, something's wrong. + + If it terminates the connection immediately, SSH is probably okay, but something server-side with neatX is still messed up. SSH logs can sometimes tell things. + +Once I'd done all this, neatX worked properly. 
However, I had some issues with it - for instance, sometimes the entire session quit accepting mouse clicks, certain windows quit accepting keyboard input, or things would turn very sluggish at random. But for the most part it worked well. + +After setting up SSH stuff, FreeNX server worked okay from Fedora's packages after some minor hackery (i.e. setting the login shell for user '**nx**' to `/usr/libexec/nx/nxserver`). I haven't yet had a chance to test it over a slow link, whether with X11 or RDP or VNC, but it worked in a LAN just fine. Someone in the IRC channel on FreeNode assures me that it runs flawlessly over a 256 kilobit link. + +Then, for some reason I really don't remember, I decided I wanted to run all three servers at once on the same computer. As far as I know, all of the NX clients log in to the server initially by passing a private key for user '**nx**'. The server then runs the login shell set in **/etc/passwd** for **nx** - so I guess that shell determines which NX server handles the session. + +So, amidst a large pile of bad ideas, I finally came up with this workable idea for making the servers coexist: I would set the login shell to a wrapper script which would choose the NX server to then run. The only data I could think of that the NX client could pass to the server were the port number and the private key, and this wrapper script would somehow have to get this data. + +Utilizing the port number would probably involve hacking around with custom firewall rules or starting multiple SSH servers, so I opted to avoid this method. It turns out if you set `LogLevel` to `VERBOSE` in sshd_config (at least in my version), it'll have lines like this after every login from the NX client: +` +Oct 14 18:11:33 idiotbox sshd[15681]: Found matching DSA key: fd:e9:5d:24:59:3c:3c:35:c5:29:74:ef:6d:92:3c:e4 +` +You can get that key fingerprint with `ssh-keygen -lf foo.pub` where foo.pub is the public key. + +So I generated 3 keys (one each for neatX, NoMachine's server, and FreeNX), added them all to **authorized_keys**, found the fingerprints, and ended up with a script that was something like this: +{% highlight bash %} +#!/bin/sh +FINGERPRINT=$(grep "Found matching RSA key" /var/log/secure | + tail -n 1 | egrep -o "(..:){15}..") +if [ "$FINGERPRINT" == "26:dd:67:82:c1:2d:cc:c0:c6:13:ac:d4:49:0e:79:a3" ]; then + SERVER="/usr/local/lib/neatx/nxserver-login-wrapper" +elif [ "$FINGERPRINT" == "35:fb:bd:45:c5:71:91:ce:d6:d9:7f:0b:dc:84:f4:b3" ]; then + SERVER="/usr/NX/bin/nxserver" +elif [ "$FINGERPRINT" == "b5:d7:a5:18:0d:c4:fa:18:19:58:20:00:1d:3b:3c:84" ]; then + SERVER="/usr/libexec/nx/nxserver" +fi +$SERVER +{% endhighlight %} + +I saved this someplace, set it executable, and set the login shell for **nx** in **/etc/passwd** to point to it. Make sure the home directory points someplace sensible too, as the install scripts for some NX servers are liable to point it somewhere else. But as far as I can tell, the only thing they use the home directories for is the **.ssh** directory, and all the other data they save is in locations that do not conflict. So I copied the three public keys to the client and manually did `ssh -i blah.key nx@whatever` on each key. 
+ + chris@momentum:~$ ssh -i freenx-key nx@10.1.1.40 + HELLO NXSERVER - Version 3.2.0-74-SVN OS (GPL, using backend: 3.3.0) + NX> 105 + chris@momentum:~$ ssh -i neatx-key nx@10.1.1.40 + HELLO NXSERVER - Version 3.3.0 - GPL + NX> 105 + chris@momentum:~$ ssh -i nomachine-key nx@10.1.1.40 + HELLO NXSERVER - Version 3.4.0-8 - LFE + NX> 105 + +The different versions in each reply were a good sign, so I tried the same keys in the client, and stuff indeed worked (at least according to my totally non-rigorous testing). Time will tell whether or not I completely overlooked some important details or interference. diff --git a/posts/2010-07-04-processing-dla-quadtrees.md b/posts/2010-07-04-processing-dla-quadtrees.md new file mode 100644 index 0000000..99b11e4 --- /dev/null +++ b/posts/2010-07-04-processing-dla-quadtrees.md @@ -0,0 +1,21 @@ +--- +layout: post +title: ! 'Processing: DLA, quadtrees' +tags: processing +status: publish +type: post +published: true +--- +I first dabbled with [Diffusion-Limited Aggregation](http://en.wikipedia.org/wiki/Diffusion-limited_aggregation) algorithms some 5 years back when I read about them in a book. The version I wrote was monumentally slow because it was a crappy implementation in a slow language for heavy computations (i.e. Python), but it worked well enough to create some good results like this: + +dla2c + +After about 3 or 4 failed attempts to optimize this program to not take days to generate images, I finally rewrote it reasonably successfully in [Processing](http://processing.org/) which I've taken a great liking to recently. I say "reasonably successfully" because it still has some bugs and because I can't seem to tune it to produce lightning-like images like this one, just much more dense ones. Annoyingly, I did not keep any notes about how I made this image, so I have only a vague idea. It was from the summer of 2005 in which I coded eleventy billion really cool little generative art programs, but took very sparse notes about how I made them. + +It was only a few hours of coding total. Part of why I like Processing is the triviality of adding interactivity to something, which I did repeatedly in order to test that the various building-blocks of the DLA implementation were working properly. + +The actual DLA applet is at [http://openprocessing.org/visuals/?visualID=10799](http://openprocessing.org/visuals/?visualID=10799). Click around inside it; right-click to reset it. The various building blocks that were put together to make this are: [here](http://openprocessing.org/visuals/?visualID=10794), [here](http://openprocessing.org/visuals/?visualID=10795), [here](http://openprocessing.org/visuals/?visualID=10796), [here](http://openprocessing.org/visuals/?visualID=10797), and [here](http://openprocessing.org/visuals/?visualID=10798). + +These are at OpenProcessing mostly because I don't know how to embed a Processing applet in Wordpress; perhaps it's better that I don't, since this one is a CPU hog. 
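+
+For anyone who hasn't run into DLA before, the core loop is small enough to sketch out directly. A real implementation spends nearly all of its effort on answering "is this walker near the cluster yet?" quickly - which is where something like a quadtree earns its keep - but a brute-force, on-grid version in plain C (an illustration only, not the code behind the applet above) looks roughly like this:
+
+{% highlight c %}
+/* Bare-bones on-grid DLA: seed one stuck cell in the middle, release
+ * random walkers, and freeze each walker as soon as it wanders next to
+ * something that is already stuck. */
+#include <stdio.h>
+#include <stdlib.h>
+
+#define N 201          /* grid size (odd, so there is a center cell) */
+#define PARTICLES 1000 /* how many walkers to aggregate */
+
+static unsigned char grid[N][N];
+
+/* Does (x, y) touch the aggregate? Checks the 8 surrounding cells. */
+static int near_cluster(int x, int y)
+{
+    for (int dy = -1; dy <= 1; dy++)
+        for (int dx = -1; dx <= 1; dx++) {
+            int nx = x + dx, ny = y + dy;
+            if (nx >= 0 && nx < N && ny >= 0 && ny < N && grid[ny][nx])
+                return 1;
+        }
+    return 0;
+}
+
+int main(void)
+{
+    grid[N / 2][N / 2] = 1;                   /* the seed */
+    for (int p = 0; p < PARTICLES; p++) {
+        int x = rand() % N, y = rand() % N;   /* drop a walker anywhere */
+        while (!near_cluster(x, y)) {
+            x += rand() % 3 - 1;              /* one random step */
+            y += rand() % 3 - 1;
+            if (x < 0 || x >= N || y < 0 || y >= N) {
+                x = rand() % N;               /* wandered off; respawn */
+                y = rand() % N;
+            }
+        }
+        grid[y][x] = 1;                       /* stick it to the cluster */
+    }
+    /* Dump the aggregate as a PBM image on stdout. */
+    printf("P1\n%d %d\n", N, N);
+    for (int y = 0; y < N; y++) {
+        for (int x = 0; x < N; x++)
+            putchar(grid[y][x] ? '1' : '0');
+        putchar('\n');
+    }
+    return 0;
+}
+{% endhighlight %}
+
+Something like `gcc -std=c99 -O2 dla.c && ./a.out > dla.pbm` builds and runs it. It's deliberately brute-force; making this sort of thing not take forever at useful image sizes is exactly the optimization problem mentioned above.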
+ +This blog also has an entire gallery of generative art with Processing that I think is great: [http://myecurve.wordpress.com/](http://myecurve.wordpress.com/) diff --git a/posts/2011-02-07-blender-from-a-recovering-pov-ray-user.md b/posts/2011-02-07-blender-from-a-recovering-pov-ray-user.md new file mode 100644 index 0000000..ae0fb8c --- /dev/null +++ b/posts/2011-02-07-blender-from-a-recovering-pov-ray-user.md @@ -0,0 +1,29 @@ +--- +layout: post +title: Blender from a recovering POV-Ray user +tags: CG +status: publish +type: post +published: true +--- +This is about the tenth time I've tried to learn [Blender](http://www.blender.org/). Judging by the notes I've accumulated so far, I've been at it this time for about a month and a half. From what I remember, what spurred me to try this time was either known-Blender-guru Craig from [Hive13](http://www.hive13.org/) mentioning [Voodoo Camera Tracker](http://www.digilab.uni-hannover.de/docs/manual.html) (which can output to a Blender-readable format), or my search for something that would make it easier to do the 2D visualizations and algorithmic art I always end up doing (and I heard Blender had some crazy node-based texturing system...). + +Having a goal for what I want to render has been working out much better than just trying to learn the program and hope the inspiration falls into place (which is what all of my previous attempts appear to have involved). This really has nothing to do with Blender specifically; it applies to anything that is suitably complex and powerful. I have just had this dumb tendency in the past few years to try to learn all of the little details of a system without first having a motivation to use them, despite this being completely at odds with nearly all things I consider myself to have learned well. I'm seeing pretty clearly how that approach is rather backwards, for me at least. + +I took a lot of notes early on where I tried to map out a lot of its features at a very high level, but most of this simply didn't matter - what mattered mostly fell into place when I actually tried to make something in Blender. However, knowing some of the fundamental limitations and capabilities did help. + +The interface is quirky for sure, but I am finding it to be pretty intuitive after some practice. Most of my issues came from the big UI overhaul after 2.4, as I'm currently using 2.55/2.56 but many of the tutorials refer to the old version, and even official documentation for 2.5 is sometimes nonexistent - but can I really complain? They pretty clearly note that it is still in beta. + +However, I'm starting to make sense of it. Visions and concepts that I previously felt I had no idea how to even approach in Blender suddenly are starting to feel easy or at least straightforward (what I'm talking about more specifically here is how many things became trivial once I knew my way around Bezier splines). This is good, because I've got pages and pages of ideas waiting to be made. Some look like they'll be more suited to [Processing](http://processing.org/) (like the 2nd image down below) or [OpenFrameworks](http://www.openframeworks.cc/) or one of the too-many-completely-different-versions of Acidity I wrote. + +![What I learned Bezier splines on, and didn't learn enough about texturing.]({{ site.baseurl }}/assets/hive13-bezier03.png) + +![This was made directly from some equations. 
I don't know how I'd do this in Blender.]({{ site.baseurl }}/assets/20110118-sketch_mj2011016e.jpg) + +[POV-Ray](http://www.povray.org) was the last program that I 3D-rendered extensively in (this was mostly 2004-2005, as my much-neglected [DeviantArt](http://mershell.deviantart.com/) shows, and it probably stress-tested the Athlon64 in the first new machine I built more than any other program did). It's quite different from Blender in most ways possible. POV-Ray makes it easy to do clean, elegant, mathematical things, many of which would be either impossible or extremely ugly in Blender. It's a raytracer; it deals with neat, clean analytic surfaces, and tons of other things come for free (speed is not one of them). However, I never really found a modeler for POV-Ray that could integrate well with the full spectrum of features the language offered, and a lot of things just felt really kludgey. Seeing almost no progress made to the program, and being too lazy to look into [MegaPOV](http://megapov.inetart.net/), I decided to give up on it at some point. My attempts to learn something that implemented RenderMan resulted mostly in me seeing how ingeniously optimized and streamlined RenderMan is and not actually making anything in it. + +Blender feels really "impure" in comparison. It deals with ugly things like triangle meshes and scanline rendering... ugly things that make it vastly more efficient to accomplish many tasks. I'm quickly finding better replacements for a lot of the techniques I relied on with POV-Ray. For instance, for many repetitive or recursive structures, I would rely on some simple looping or recursion in POV-Ray (as its scene language was Turing-complete); this worked fairly well, but it also meant that no modeler I tried would be able to grok the scene. In Blender, I discovered the Array modifier; while it's much simpler, it is still very powerful. On top of this, I have the interactivity of the modeler still present. I've built some things interactively with all the precision that I would have had in POV-Ray, but I built them in probably 1/10 the time. That's the case for the two work-in-progress Blender images here: + +![This needs a name and a better background.]({{ site.baseurl }}/assets/20110131-mj20110114b.jpg) + +![This needs a name and a better background.]({{ site.baseurl }}/assets/20110205-mj20110202-starburst2.jpg) diff --git a/posts/2011-06-10-i-can-never-win-that-context-back.md b/posts/2011-06-10-i-can-never-win-that-context-back.md new file mode 100644 index 0000000..7cad140 --- /dev/null +++ b/posts/2011-06-10-i-can-never-win-that-context-back.md @@ -0,0 +1,27 @@ +--- +layout: post +title: I can never win that context back +tags: Journal, rant +status: publish +type: post +published: true +--- +I stumbled upon this: [http://www.soyoucode.com/2011/coding-giant-under-microscope-farbrausch](http://www.soyoucode.com/2011/coding-giant-under-microscope-farbrausch) . . . and promptly fell in love with the demos there from Farbrausch: + +[.the .product](http://www.youtube.com/watch?v=3ydAHt78v2M) + +[.debris](http://www.youtube.com/watch?v=rBNZ9JiFCKU) + +[.kkrieger](http://www.youtube.com/watch?v=3aV1kzS5FtA) + +[Magellan](http://www.youtube.com/watch?v=00SdDZyWSEs) + +That melding of music and animated 3D graphics grabs a hold of me like nothing else. I don't really know why. + +The fact that it's done in such a small space (e.g. 64 KB for the first one) makes it more impressive, of course. 
Maybe that should be a sad reflection on just how formulaic the things I like are, if they're encoded that small (although, that ignores just how much is present in addition, in the CPU and the GPU and the OS and the drivers and in the design of the computer), but I don't much care - formulas encode patterns of sorts, and we're pattern-matching machines. + +But leaving aside the huge programming feat of making all this fit in such a small space, I still find it really impressive. + +It's been a goal for awhile to make something that is on the scope of that (highly-compressed demo or not, I don't much care). I've just not made much progress to accomplishing that. My early attempts at Acidity were motivated by the same feelings that draw me to things like this. + +(Obligatory [Second Reality](http://www.youtube.com/watch?v=8G_aUxbbqWU) as well. Maybe I am putting myself too much in the context that it came from - i.e. 1993 and rather slow DOS machines - but I still think it's damn impressive. Incidentally, this is also one of the only ones I've run on real hardware before, since apparently the only fast machine I have that runs Windows is my work computer.) diff --git a/posts/2011-06-13-openframeworks-try-1.md b/posts/2011-06-13-openframeworks-try-1.md new file mode 100644 index 0000000..92694e2 --- /dev/null +++ b/posts/2011-06-13-openframeworks-try-1.md @@ -0,0 +1,43 @@ +--- +layout: post +title: OpenFrameworks, try 1... +tags: rant, Technobabble +status: publish +type: post +published: true +--- +My attempts at doing things with OpenFrameworks on MacOS X have been mildly disastrous. This is a bit of a shame, because I was really starting to like OpenFrameworks and it was not tough to pick up after being familiar with Processing. + +I'm pretty new to XCode, but it's the "official" environment for OpenFrameworks on OS X, so it's the first thing I tried. The first few attempts at things (whether built-in examples, or my own code) went just fine, but today I started trying some things that were a little more complex - i.e. saving the last 30 frames from the camera and using them for some filtering operations. My code probably had some mistakes in it, I'm sure, and that's to be expected. The part where things became incredibly stupid was somewhere around when the mistakes caused the combination of XCode, GDB, and OpenFrameworks to hose the system in various ways. + +First, it was the Dock taking between 15 and 30 seconds to respond just so I could force-quit the application. Then it was the debugger taking several seconds to do 100 iterations of a loop that had nothing more than an array member assignment inside of it (and it had 640x480x3 = 921,600 iterations total) if I tried to set breakpoints, thus basically making interactive debugging impossible. The debugging was already a pain in the ass; I had reduced some code down to something like this: +{% highlight c %} +int size = cam_width * cam_height * 3; +for(int i = 0; i < frame_count; ++i) { + unsigned char * blah = new unsigned char[size]; + for(int j = 0; j < size; ++j) blah[j] = 0; +} +{% endhighlight %} + +...after a nearly identical `memset` call was smashing the stack and setting `frame_count` to a value in the billions, so I was really getting quite frazzled at this. 
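+
+For anyone who hasn't had the pleasure, the failure mode I mean is the boring, classic one: a memset whose length doesn't agree with the buffer it's aimed at, so it tramples whatever happens to live next to that buffer. A contrived, self-contained illustration (emphatically not the actual OpenFrameworks code - the buffers there were full 640x480x3 camera frames, which makes the blast radius far bigger) is something like:
+
+{% highlight c %}
+/* Contrived sketch of a memset length mismatch clobbering a neighboring
+ * local variable. This is undefined behavior, so the visible result - a
+ * trashed frame_count, a "stack smashing detected" abort, a crash, or
+ * nothing at all - depends entirely on the compiler and stack layout,
+ * which is exactly what makes it miserable to chase in a debugger. */
+#include <stdio.h>
+#include <string.h>
+
+int main(void)
+{
+    int frame_count = 30;       /* the value we expect to stay put */
+    unsigned char frame[64];    /* supposedly big enough... */
+    int size = 256;             /* ...but the length was computed from the wrong thing */
+
+    memset(frame, 0, size);     /* writes 192 bytes past the end of 'frame' */
+
+    printf("frame_count = %d\n", frame_count);
+    return 0;
+}
+{% endhighlight %}
+
+Nothing about this is specific to OpenFrameworks or XCode; it's just what happens when the length argument and the allocation drift apart.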
+ +Running it a few minutes ago without breakpoints enabled led to a bunch of extreme sluggishness, then flickering and flashing on the monitor, and I was not able to interact with anything in the GUI (which was the 3rd or 4th time this had happened today, with all the Code::Blocks nonsense below). I SSHed in from another machine and killed XCode, but the monitor just continued to show the same image, and it appeared that the GUI was completely unresponsive except for a mouse cursor. I had to hold the power button to reboot, and saw this in the Console but nothing else clear before it: + +    6/13/11 1:11:19 AM [0x0-0x24024].com.google.Chrome[295] [463:24587:11560062687119:ERROR:gpu_watchdog_thread.cc(236)] The GPU process hung. Terminating after 10000 ms. + +A little before trying XCode for a 2nd time, I had also attempted to set up Code::Blocks, since it's OpenFrameworks' "official" IDE for Linux and Windows, and XCode was clearly having problems. First I painstakingly built it from an SVN copy and finally got it to run (I had to disable the FileManager and NassiShneiderman plugins, which would not build, and make sure it was building for the same architecture as wxWidgets was built for). As soon as I tried to quit it, the Dock became totally unresponsive, then Finder itself followed, along with the menu bar for the whole system. I was not able to SSH in. Despite the rest of the system being mostly responsive, I had to hard reset. I found a few things in the console: + +    6/12/11 9:43:54 PM com.apple.launchd[1] (com.apple.coreservicesd[66]) Job appears to have crashed: Segmentation fault +    6/12/11 9:43:54 PM com.apple.audio.coreaudiod[163] coreaudiod: CarbonCore.framework: coreservicesd process died; attempting to reconnect but future use may result in erroneous behavior +    6/12/11 9:43:55 PM com.apple.ReportCrash.Root[18181] 2011-06-12 21:43:55.011 ReportCrash[18181:2803] Saved crash report for coreservicesd[66] version ??? (???) to /Library/Logs/DiagnosticReports/coreservicesd_2011-06-12-214355_localhost.crash +    6/12/11 9:44:26 PM com.apple.Dock.agent[173] Sun Jun 12 21:44:26 hodapple2.local Dock[173] Error: kCGErrorIllegalArgument: CGSSetWindowTransformsAtPlacement: Singular matrix at index 2: [0.000 0.000 0.000 0.000] + +It started up properly after a reset, but I couldn't do anything useful with it because, despite there being a script that was supposed to take care of this while building the bundle, the application was not able to see any of its plugins, which included a compiler plugin. I attempted a binary OS X release which had a functioning set of plugins, but it was missing other dependencies set in the projects, which were Linux-specific. I could probably put together a working configuration if I worked in Code::Blocks a bit, but I have not tried yet. + +This is all incredibly annoying. There is no reason a user process should be capable of taking down the whole system like this, especially inside of a debugger, yet apparently it's pretty trivial to make this happen. I've written more than enough horrible code in various different environments (CUDA-GDB on a Tesla C1060, perhaps?) to know what to expect. I guess I can try developing on Linux instead, and/or using Processing. I know it's not quite the same, but I've never had a Processing sketch hose the whole system, at least. 
+ +*Later addition (2011-06-20, but not written here until November because I'd buried the notes somewhere):* + +I attempted to make an OpenFrameworks project built with Qt Creator (which of course uses [QMake](http://doc.qt.nokia.com/latest/qmake-manual.html)). OpenFrameworks relies on QuickTime, and as it happens, QuickTime is 32-bit only. If you take a look at some of the headers, the majority of it is just #ifdef'ed away if you try to build 64-bit, and this completely breaks the OpenFrameworks build. + +Ordinarily, this would not be an issue, as I would just do a 32-bit build of everything else too. However, QMake refuses to do a 32-bit build on OS X for some unknown reason (and, yes, I talked to some Qt devs about this). It'll gladly do it on most other platforms, but not on OS X. Now, GCC has no problems building 32-bit, but this does no good when QMake keeps adding `-arch x86_64` no matter what. I attempted all sorts of options such as `CONFIG += x86`, `CONFIG -= x86_64`, `QMAKE_CXXFLAGS -= -arch x86_64`, or `+= -m32`, or `+= -arch i386`... but all to no avail. diff --git a/posts/2011-07-15-my-experiences-with-apache-axis2c.md b/posts/2011-07-15-my-experiences-with-apache-axis2c.md new file mode 100644 index 0000000..3de597b --- /dev/null +++ b/posts/2011-07-15-my-experiences-with-apache-axis2c.md @@ -0,0 +1,34 @@ +--- +layout: post +title: My experiences with Apache Axis2/C +tags: Project, rant, Technobabble +status: publish +type: post +published: true +--- +(This is an abridged version of a report I did at my job; I might post a copy of it once I remove anything that might be considered proprietary.) + +I was tasked at my job with looking at ways of doing web services in our main application (which for an upcoming delivery is to be separated out into client and server portions). Said application is written primarily in C++, so naturally our first look was into frameworks written for C or C++ so that we would not need to bother with language bindings, foreign function interfaces, porting, new runtimes, or anything of the sort. + +Our search led us to [Apache Axis2/C](http://axis.apache.org/axis2/c/core/). We'd examined this last year at a basic level and found that it looked suitable. Its primary intended purpose was as the framework that the client and server communicated over in order to transfer our various DTOs; that it worked over SOAP and handled most HTTP details (so it appeared) was a bonus. + +I discovered after investing considerable effort that we were quite wrong about Axis2/C. I'll enumerate a partial list of issues here: +- **Lack of support:** There was a distinct lack of good information online. I could find no real record of anyone using this framework in production anywhere. Mailing lists and message forums seemed nonexistent. I found a number of articles that were often pretty well-written, but almost invariably by WSO2 employees. +- **Development is largely stagnant:** The last update was in 2009. In and of itself this is not a critical issue, but combined with its extensive list of unsolved bugs and a very dense, undocumented code base, this is unacceptable. +- **Lack of documentation:** Some documentation is online, but the vast majority of the extensive API lacks any documentation, whether a formal reference or a set of examples. The most troubling aspect of this is that not even the developers of Axis2/C seemed to comprehend its memory management (and indeed our own tests showed some extensive memory leaks). 
+- **Large set of outstanding bugs:** When I encountered the bug-tracking website for Axis2/C (which I seem to have lost the link for), I discovered a multitude of troubling bugs. Most of them pertain to unfixed memory leaks (for code that will be running inside of a web server, this is really not good). On top of this, a 2-year-old unfixed bug had broken the functionality for binary MTOM transfers if you had enabled libcurl support, and this feature was rather essential to the application. +- **Necessity of repetitive code:** It lacked any production-ready means to automatically generate code for turning native C/C++ objects to and from SOAP. While it had WSDL2C, this still left considerable repetitive work for the programmer (in many cases causing more work rather than less), and its generated code was very ambiguous as to its memory-management habits. +- **Limited webserver support:** Axis2/C provided modules only for working with three web servers: Apache HTTPD, Microsoft IIS, and their built-in test server, *axis2_http_server*. Our intended target was Microsoft IIS, and the support for IIS was considerably less developed than the support for Apache HTTPD. To be honest, though, most of my woes came from Microsoft here - and the somewhat pathetic functionality for logging and configuration that IIS has. I'm sorry for anyone who loves IIS, but I should not be required to *manually search through a dump of Windows system calls* to determine that the reason for IIS silently failing is that I gave a 64-bit pool a 32-bit DLL, or that said DLL has unmet dependencies. Whether it's Axis2/C's fault or IIS's fault that the ISAPI DLL managed to either take IIS down or leave it in an indeterminate state no less than a hundred times doesn't much matter to me. *(However, on the upside, I did learn that [Process Monitor](http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) from Sysinternals can be very useful in cases where you have otherwise no real source of diagnostic information. This is not the first time I had to dump system calls to diagnose an Axis2/C problem.)* +- **Poor performance:** Even the examples provided in the Axis2/C source code itself had a tendency to fail to work properly. + - Their MTOM transfer example failed to work at all with Microsoft IIS and had horrid performance with Apache HTTPD. + - On top of this, the default configuration of Apache Axis2/C opens up a new TCP connection for every single request that is initiated. Each TCP connection, of course, occupies a port on the client side. On Windows, something like 240 seconds (by default) must pass upon that connection closing before the port may be reused; on Linux, it's 60 seconds. There are 16384 ports available for this purpose. Practical result of this: *A client with the default configuration of Axis2/C cannot sustain more than 68 requests per second on Windows (16384 ports / 240 seconds) or 273 requests per second on Linux (16384 / 60).* If you exceed that rate, it will simply start failing. How did I eventually figure this out? By reading documentation carefully? By looking at an API reference? By looking at comments in the source code? No, *by looking at a packet dump in Wireshark,* which pointed out to me the steadily increasing port numbers and flagged that ports were being reused unexpectedly. I later found out that I needed to compile Axis2/C with libcurl support and then it would use a persistent HTTP connection (and also completely break MTOM support because of that unfixed bug I mentioned). 
None of this was documented anywhere, unless a cryptic mailing-list message from years ago counts. + +So, I'm sorry, esteemed employees of [WSO2](http://wso2.org/), but to claim that Apache Axis2/C is enterprise ready is a horrid mockery of the term. + +This concluded about 2 weeks of work on the matter. In approximately 6 hours (and I'll add that my starting point was knowing nothing about the Java technologies), I had a nearly identical version using Java web services (JAX-WS particularly) that was performing on the order of twice as fast and with none of the issues with memory leaks or stability. + +P.S. Is it unique to Windows-related forums that the pattern of support frequently goes like this? +- Me: This software is messed up. It's not behaving as it should. +- Them: It's not messed up; it works for me. You are just too dumb to use it. Try pressing this button, and it will work. +- Me: Okay, I pressed it. It's not working. +- Them: Oh. Your software is messed up. You should fix it. diff --git a/posts/2011-08-27-isolated-pixel-pushing.md b/posts/2011-08-27-isolated-pixel-pushing.md new file mode 100644 index 0000000..574dda5 --- /dev/null +++ b/posts/2011-08-27-isolated-pixel-pushing.md @@ -0,0 +1,33 @@ +--- +layout: post +title: Isolated-pixel-pushing +tags: CG, Project, Technobabble +status: publish +type: post +published: true +--- +After finally deciding to look around for some projects on github, I found a number of very interesting ones in a matter of minutes. + +I found [Fragmentarium](http://syntopia.github.com/Fragmentarium/index.html) first. This program is like something I tried for years and years to write, but just never got around to putting in any real finished form. It can act as a simple testbench for GLSL fragment shaders, which I'd already realized could be used to do exactly what I was doing more slowly in [Processing](http://processing.org/), much more slowly in Python (stuff like [this](http://mershell.deviantart.com/gallery/#/dckzex) if we want to dig up things from 6 years ago), much more clunkily in C and [OpenFrameworks](http://www.openframeworks.cc/), and so on. It took me probably about 30 minutes to put together the code to generate the usual gawdy test algorithm I try when bootstrapping from a new environment: + +![Standard trippy image]({{ site.baseurl }}/assets/isolated_pixel_pushing/acidity-standard.png) + +(Yeah, it's gaudy. But when you see it animated, it's amazingly trippy and mesmerizing.) + +The use I'm talking about (and that I've reimplemented a dozen times) was just writing functions that map the 2D plane to some colorspace, often with some spatial continuity. Typically I'll have some other parameters in there that I'll bind to a time variable or some user control to animate things. So far I don't know any particular term that encompasses functions like this, but I know people have used it in different forms for a long while. It's the basis of procedural texturing (as pioneered in [An image synthesizer](http://portal.acm.org/citation.cfm?id=325247) by Ken Perlin) as implemented in countless different forms like [Nvidia Cg](http://developer.nvidia.com/cg-toolkit), GLSL, probably Renderman Shading Language, RTSL, POV-Ray's extensive texturing, and Blender's node texturing system (which I'm sure took after a dozen other similar systems). [Adobe Pixel Bender](http://www.adobe.com/devnet/pixelbender.html), which the Fragmentarium page introduced to me for the first time, does something pretty similar but to different ends. 
Some systems such as [Vvvv](http://www.vvvv.org/) and [Quartz Composer](http://developer.apple.com/graphicsimaging/quartz/quartzcomposer.html) probably permit some similar operations; I don't know for sure. + +The benefits of representing a texture (or whatever image) as an algorithm rather than a raster image are pretty well-known: It's a much smaller representation, it scales pretty well to 3 or more dimensions (particularly with noise functions like Perlin Noise or Simplex Noise), it can have a near-unlimited level of detail, it makes things like seams and antialiasing much less of an issue, it is almost the ideal case for parallel computation and modern graphics hardware has built-in support for it (e.g. GLSL, Cg, to some extent OpenCL). The drawback is that you usually have to find some way to represent this as a function in which each pixel or texel (or voxel?) is computed in isolation of all the others. This might be clumsy, it might be horrendously slow, or it might not have any good representation in this form. + +Also, once it's an algorithm, you can *parametrize it*. If you can make it render near realtime, then animation and realtime user control follow almost for free from this, but even without that, you still have a lot of flexibility when you can change parameters. + +The only thing different (and debatably so) that I'm doing is trying to make compositions with just the functions themselves rather than using them as means to a different end, like video processing effects or texturing in a 3D scene. It also fascinated me to see these same functions animated in realtime. + +However, the author of Fragmentarium (Mikael Hvidtfeldt Christensen) is doing much more interesting things with the program (i.e. rendering 3D fractals with distance estimation) than I would ever have considered doing. It makes sense why - his emerged more from the context of fractals and ray tracers on the GPU, like [Amazing Boxplorer](http://sourceforge.net/projects/boxplorer/), and fractals tend to make for very interesting results. + +His [Syntopia Blog](http://blog.hvidtfeldts.net/) has some fascinating material and beautiful renders on it. His posts on [Distance Estimated 3D Fractals](http://blog.hvidtfeldts.net/index.php/2011/08/distance-estimated-3d-fractals-iii-folding-space/) were particularly fascinating to me - in part because this was the first time I had encountered the technique of distance estimation for rendering a scene. He gave a good introduction with lots of other material to refer to. + +Distance Estimation blows my mind a little when I try to understand it. I have a decent high-level understanding of ray tracing, but this is not ray tracing, it's ray marching. It lets complexity be emergent rather than needing an explicit representation as a scanline renderer or ray tracer might require (while ray tracers will gladly take a functional representation of many geometric primitives, I have encountered very few cases where something like a complex fractal or an isosurface could be rendered without first approximating it as a mesh or some other shape, sometimes at great cost). Part 1 of Mikael's series on Distance Estimated 3D Fractals links to [these slides](http://www.iquilezles.org/www/material/nvscene2008/rwwtt.pdf) which show a 4K demo built piece-by-piece using distance estimation to render a pretty complex scene. + +*(Later addition: [This link](http://www.mazapan.se/news/2010/07/15/gpu-ray-marching-with-distance-fields/) covers ray marching for some less fractalian uses. 
"Hypertexture" by Ken Perlin gives some useful information too, more technical in nature; finding this paper is up to you. Consult your favorite university?)* + +He has another rather different program called [Structure Synth](http://structuresynth.sourceforge.net/) which he made following the same "design grammar" approach of [Context Free](http://www.contextfreeart.org/). I haven't used Structure Synth yet, because Context Free was also new to me and I was first spending some time learning to use that. I'll cover this in another post. diff --git a/posts/2011-08-29-context-free.md b/posts/2011-08-29-context-free.md new file mode 100644 index 0000000..1522510 --- /dev/null +++ b/posts/2011-08-29-context-free.md @@ -0,0 +1,93 @@ +--- +layout: post +title: Context Free +tags: CG, Project, Technobabble +status: publish +type: post +published: true +--- +My [last post](http://hodapple.com/blag/isolated-pixel-pushing/) mentioned a program called [Context Free](http://www.contextfreeart.org/) that I came across via the [Syntopia](http://blog.hvidtfeldts.net/) blog as his program [Structure Synth](http://structuresynth.sourceforge.net/) was modeled after it. + +I've heard of [context-free grammars](http://en.wikipedia.org/wiki/Context-free_grammar) before but my understanding of them is pretty vague. This program is based around them and the documentation expresses their [limitations](http://www.contextfreeart.org/mediawiki/index.php/Context_Free_cans_and_cannots); what I grasped from this is that no entity can have any "awareness" of the context in which it's drawn, i.e. any part of the rest of the scene or even where in the scene it is. A perusal of the site's [gallery](http://www.contextfreeart.org/gallery/) shows how much those limitations don't really matter. + +I downloaded the program, started it, and their welcome image (with the relatively short source code right beside it) greeted me, rendered on-the-spot: + + + +The program was very easy to work with. Their quick reference card was terse but only needed a handful of examples and a few pages of documentation to fill in the gaps. After about 15 minutes, I'd put together this: + + + +Sure, it's mathematical and simple, but I think being able to put it together in 15 minutes in a general program (i.e. not a silly ad-hoc program) that I didn't know how to use shows its potential pretty well. The source is this: + +{% highlight bash %} +startshape MAIN +background { b -1 } +rule MAIN { + TRAIL { } +} +rule TRAIL { + 20 * { r 11 a -0.6 s 0.8 } COLORED { } +} +rule COLORED { + BASE { b 0.75 sat 0.1 } +} +rule BASE { + SQUARE1 { } + SQUARE1 { r 90 } + SQUARE1 { r 180 } + SQUARE1 { r 270 } +} +rule SQUARE1 { + SQUARE { } + SQUARE1 { h 2 sat 0.3 x 0.93 y 0.93 r 10 s 0.93 } +} +{% endhighlight %} + +I worked with it some more the next day and had some things like this: + + + +I'm not sure what it is. It looks sort of like a tree made of lightning. Some Hive13 people said it looks like a lockpick from hell. The source is some variant of this: + +{% highlight bash %} +startshape MAIN +background { b -1 } +rule MAIN { + BRANCH { r 180 } +} +rule BRANCH 0.25 { + box { } + BRANCH { y -1 s 0.9 } +} +rule BRANCH 0.25{ + box { } + BRANCH { y -1 s 0.3 } + BRANCH { y -1 s 0.7 r 52 } +} +rule BRANCH 0.25 { + box { } + BRANCH { y -1 s 0.3 } + BRANCH { y -1 s 0.7 r -55 } +} +path box { + LINEREL{x 0 y -1} + STROKE{p roundcap b 1 } +} +{% endhighlight %} + +The program is very elegant in its simplicity. At the same time, it's a really powerful program. 
Translating something written in Context Free into another programming language would in most cases not be difficult at all - you need just a handful of 2D drawing primitives, a couple of basic operations for color space and geometry, and the ability to recurse (and to stop recursing when it's pointless). But that representation, though it might be capable of a lot of things that Context Free can't do on its own, probably would be a lot clumsier. + +This is basically what some of my OpenFrameworks sketches were doing in a much less disciplined way (although with the benefit of animation and GPU-accelerated primitives), but I didn't realize that what I was doing could be expressed so easily and so compactly in a context-free grammar. + +It's appealing, though, in the same way as the functions discussed in the last post (i.e. those for procedural texturing). It's a similarly compact representation of an image - this time, a vector image rather than a spatially continuous image, which has some benefits of its own. It's an algorithm - so now it can be parametrized. (Want to see one reason why parametrized vector things are awesome? Look at [Magic Box](http://magic-box.org/).) And once it's parametrized, animation and realtime user control are not far away, provided you can render quickly enough. + +*(And as [@codersandy](http://twitter.com/#!/codersandy/statuses/108180159194079232) observed after reading this, [POV-Ray](http://www.povray.org/) is in much the same category too. I'm not sure if he meant it in the same way I do, but POV-Ray's scene language is fully Turing-complete and it permits you to generate your whole scene procedurally if you wish, which is great - but Context Free is indeed far simpler than this, besides only being 2D. It will be interesting to see how Structure Synth compares, given that it generates 3D scenes and has a built-in raytracer.)* + +My next step is probably to play around with [Structure Synth](http://structuresynth.sourceforge.net/) (and like Fragmentarium it's built with Qt, a library I actually am familiar with). I also might try to create a JavaScript implementation of Context Free and conquer my total ignorance of all things JavaScript. Perhaps a realtime OpenFrameworks version is in the works too, considering this is a wheel I already tried to reinvent once (and badly) in OpenFrameworks. + +Also in the queue to look at: +* [NodeBox](http://nodebox.net/code/index.php/Home), "a Mac OS X application that lets you create 2D visuals (static, animated or interactive) using Python programming code..." +* [jsfiddle](http://jsfiddle.net/), a sort of JavaScript/HTML/CSS sandbox for testing. (anarkavre showed me a neat sketch he put together [here](http://jsfiddle.net/anarkavre/qVVuD/)) +* [Paper.js](http://paperjs.org/), "an open source vector graphics scripting framework that runs on top of the HTML5 Canvas." +* Reading [generative art](http://www.manning.com/pearson/) by Matt Pearson, which I just picked up on a whim. diff --git a/posts/2011-11-13-qmake-hackery-dependencies-external-preprocessing.md b/posts/2011-11-13-qmake-hackery-dependencies-external-preprocessing.md new file mode 100644 index 0000000..f82af7f --- /dev/null +++ b/posts/2011-11-13-qmake-hackery-dependencies-external-preprocessing.md @@ -0,0 +1,244 @@ +--- +layout: post +title: ! 'QMake hackery: Dependencies & external preprocessing' +tags: Project, Technobabble +status: publish +type: post +published: true +--- +* TODO: Put the code here into a Gist? 
+ +[Qt Creator](http://qt-project.org/wiki/Category:Tools::QtCreator) is a favorite IDE of mine for when I have to deal with miserably large C++ projects. At my job I ported a build in Visual Studio of one such large project over to Qt Creator so that builds and development could be done on OS X and Linux, and in the process, learned a good deal about [QMake](http://doc.qt.nokia.com/latest/qmake-manual.html) and how to make it do some unexpected things. + +While I find Qt Creator to be a vastly cleaner, lighter IDE than Visual Studio, and find QMake to be a far more straightforward build system for the majority of things than Visual Studio's build system, some things the build needed were very tricky to set up in QMake. The two main shortcomings I ran into were: +* Managing dependencies between projects, as building the application in question involved building 40-50 separate subprojects as libraries, many of which depended on each other. +* Having external build events, as the application also had to call an external tool (no, not **moc**, this is different) to generate some source files and headers from a series of templates. + +QMake, as it happens, has some commands that actually make the project files Turing-complete, albeit in a rather ugly way. The **eval** command is the main source of this, and I made heavy use of it. + +First is the dependency management system. It's a little large, but I'm including it inline here. + +{% highlight bash %} +# This file is meant to be included in from other project files, but it needs +# a particular context: +# (1) Make sure that the variable TEMPLATE is set to: subdirs, lib, or app. +# Your project file really should be doing this anyway. +# (2) Set DEPENDS to a list of dependencies that must be linked in. +# (3) Set DEPENDS_NOLINK to a list of dependencies from which headers are +# needed, but which are not linked in. (Doesn't matter for 'subdirs' +# template) +# (4) Make sure BASEDIR is set. +# +# This script may modify SUBDIRS, INCLUDEPATH, and LIBS. It should always add, +# not replace. +# It will halt execution if BASEDIR or TEMPLATE are not set, or if DEPENDS or +# DEPENDS_NOLINK reference something not defined in the table. +# +# Order does matter in DEPENDS for the "subdirs" template. Items which come +# first should satisfy dependencies for items that come later. +# You'll often see: +# include ($$(BASEDIR)/qmakeDefault.pri) +# which includes this file automatically. +# +# -CMH 2011-06 + +# ---------------------------------------------------------------------------- +# Messages and sanity checks +# ---------------------------------------------------------------------------- +message("Included Dependencies.pro!") +message("Dependencies: " $$DEPENDS) +message("Dependencies (INCLUDEPATH only): " $$DEPENDS_NOLINK) +#message("TEMPLATE is: " $$TEMPLATE) + +isEmpty(BASEDIR) { + error("BASEDIR variable is empty here. Make sure it is set!") +} +isEmpty(TEMPLATE) { + error("TEMPLATE variable is empty here. Make sure it is set!") +} + +# ---------------------------------------------------------------------------- +# Table of project locations +# ---------------------------------------------------------------------------- + +# Some common locations, here only to shorten descriptions in the _PROJ table. 
+_PROJECT1 = $$BASEDIR/SomeProject +_PROJECT2 = $$BASEDIR/SomeOtherProject +_DEPENDENCY = $$BASEDIR/SomeDependency + +# Table of project file locations +# (Include paths are also generated based off of these) +_PROJ.FooLib = $$_PROJECT1/Libs/FooLib +_PROJ.BarLib = $$_PROJECT1/Libs/BarLib +_PROJ.OtherStuff = $$_PROJECT2/Libs/BarLib +_PROJ.MoreStuff = $$_PROJECT2/Libs/BarLib +_PROJ.ExternalLib = $$BASEDIR/SomeLibrary + +# ---------------------------------------------------------------------------- +# Iterate over dependencies and update variables, as appropriate for the given +# template type +# ---------------------------------------------------------------------------- + +# _valid is a flag telling whether TEMPLATE has matched anything yet +_valid = false + +contains(TEMPLATE, "subdirs") { + for(dependency, DEPENDS) { + # Look for an item like: _PROJ.(dependency) + + # Disclaimer: I wrote this and it works. I have no idea why precisely + # why it works. However, I repeat the pattern several times. + eval(_dep = $$"_PROJ.$${dependency}") + isEmpty(_dep) { + error("Unknown dependency " $${dependency} "!") + } + + # If that looks okay, then update SUBDIRS. + eval(SUBDIRS += $$"_PROJ.$${dependency}") + } + message("Setting SUBDIRS=" $$SUBDIRS) + _valid = true +} + +contains(TEMPLATE, "app") | contains(TEMPLATE, "lib") { + # Loop over every dependency listed in DEPENDS. + for(dependency, DEPENDS) { + # Look for an item like: _PROJ.(dependency) + eval(_dep = $$"_PROJ.$${dependency}") + isEmpty(_dep) { + error("Unknown dependency " $${dependency} "!") + } + + # If that looks okay, then update both INCLUDEPATH and LIBS. + eval(INCLUDEPATH += $$"_PROJ.$${dependency}"/include) + eval(LIBS += -l$${dependency}$${LIBSUFFIX}) + } + for(dependency, DEPENDS_NOLINK) { + # Look for an item like: _PROJ.(dependency) + eval(_dep = $$"_PROJ.$${dependency}") + isEmpty(_dep) { + error("Unknown dependency " $${dependency} "!") + } + + # If that looks okay, then update INCLUDEPATH. + eval(INCLUDEPATH += $$"_PROJ.$${dependency}"/include) + } + #message("Setting INCLUDEPATH=" $$INCLUDEPATH) + #message("Setting LIBS=" $$LIBS) + _valid = true +} + +# If no template type has matched, throw an error. +contains(_valid, "false") { + error("Don't recognize template type: " $${TEMPLATE}) +} +{% endhighlight %} + +It's been sanitized heavily to remove all sorts of details from the huge project it was taken from. Mostly, you need to add your dependent projects into the "Table of Project Locations" section, and perhaps make another file that set up the necessary variables mentioned at the top. Then set the **DEPENDS** variable to a list of project names, and then include this QMake file from all of your individual projects (it may be necessary to include it pretty close to the top of the file). + +In general, in this large application, each sub-project had two project files: +* One with **TEMPLATE = lib** (a few were **app** instead as well). This is the project file that is included in as a dependency from any project that has **TEMPLATE = subdirs**, and this project file makes use of the QMake monstrosity above to set up the include and library paths for any dependencies. +* One with **TEMPLATE = subdirs**. The same QMake monstrosity is used here to include in the project files (of the sort in #1) of dependencies so that they are built in the first place, and permit you to build the sub-project standalone if needed. 
+ +...and both are needed if you want to be able to build sub-project independently and without making to take care of dependencies individually. + +The next project down below sort of shows the use of that QMake monstrosity above, though in a semi-useless sanitized form. Its purpose is to show another system, but I'll explain that below it. + +{% highlight bash %} +QT -= gui +QT -= core +TEMPLATE = lib + +## Include our qmake defaults +DEPENDS = FooLib BarLib +include ($$(BASEDIR)/qmakeDefault.pri) + +TARGET = Project$${LIBSUFFIX} +LIBS += -llua5.1 -lrt -lLua$${LIBSUFFIX} +DEFINES += PROJECT_EXPORTS + +INCLUDEPATH += /usr/include/lua5.1 + ./include + +HEADERS += include/SomeHeader.h + include/SomeOtherHeader.h + +SOURCES += source/SomeClass.cpp + source/SomeOtherClass.cpp + +# The rest of this is done with custom build steps: +GENERATOR_INPUTS = templates/TemplateFile.ext + templates/OtherTemplate.ext + +gen.input = GENERATOR_INPUTS +gen.commands = $${DESTDIR}/generator -i $${QMAKE_FILE_IN} +# -s source$(InputName).cpp -h include$(InputName).h + +# Set the destination of the source and header files. +SOURCE_DIR = "source/" +HEADER_DIR = "include/" +# What prefix and suffix to replace with paths and .h.cpp, respectively. +TEMPLATE_PREFIX = "external/" +TEMPLATE_EXTN = ".ext" + +# +# Warning: Here be black magic. +# +# We need to use QMAKE_EXTRA_COMPILERS but its functionality does not give us +# an easy way to explicitly specify the names of multiple output files with a +# single QMAKE_EXTRA_COMPILERS entry. So, we get around this by making one +# entry for each input template (the .ext files). +# The part where this gets tricky is that each entry requires a unique +# variable name, so we must create these variables dynamically, which would +# be impossible in QMake ordinarily since it does only a single eval pass. +# Luckily, QMake has an eval(...) command which explicitly performs an eval +# pass on a string. We repeatedly use constructs like this: +# $$CONTENTS = "Some string data" +# $$VARNAME = "STRING" +# eval($$VARNAME = $$CONTENTS) +# These let us dynamically define variables. For sanity, I've tried to use a +# suffix of _VARNAME on any variable which contains the name of another +# variable. +# + +# Iterate over every filename in GENERATOR_INPUTS +for(templatefile, GENERATOR_INPUTS) { + # Generate the name of the header file. + H1 = $$replace(templatefile, $$TEMPLATE_PREFIX, $$HEADER_DIR) + HEADER = $$replace(H1, $$TEMPLATE_EXTN, ".h") + # Generate the name of the source file. + S1 = $$replace(templatefile, $TEMPLATE_PREFIX, $$SOURCE_DIR) + SOURCE = $$replace(S1, $$TEMPLATE_EXTN, ".cpp") + # Generate unique variable name to populate & pass to QMAKE_EXTRA_COMPILERS + QEC_VARNAME = $$replace(templatefile, ".", "") + QEC_VARNAME = $$replace(QEC_VARNAME, "/", "") + VARNAME = $$replace(QEC_VARNAME, "\", "") + # Append _INPUT to generate another variable name for the input filename + INPUT_VARNAME = $${QEC_VARNAME}_INPUT + eval($${INPUT_VARNAME} = $$templatefile) + + # Now generate an entry to pass to QMAKE_EXTRA_COMPILERS. + eval($${VARNAME}.commands = $${DESTDIR}/generator -i ${QMAKE_FILE_IN} -s ${QMAKE_FILE_OUT} -h $${HEADER}) + eval($${VARNAME}.name = $$VARNAME) + # ACHTUNG! The 'input' field is the _variable name_ which contains the + # input filename, not the filename itself. If you put in a filename or + # either of those variables don't exist, this will fail, silently, and + # all attempts at diagnosis will lead you nowhere. 
+ eval($${VARNAME}.input = $${INPUT_VARNAME}) + eval($${VARNAME}.output = $${SOURCE}) + eval($${VARNAME}.variable_out = SOURCES) + + # Now tell QMake to actually do this step we meticulously built. + eval(QMAKE_EXTRA_COMPILERS += $$VARNAME) + # Also add our header files. I doubt it's really necessary, but here it is. + HEADERS += $${HEADER} +} +{% endhighlight %} + +This one uses a bit more black magic. The entire **GENERATOR_INPUTS** list is a set of files that are inputs to an external program that is called to generate some code, which then must be built with the rest of the project. This uses undocumented QMake features, and a couple kludges to generate some things dynamically (i.e. the filenames of the generated code) from a variable-length list. I highly recommend avoiding it. However, it does work. + +These two links proved indispensable in the creation of this: + +[QMake Variable Reference](http://qt-project.org/doc/qt-4.8/qmake-variable-reference.html) + +[Undocumented qmake](http://www.qtcentre.org/wiki/index.php?title=Undocumented_qmake) diff --git a/posts/2011-11-24-obscure-features-of-jpeg.md b/posts/2011-11-24-obscure-features-of-jpeg.md new file mode 100644 index 0000000..e40bb03 --- /dev/null +++ b/posts/2011-11-24-obscure-features-of-jpeg.md @@ -0,0 +1,120 @@ +--- +layout: post +title: Obscure features of JPEG +tags: image compression, images, jpeg, Technobabble +status: publish +type: post +published: true +--- + +*(This is a modified version of what I wrote up at work when I saw that progressive JPEGs could be nearly a drop-in replacement that offered some additional functionality and ran some tests on this.)* + +Introduction +============ + +The long-established JPEG standard contains a considerable number of features that are seldom-used and sometimes virtually unknown. This all is in spite of the widespread use of JPEG and the fact that every JPEG decoder I tested was compatible with all of the features I will discuss, probably because [IJG libjpeg](http://www.ijg.org/) (or [this](http://www.freedesktop.org/wiki/Software/libjpeg)) runs basically everywhere. + +Progressive JPEG +================ +One of the better-known features, though still obscure, is that of progressive JPEGs. Progressive JPEGs contain the data in a different order than more standard (sequential) JPEGs, enabling the JPEG decoder to produce a full-sized image from just the beginning portion of a file (at a reduced detail level) and then refine those details as more of the file is available. + +This was originally made for web usage over slow connections. While it is rarely-used, most modern browsers support this incremental display and refinement of the image, and even those applications that do not attempt this support still are able to read the full image. + +Interestingly, since the only real difference between a progressive JPEG and a sequential one is that the coefficients come in a different order, the conversion between progressive and sequential is lossless. Various lossless compression steps are applied to these coefficients and as this reordering may permit a more efficient encoding, a progressive JPEG often is smaller than a sequential JPEG expressing an identical image. + +One command I've used pretty frequently before posting a large photo online is: + + jpegtran -optimize -progressive -copy all input.jpg > output.jpg + +This losslessly converts *input.jpg* to a progressive version and optimizes it as well. 
(*jpegtran* can do some other things losslessly as well - flipping, cropping, rotating, transposing, converting to greyscale.)
+
+Multi-scan JPEG
+===============
+More obscure still is that progressive JPEG is a particular case of something more general: a **multi-scan JPEG**.
+
+Standard JPEGs are single-scan sequential: All of the data is stored top-to-bottom, with all of the color components and coefficients together and in full. This includes, per **MCU** (minimum coded unit, an 8x8 pixel square or some small multiple of it), 64 coefficients for each of the 3 color components (typically Y, Cb, Cr). The coefficients are from an 8x8 DCT transform matrix, but they are stored in a zigzag order that preserves locality with regard to spatial frequency, as this permits more efficient encoding. The first coefficient (0) is referred to as the DC coefficient; the others (1-63) are AC coefficients.
+
+Multi-scan JPEG permits this information to be packed in a fairly arbitrary way (though with some restrictions). While information is still stored top-to-bottom, it permits only some of the data in each MCU to be given, with the intention that later scans will provide the rest of that data (hence the name multi-scan). More specifically:
+* The three color components (Y for grayscale, and Cb/Cr for color) may be split up between scans.
+* The 64 coefficients in each component may be split up. *(Two restrictions apply here for any given scan: the DC coefficient must always precede the AC coefficients, and if only AC coefficients are sent, then they may only be for a single color component.)*
+* Some bits of the coefficients may be split up. *(This, too, is subject to a restriction, not to a given scan but to the entire image: You must specify some of the DC bits. AC bits are all optional. Information on how many bits are actually used here is almost nonexistent.)*
+
+In other words:
+* You may leave color information out to be added later.
+* You may let spatial detail be only a low-frequency approximation to be refined later with higher-frequency coefficients. (As far as I can tell, you cannot consistently reduce grayscale detail beyond the 8x8 pixel MCU while still recovering that detail in later scans.)
+* You may leave grayscale and color values at a lower precision (i.e. coarsely quantized) to have more precision added later.
+* You may do all of the above in almost any order and almost any number of steps.
+
+Your libjpeg distribution probably contains something called **wizard.txt** someplace (say, /usr/share/docs/libjpeg8a or /usr/share/doc/libjpeg-progs); I don't know if an online copy is readily available, but mine is [here]({{ site.baseurl }}/assets/obscure_jpeg_features/libjpeg-wizard.txt). I'll leave detailed explanation of a scan script to the "Multiple Scan / Progression Control" section of this document, but note that:
+* Each non-commented line corresponds to one scan.
+* The first section, prior to the colon, specifies which plane(s) to send: Y (0), Cb (1), or Cr (2).
+* The two fields immediately after the colon give the first and last indices of coefficients from that plane that should be in the scan. Those indices are from 0 to 63 in zigzag order; 0 = DC, 1-63 = AC in increasing frequency.
+* The two fields immediately after those specify which bits of those coefficients this scan contains. (The little sketch just after this list shows how these same parameters are recorded in an actual file.)
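+
+These scan parameters are also written into the file itself: each scan begins with an SOS marker recording which components it carries, the first and last coefficient indices, and the successive-approximation bit positions, so you can inspect them directly. Below is a minimal sketch of that - Python 3, standard library only, with the output format purely my own choice - which walks a JPEG's markers and prints one line per scan. It's illustrative only, not part of the original write-up:
+
+{% highlight python %}
+import struct, sys
+
+def list_scans(path):
+    """Yield (components, first_coeff, last_coeff, Ah, Al) for each SOS marker."""
+    data = open(path, "rb").read()
+    i = 2                                    # skip SOI (FF D8)
+    while i < len(data) - 1:
+        if data[i] != 0xFF:                  # stray byte; keep looking for a marker
+            i += 1
+            continue
+        marker = data[i + 1]
+        if marker == 0xFF:                   # fill byte before a marker
+            i += 1
+            continue
+        if marker == 0xD9:                   # EOI
+            break
+        if marker in (0x01, 0xD8) or 0xD0 <= marker <= 0xD7:
+            i += 2                           # standalone markers carry no length field
+            continue
+        length = struct.unpack(">H", data[i + 2:i + 4])[0]
+        if marker != 0xDA:                   # some other segment (APPn, DQT, SOF, ...)
+            i += 2 + length
+            continue
+        # SOS header: component count, (id, table) pairs, then Ss, Se, Ah/Al
+        p = i + 4
+        ncomps = data[p]; p += 1
+        comps = [data[p + 2 * k] for k in range(ncomps)]
+        p += 2 * ncomps
+        ss, se, ahal = data[p], data[p + 1], data[p + 2]
+        yield comps, ss, se, ahal >> 4, ahal & 0x0F
+        # skip entropy-coded data: stop at the next marker that is neither a
+        # stuffed FF 00 nor a restart marker (FF D0 - FF D7)
+        p += 3
+        while p < len(data) - 1:
+            if data[p] == 0xFF and data[p + 1] != 0x00 and not (0xD0 <= data[p + 1] <= 0xD7):
+                break
+            p += 1
+        i = p
+
+for comps, ss, se, ah, al in list_scans(sys.argv[1]):
+    print("components=%s  coefficients %d-%d  bits: Ah=%d Al=%d" % (comps, ss, se, ah, al))
+{% endhighlight %}
+
+Run against a plain sequential JPEG this should print a single scan covering coefficients 0-63 of all three components; run against a progressive one, it should print a series of scans mirroring the kind of script described next.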
+
+According to that document, the standard script for a progressive JPEG is this:
+
+    # Initial DC scan for Y,Cb,Cr (lowest bit not sent)
+    0,1,2: 0-0, 0, 1 ;
+    # First AC scan: send first 5 Y AC coefficients, minus 2 lowest bits:
+    0: 1-5, 0, 2 ;
+    # Send all Cr,Cb AC coefficients, minus lowest bit:
+    # (chroma data is usually too small to be worth subdividing further;
+    #  but note we send Cr first since eye is least sensitive to Cb)
+    2: 1-63, 0, 1 ;
+    1: 1-63, 0, 1 ;
+    # Send remaining Y AC coefficients, minus 2 lowest bits:
+    0: 6-63, 0, 2 ;
+    # Send next-to-lowest bit of all Y AC coefficients:
+    0: 1-63, 2, 1 ;
+    # At this point we've sent all but the lowest bit of all coefficients.
+    # Send lowest bit of DC coefficients
+    0,1,2: 0-0, 1, 0 ;
+    # Send lowest bit of AC coefficients
+    2: 1-63, 1, 0 ;
+    1: 1-63, 1, 0 ;
+    # Y AC lowest bit scan is last; it's usually the largest scan
+    0: 1-63, 1, 0 ;
+
+And for standard, sequential JPEG it is:
+
+    0 1 2: 0 63 0 0;
+
+In [this image]({{ site.baseurl }}/assets/obscure_jpeg_features/20100713-0107-interleave.jpg) I used a custom scan script that sent all of the Y data, then all Cb, then all Cr. The script was just this:
+
+    0;
+    1;
+    2;
+
+While not every browser may do this right, most browsers will render the greyscale as it comes in, then add color to it one plane at a time. It'll be more obvious over a slower connection; I purposely left the image fairly large so that the transfer would be slower. You'll note as well that the greyscale arrives much more slowly than the color.
+
+Code & Utilities
+====================
+The **cjpeg** tool from libjpeg will (among other things) create a JPEG using a custom scan script. Combined with ImageMagick, I used a command like:
+
+    convert input.png ppm:- | cjpeg -quality 95 -optimize -scans scan_script > output.jpg
+
+Or if the input is already a JPEG, **jpegtran** will do the same thing, losslessly (as it's merely reordering coefficients):
+
+    jpegtran -scans scan_script input.jpg > output.jpg
+
+libjpeg has some interesting features as well. Rather than decoding an entire full-resolution JPEG and then scaling it down, for instance (a common use case when generating thumbnails), you can configure the decoder to simply do the reduction for you as it decodes. This takes less time and uses less memory compared with getting the full decompressed version and resampling afterward.
+
+The following C code, based loosely on **example.c** from libjpeg, will split up a multi-scan JPEG into a series of numbered PPM files, each one containing a scan. Look for **cinfo.scale_num** (circa lines 67, 68) to use the fast scaling features mentioned in the last paragraph, and note that the code only reads as much of the input JPEG as it needs for the next scan. (It needs nothing special to build besides a functioning libjpeg. *gcc -ljpeg -o jpeg_split.o jpeg_split.c* works for me.)
+
+{% gist 9220146 %}
+
+Examples
+========
+
+Here are all 10 scans from a standard progressive JPEG, separated out with the example code:
+
+![Scan 1]({{ site.baseurl }}/assets/obscure_jpeg_features/cropphoto1.png)
+![Scan 2]({{ site.baseurl }}/assets/obscure_jpeg_features/cropphoto2.png)
+![Scan 3]({{ site.baseurl }}/assets/obscure_jpeg_features/cropphoto3.png)
+![Scan 4]({{ site.baseurl }}/assets/obscure_jpeg_features/cropphoto4.png)
+![Scan 5]({{ site.baseurl }}/assets/obscure_jpeg_features/cropphoto5.png)
+![Scan 6]({{ site.baseurl }}/assets/obscure_jpeg_features/cropphoto6.png)
+![Scan 7]({{ site.baseurl }}/assets/obscure_jpeg_features/cropphoto7.png)
+![Scan 8]({{ site.baseurl }}/assets/obscure_jpeg_features/cropphoto8.png)
+![Scan 9]({{ site.baseurl }}/assets/obscure_jpeg_features/cropphoto9.png)
+![Scan 10]({{ site.baseurl }}/assets/obscure_jpeg_features/cropphoto10.png)
diff --git a/posts/2012-08-16-some-thoughts.md b/posts/2012-08-16-some-thoughts.md
new file mode 100644
index 0000000..c4755f0
--- /dev/null
+++ b/posts/2012-08-16-some-thoughts.md
@@ -0,0 +1,55 @@
+---
+layout: post
+title: Thoughts on tools, design, and feedback loops
+tags: rant, Technobabble
+status: publish
+type: post
+published: true
+---
+I just watched [Inventing on Principle](https://vimeo.com/36579366) from Bret Victor and found this entire talk incredibly interesting. Chris Granger's [post](http://www.chris-granger.com/2012/04/12/light-table---a-new-ide-concept/) on Light Table led me to this, and shortly after, I found the redesigned [Khan Academy CS course](http://ejohn.org/blog/introducing-khan-cs) which the talk inspired. Bret touched on something that basically anyone who's attempted to design anything has implicitly understood: **the feedback loop between making a change and seeing its effect is the most essential part of the process.**
+
+I reflected on this and on my own experiences, and decided on a few things:
+
+**(1) Making that feedback loop fast enough can dramatically change the design process, not just speed it up proportionally.**
+
+I feel that Bret's video demonstrates this wonderfully. It matches up with something I've believed for a while: that a slower, more delay-prone process becoming fast enough to be interactive can change the entire way a user relates to it. The change, for me at least, can be as dramatic as between filling out paperwork and having a face-to-face conversation. This metamorphosis is where I see a tool become an extension of the mind.
+
+[Toplap](http://toplap.org/index.php?title=Main_Page) probably has something to say on this. They link to a short live coding documentary, [Show Us Your Screens](https://vimeo.com/20241649). I rather like their quote: **"Live coding is not about tools. [Algorithms are thoughts. Chainsaws are tools.](https://vimeo.com/9790850) That's why algorithms are sometimes harder to notice than chainsaws."**
+
+Live coding perhaps hits many of Bret's points from the angle of musical performance meeting programming. Since he spoke directly of improvisation, I'd say he was well aware of this connection.
+
+**(2) These dynamic, interactive, high-level tools don't waste computer resources - they trade them.**
+
+They trade them for being dynamic, interactive, and high-level, and this very often means that they trade ever-increasing computer resources to earn some ever-limited human resources like time, comprehension, and attention.
+
+I don't look at them as being resource-inefficient.
I look at them as being the wrong tool for those situations where I have no spare computer resources to trade. Frankly, those situations are exceedingly rare. (And my degree is in electrical engineering. Most coding I've done when acting as an EE guy, I've done with the implicit assumption that no other type of situation existed.) Even if I eventually have to produce something for such a situation - say, to target a microcontroller - I still have ever-increasing computer resources at my disposal, and I can utilize these to great benefit for some prototyping.
+
+Limited computer resources restrict an implementation. Limited human resources, like time and attention and comprehension, do the same...
+
+**(3) The choice of tools defines what ideas are expressible.**
+
+Any Turing-complete language can express a given algorithm, pretty much by definition. However, since this expression can vary greatly in length and in conciseness, this is really only of theoretical interest if you, a human, have only finite time on earth to make this expression and only so many usable hours per day. (This is close to a point Paul Graham is [quite](http://paulgraham.com/langdes.html) [fond](http://paulgraham.com/power.html) of [making](http://paulgraham.com/avg.html).)
+
+This same principle goes for all other sorts of expressions and interactions and interfaces, non-Turing-complete included, anytime different tools are capable of producing the same result given enough work. (I can use a text editor to generate music by making PCM samples by hand. I can write a program that implements an algorithm to do the same. I can use a program such as Ableton Live to do the same. These all can produce sound, but some of them are a path of insurmountable complexity depending on what sort of sound I want.)
+
+In a strict way, the choice of tools defines the minimum size of an expression of an idea, and how comprehensible and difficult this expression is. Once this expression hits a certain level of complexity, a couple of paths emerge: it may as well be impossible to implement, or it may cease to be about the idea and instead be an implementation of a set of ad-hoc tools to eventually implement that idea. ([Greenspun's tenth rule](https://en.wikipedia.org/wiki/Greenspun%27s_Tenth_Rule), dated as it is, indicates plenty of other people have observed this.)
+
+In a less strict way, the choice of tools also guides how a person expresses an idea; not like a fence, but more like a wind. It guides how that person thinks.
+
+The boundaries that restrict **time** and **effort** also draw the lines that divide ideas into **possible** and **impossible**. Tools can move those lines. The right tools solve the irrelevant problems, and guide the user into solving relevant problems instead.
+
+Of course, finding the relevant problems can be tricky...
+
+**(4) When exploring, you are going to re-implement ideas. Get over it.**
+
+(I suppose [Mythical Man Month](http://c2.com/cgi/wiki?PlanToThrowOneAway) laid claim to something similar decades ago.)
+
+Turning an idea plus a bad implementation into a good implementation, on the whole, is far easier than turning just an idea into any implementation (and pages upon pages of design documentation rarely push it past 'just an idea'). It's not an excuse to willingly make bad design decisions - it's an acknowledgement that a tangible form of an idea does far more to clarify and refine those design decisions than any amount of verbal descriptions and diagrams and discussions.
Even if that prototype is scrapped in its entirety, the insight and experience it gives are not.
+
+The flip side of this is: **Ideas are fluid, and this is good**. Combined with the second point, it's more along the lines of: **Ideas are fluid, provided they already have something to flow from.**
+
+A high-level expression with the right set of primitives is a description that translates very readily to other forms. The key here is not what language or tool it is, but that it supports the right vocabulary to express the implementation concisely. **Supports** doesn't mean that it has all the needed high-level constructs - just that it is sufficiently flexible and concise to build them readily. (If you 'hide' higher-level structure inside lower-level details, you've added extra complexity. If you abuse higher-level constructs that hide simpler relationships, you've done the same. More on that in another post...)
+
+My beloved C language, for instance, gives some freedom to build a lot of constructs, but mainly those constructs that still map closely to assembly language and to hardware. C++ tries a little harder, but I feel like those constructs quickly hit the point of appalling, fragile ugliness. Languages like Lisp, Scheme, Clojure, Scala, and probably Haskell (I don't know yet, I haven't attempted to master it) are fairly well unmatched in the flexibility they give you. However, in light of Bret's video, the way these are all meant to be programmed can still fall quite short.
+
+I love [Context Free](http://www.contextfreeart.org/) as well. I like it because its relative speed combined with some marvelous simplicity gives me the ability to quickly put together complex fractalian/mathematical/algorithmic images. Normal behavior when I work with this program is to generate several hundred images in the course of an hour, refining each one from the last. Another big reason it appeals to me is that, due to its simplicity, I could fairly easily take the Context Free description of any of these images and turn it into some other algorithmic representation (such as a recursive function call to draw some primitives, written in something like [Processing](http://www.processing.org/) or [openFrameworks](http://www.openframeworks.cc/) or HTML5 Canvas or OpenGL).
diff --git a/posts/2014-02-06-hello-world.md b/posts/2014-02-06-hello-world.md
new file mode 100644
index 0000000..c568e73
--- /dev/null
+++ b/posts/2014-02-06-hello-world.md
@@ -0,0 +1,41 @@
+---
+layout: post
+title: Hello, World (from Jekyll)
+---
+
+Here goes another migration of my sparse content from the past 8 years. This time, I'm giving up the Wordpress instance that I've migrated around 3 or 4 times (from wordpress.com, then Dreamhost, then Linode, then tortois.es) and completely failed to migrate on this last move (I neglected to back up Wordpress' MySQL tables). I still have an old XML backup, but it's such a crufty mess at this point that I'd rather start fresh and import in some old content.
+
+Wordpress is a fine platform and it produces some beautiful results. However, I feel like it is very heavy and complex for what I need, and I have gotten myself into many train-wrecks and rabbit-holes trying to manage aspects of its layout and behavior and media handling.
+
+My nose is already buried in Emacs for most everything else that I write. It's the editor I work most quickly in. I'm already somewhat familiar with git. So, I am giving [Jekyll](http://jekyllrb.com/) a try.
Having a static site pre-generated from Markdown just seems like it would fit my workflow better, and not require me to switch to a web-based editor. I'm going to have to learn some HTML and CSS anyway.
+
+(I phrase this as if it were a brilliant flash of insight on my part. No, it's something I started in July and then procrastinated on until now, when my Wordpress has been down for months.)
+
+A vaguely relevant [issue](https://github.com/joyent/smartos-live/issues/275) just steered me to the existence of [TRAMP](https://www.gnu.org/software/tramp/), which allows me to edit remote files in Emacs. I just did *C-x C-f* `/ssh:username@whatever.com:/home/username` from a stock Emacs installation, and now I'm happily editing this Markdown file, which is on my VPS, from my local Emacs. For some reason, I find this incredibly awesome, even though things like remote X, NX, RDP, and sshfs have been around for quite some time now. (When stuff starts screwing up, M-x tramp-cleanup-all-connections seems to help a bit.)
+
+I collect lots of notes and I enjoy writing and explaining, so why don't I maintain a blog where I actually post more often than once every 18 months? I don't really have a good answer. I just know that this crosses my mind about once a week. But okay, Steve Yegge, you get [your wish](https://sites.google.com/site/steveyegge2/you-should-write-blogs), but only because I found [what you wrote](https://sites.google.com/site/steveyegge2/tour-de-babel#TOC-C-) about C++ to be both funny and appropriate.
+
+Here's a script I was using to convert links from some other failed Markdown conversion from earlier:
+{% highlight python %}
+import re, sys
+
+def repl(m):
+    return "[%s](%s)" % (m.group(2), m.group(1))
+
+urlRe = re.compile(r'<a href="([^"]+)"[^>]*>([^<]+)</a>')
+for line in sys.stdin:
+    n = 1
+    while (n > 0): (line, n) = urlRe.subn(repl, line)
+    sys.stdout.write(line)
+{% endhighlight %}
+
+It mostly just turns HTML links into Markdown ones. Simple, but I find it useful. Someone else probably knows a Python one-liner to do it. Whatever.
+
+Test code stuff:
+{% gist 8874941 %}
+
+To-do list items:
+- Learn some freaking CSS. (Or [SASS](http://sass-lang.com/)?)
+- Read [Building a Blog with Jekyll](http://flippinawesome.org/2013/10/28/building-a-blog-with-jekyll/)
+- Install markdown mode for Emacs.
+- Figure out how to sensibly get image thumbnails. (Or, [do it from Flickr](http://blog.pixarea.com/2012/07/fetch-images-from-flickr-to-show-in-octopress-slash-jekyll)? Or [here](http://www.marran.com/tech/integrating-flickr-and-jekyll/))