Updated isolated-pixel-pushing post

Chris Hodapp 2016-06-04 22:54:42 -04:00
parent 30351a5c34
commit 475f7d1a4e
2 changed files with 107 additions and 17 deletions

images/acidity-standard.png (new binary file, 449 KiB)

---
layout: post
title: Isolated-pixel-pushing
tags: CG, Project, Technobabble
date: August 27, 2011
author: Chris Hodapp
---

After finally deciding to look around for some projects on GitHub, I found a number of very interesting ones in a matter of minutes.

I found [Fragmentarium](http://syntopia.github.com/Fragmentarium/index.html) first. This program is like something I tried for years and years to write, but never got around to putting into any real finished form. It can act as a simple testbench for GLSL fragment shaders, which I'd already realized could be used to do exactly what I was doing more slowly in [Processing](http://processing.org/), much more slowly in Python (stuff like [this](http://mershell.deviantart.com/gallery/#/dckzex) if we want to dig up things from 6 years ago), much more clunkily in C and [OpenFrameworks](http://www.openframeworks.cc/), and so on. It took me probably about 30 minutes to put together the code to generate the usual gaudy test algorithm I try when bootstrapping in a new environment:

[![Standard trippy image](../images/acidity-standard.png){width=100%}](../images/acidity-standard.png)

(Yeah, it's gaudy. But when you see it animated, it's amazingly trippy and mesmerizing.)

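The original Fragmentarium script isn't included here, but something in the same spirit can be sketched in a few lines of Shadertoy-style GLSL (using Shadertoy's `mainImage` entry point and its `iTime`/`iResolution` uniforms); the particular sine soup below is only an illustration, not the shader behind the image above.

```glsl
// A gaudy, animated test pattern: each pixel's color is a pure
// function of its coordinates and the current time.
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Normalize pixel coordinates to roughly [-1, 1], aspect-corrected.
    vec2 p = (2.0 * fragCoord - iResolution.xy) / iResolution.y;
    float t = iTime;

    // Layer a few sine waves over the plane, drifting with time.
    float v = sin(10.0 * p.x + t)
            + sin(10.0 * (p.y + 0.5 * sin(t * 0.7)))
            + sin(8.0 * length(p) - 2.0 * t);

    // Map the scalar field to a loud RGB palette.
    vec3 col = 0.5 + 0.5 * cos(vec3(0.0, 2.1, 4.2) + 3.14159 * v);
    fragColor = vec4(col, 1.0);
}
```

Fragmentarium and similar testbenches wrap a plain fragment shader like this with their own uniforms for time, resolution, and user parameters.
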
The benefits of representing a texture (or whatever image) as an algorithm rather than a raster image are pretty well-known: It's a much smaller representation, it scales pretty well to 3 or more dimensions (particularly with noise functions like Perlin Noise or Simplex Noise), it can have a near-unlimited level of detail, it makes things like seams and antialiasing much less of an issue, it is almost the ideal case for parallel computation and modern graphics hardware has built-in support for it (e.g. GLSL, Cg, to some extent OpenCL). The drawback is that you usually have to find some way to represent this as a function in which each pixel or texel (or voxel?) is computed in isolation of all the others. This might be clumsy, it might be horrendously slow, or it might not have any good representation in this form. The use I'm talking about (and that I've reimplemented a dozen times)
was just writing functions that map the 2D plane to some colorspace,
often with some spatial continuity. Typically I'll have some other
parameters in there that I'll bind to a time variable or some user
control to animate things. So far I don't know any particular term
that encompasses functions like this, but I know people have used it
in different forms for a long while. It's the basis of procedural
texturing (as pioneered in
[An image synthesizer](http://portal.acm.org/citation.cfm?id=325247)
by Ken Perlin) as implemented in countless different forms like
[Nvidia Cg](http://developer.nvidia.com/cg-toolkit), GLSL, probably
Renderman Shading Language, RTSL, POV-Ray's extensive texturing, and
Blender's node texturing system (which I'm sure took after a dozen
other similar
systems). [Adobe Pixel Bender](http://www.adobe.com/devnet/pixelbender.html),
which the Fragmentarium page introduced to me for the first time, does
something pretty similar but to different ends. Some systems such as
[Vvvv](http://www.vvvv.org/) and
[Quartz Composer](http://developer.apple.com/graphicsimaging/quartz/quartzcomposer.html)
probably permit some similar operations; I don't know for sure.
The benefits of representing a texture (or whatever image) as an algorithm rather than a raster image are pretty well-known: it's a much smaller representation; it scales pretty well to 3 or more dimensions (particularly with noise functions like Perlin Noise or Simplex Noise); it can have a near-unlimited level of detail; it makes things like seams and antialiasing much less of an issue; and it is almost the ideal case for parallel computation, which modern graphics hardware supports directly (e.g. GLSL, Cg, and to some extent OpenCL). The drawback is that you usually have to find some way to represent the image as a function in which each pixel or texel (or voxel?) is computed in isolation from all the others. This might be clumsy, it might be horrendously slow, or it might not have any good representation in this form.

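To make the "scales to 3 or more dimensions" point concrete, here is a sketch of 3D value noise (a simpler relative of Perlin or simplex noise, chosen only because it fits in a few lines) in the same Shadertoy-style GLSL, evaluated at `(x, y, time)` so the image is a moving slice of a 3D field. The hash constants are the usual folklore values, nothing canonical.

```glsl
// Cheap hash: map a 3D lattice point to a pseudo-random value in [0, 1).
float hash3(vec3 p)
{
    return fract(sin(dot(p, vec3(127.1, 311.7, 74.7))) * 43758.5453);
}

// 3D value noise: trilinear interpolation of hashed lattice corners,
// with a smoothstep-style fade for continuity.
float valueNoise(vec3 p)
{
    vec3 i = floor(p);
    vec3 f = fract(p);
    vec3 u = f * f * (3.0 - 2.0 * f);

    float n000 = hash3(i + vec3(0.0, 0.0, 0.0));
    float n100 = hash3(i + vec3(1.0, 0.0, 0.0));
    float n010 = hash3(i + vec3(0.0, 1.0, 0.0));
    float n110 = hash3(i + vec3(1.0, 1.0, 0.0));
    float n001 = hash3(i + vec3(0.0, 0.0, 1.0));
    float n101 = hash3(i + vec3(1.0, 0.0, 1.0));
    float n011 = hash3(i + vec3(0.0, 1.0, 1.0));
    float n111 = hash3(i + vec3(1.0, 1.0, 1.0));

    return mix(mix(mix(n000, n100, u.x), mix(n010, n110, u.x), u.y),
               mix(mix(n001, n101, u.x), mix(n011, n111, u.x), u.y),
               u.z);
}

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 p = fragCoord / iResolution.y;
    // The third coordinate is time: the image is a slice of a 3D field,
    // so it animates smoothly with no extra machinery.
    float n = valueNoise(vec3(8.0 * p, 0.5 * iTime));
    fragColor = vec4(vec3(n), 1.0);
}
```

Swapping the hash-and-interpolate core for real Perlin or simplex noise changes the character of the pattern but not the shape of the code.
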
Also, once it's an algorithm, you can *parametrize it*. If you can make it render in near realtime, then animation and realtime user control follow almost for free, but even without that, you still have a lot of flexibility when you can change parameters.

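In shader terms, parametrizing mostly means promoting magic numbers to uniforms. The sketch below (again Shadertoy-flavored, with `iMouse` standing in for whatever slider or control surface a given host provides) is just an illustration of that idea, not anything from the original post.

```glsl
// Expose the knobs of the pattern as parameters instead of constants.
// Here they come from the mouse position; in another host they could
// just as well be sliders, MIDI input, or a timeline.
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 p = (2.0 * fragCoord - iResolution.xy) / iResolution.y;

    // Two user-controlled parameters, each normalized to [0, 1].
    vec2 knob = iMouse.xy / iResolution.xy;
    float freq  = mix(2.0, 20.0, knob.x);   // spatial frequency of the rings
    float twist = mix(0.0, 6.28, knob.y);   // how much the rings swirl

    float a = atan(p.y, p.x) + twist * length(p);
    float v = sin(freq * length(p)) * cos(3.0 * a + iTime);
    fragColor = vec4(0.5 + 0.5 * vec3(v, 0.7 * v, -v), 1.0);
}
```

The point is only that once the pattern is a function, every constant in it is a potential control.
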
The only thing different (and debatably so) that I'm doing is trying to make compositions with just the functions themselves, rather than using them as a means to a different end like video processing effects or texturing in a 3D scene. It also fascinated me to see these same functions animated in realtime.

However, the author of Fragmentarium (Mikael Hvidtfeldt Christensen) is doing much more interesting things with the program (i.e. rendering 3D fractals with distance estimation) than I would ever have considered doing. It makes sense why - his work emerged more from the context of fractals and ray tracers on the GPU, like [Amazing Boxplorer](http://sourceforge.net/projects/boxplorer/), and fractals tend to make for very interesting results.

His [Syntopia Blog](http://blog.hvidtfeldts.net/) has some fascinating material and beautiful renders on it. His posts on [Distance Estimated 3D Fractals](http://blog.hvidtfeldts.net/index.php/2011/08/distance-estimated-3d-fractals-iii-folding-space/) were particularly interesting to me - in part because this was the first time I had encountered the technique of distance estimation for rendering a scene. He gave a good introduction with lots of other material to refer to.

Distance Estimation blows my mind a little when I try to understand it. I have a decent high-level understanding of ray tracing, but this is not ray tracing, it's ray marching. It lets complexity be emergent rather than needing the explicit representation that a scanline renderer or ray tracer might require (while ray tracers will gladly take a functional representation of many geometric primitives, I have encountered very few cases where something like a complex fractal or an isosurface could be rendered without first approximating it as a mesh or some other shape, sometimes at great cost). Part 1 of Mikael's series on Distance Estimated 3D Fractals links to [these slides](http://www.iquilezles.org/www/material/nvscene2008/rwwtt.pdf), which show a 4K demo built piece by piece using distance estimation to render a pretty complex scene.

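To make the idea concrete, here is a bare-bones sphere-tracing loop in the same Shadertoy-style GLSL as the earlier sketches. The distance estimator is just an infinite grid of spheres made by folding space with `mod()` (a stand-in for the fractal estimators in Mikael's posts and the linked slides, not code taken from either); the scene is never stored anywhere, only queried as a distance function.

```glsl
// Distance estimator: distance from point q to the nearest surface.
// mod() folds all of space into one cell, so a single sphere formula
// yields an infinite grid of spheres - complexity from a tiny function.
float distanceEstimate(vec3 q)
{
    vec3 cell = mod(q, 4.0) - 2.0;      // repeat every 4 units, centered
    return length(cell) - 0.7;          // sphere of radius 0.7 per cell
}

// Approximate the surface normal from the gradient of the distance field.
vec3 estimateNormal(vec3 q)
{
    vec2 e = vec2(0.001, 0.0);
    return normalize(vec3(
        distanceEstimate(q + e.xyy) - distanceEstimate(q - e.xyy),
        distanceEstimate(q + e.yxy) - distanceEstimate(q - e.yxy),
        distanceEstimate(q + e.yyx) - distanceEstimate(q - e.yyx)));
}

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 p = (2.0 * fragCoord - iResolution.xy) / iResolution.y;

    // Simple camera: origin drifts forward over time, one ray per pixel.
    vec3 ro = vec3(0.0, 0.0, iTime);
    vec3 rd = normalize(vec3(p, 1.5));

    // Ray marching / sphere tracing: step along the ray by the distance
    // estimate (always a safe step) until we hit a surface or give up.
    float t = 0.0;
    vec3 col = vec3(0.0);
    for (int i = 0; i < 100; i++) {
        vec3 pos = ro + t * rd;
        float d = distanceEstimate(pos);
        if (d < 0.001) {
            vec3 n = estimateNormal(pos);
            float diffuse = max(dot(n, normalize(vec3(0.6, 0.8, -0.4))), 0.0);
            col = vec3(0.1) + diffuse * vec3(0.8, 0.6, 0.9);
            break;
        }
        t += d;
        if (t > 60.0) break;            // missed everything
    }
    fragColor = vec4(col, 1.0);
}
```

The fractal renders in the slides and blog posts mostly amount to swapping in a far more elaborate distance estimator; the marching loop itself barely changes.
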
*(Later addition: [This link](http://www.mazapan.se/news/2010/07/15/gpu-ray-marching-with-distance-fields/) covers ray marching for some less fractalian uses. "Hypertexture" by Ken Perlin gives some useful information too, more technical in nature; finding this paper is up to you. Consult your favorite university?)*

He has another rather different program called [Structure Synth](http://structuresynth.sourceforge.net/), which he made following the same "design grammar" approach as [Context Free](http://www.contextfreeart.org/). I haven't used Structure Synth yet, because Context Free was also new to me and I was first spending some time learning to use that. I'll cover this in another post.

*(Even later note: With [Shadertoy](https://www.shadertoy.com/) some
folks have implemented the same in WebGL.)*