---
title: "(post on procedural meshes needs a title)"
author: Chris Hodapp
date: "2021-07-27"
tags:
- procedural graphics
draft: true
---
# (TODO: a note to me, reading later: you don't need to give your entire
# life story here.)
# (TODO: pictures will make this post make a *lot* more sense, and it
# may need a lot of them)
[[https://www.contextfreeart.org/][Context Free]] has been one of my favorite projects since I discovered
it around 2010. It's one I've [[../2011-08-29-context-free/][written about before]], played around in (see
some of the images below), presented on, and re-implemented myself in
different ways (see: [[https://github.com/hodapp87/contextual][Contextual]]). That was sometimes because
I wanted to do something Context Free couldn't, such as make it
realtime and interactive, and sometimes because implementing its
system of recursive grammars and replacement rules can be an excellent
way to learn things in a new language. (I think it's similar to
[[https://en.wikipedia.org/wiki/L-system][L-systems]], but I haven't yet learned those very well.)
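The heart of that system - rules that place a shape, then recursively invoke themselves under an accumulated transform until things get too small - fits in a few lines of Python. This is a toy sketch of the idea, not Context Free's actual implementation, and all names here are mine:

```python
import math

def spiral(x, y, angle, size, shapes, min_size=0.05):
    """Toy Context-Free-style rule: 'draw' a primitive, then recurse
    with the accumulated transform (step forward, rotate, shrink)."""
    if size < min_size:          # termination: stop below a size threshold
        return
    shapes.append((x, y, size))  # record a primitive at the current transform
    nx = x + size * math.cos(angle)
    ny = y + size * math.sin(angle)
    spiral(nx, ny, angle + 0.3, size * 0.95, shapes, min_size)

shapes = []
spiral(0.0, 0.0, 0.0, 1.0, shapes)
```

Because every recursion shrinks the size by a fixed factor, the rule always terminates, and the shape list can then be handed to whatever renderer you like.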
# TODO: Set captions?
{{< gallery >}}
{{< figure page="images" resources="placeholder/dream20191115b.jpg" caption="Something">}}
{{< figure page="images" resources="placeholder/2011-11-25-electron.jpg">}}
{{< figure page="images" resources="portfolio/2011-11-04-crystal1.jpg">}}
{{< figure page="images" resources="placeholder/2011-11-03-feather2.jpg">}}
{{< figure page="images" resources="placeholder/2011-11-03-feather1.jpg">}}
{{< figure page="images" resources="portfolio/2011-09-09-conch.jpg">}}
{{< /gallery >}}
I've also played around in 3D graphics, particularly raytracing, since
about 1999 in PolyRay and POV-Ray... though my [[../../images/portfolio/1999-12-22-table.jpg][few surviving]] [[../../images/portfolio/1999-12-21-moo.jpg][renders
from 1999]] are mostly garbage. POV-Ray is probably what led me to
learn about things like procedural geometry and textures, especially
implicit surfaces and parametric surfaces, as its scene language is
full of constructs for that. Below is one of my procedural POV-Ray
scenes from experimenting back in 2005; though I hadn't heard of
Context Free at that point (if it even existed), I was already trying
to do similar things in a sort of ad-hoc way.
{{< figure page="images" resources="portfolio/2005-08-23-shear6.jpg">}}
Naturally, this led me to wonder how I might extend Context Free's
model to work more generally with 3D geometry, and use it to produce
procedural geometry.
# ../2011-02-07-blender-from-a-recovering-pov-ray-user ?
[[http://structuresynth.sourceforge.net/index.php][Structure Synth]] of course already exists, and is a straightforward
generalization of Context Free's model to 3D (thank you to Mikael
Hvidtfeldt Christensen's blog [[http://blog.hvidtfeldts.net/][Syntopia]], another of my favorite things
ever, for introducing me to it a while ago). See also [[https://kronpano.github.io/BrowserSynth/][BrowserSynth]].
However, at some point I realized they weren't exactly what I wanted.
Structure Synth lets you combine 3D primitives to build up a more
complex scene - but it doesn't try to properly handle any sort of
/joining/ of these primitives in a way that respects many of the
'rules' of geometry that are necessary for a lot of tools, like having
a well-defined inside/outside, not being self-intersecting, being
manifold, and so forth.
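One of those 'rules' - closed and manifold - is easy to state concretely: in a closed manifold triangle mesh, every undirected edge is shared by exactly two faces. A purely illustrative checker for that one condition (my own helper, nothing from Structure Synth):

```python
from collections import Counter

def is_closed_manifold(faces):
    """Check one necessary condition for a closed manifold triangle
    mesh: every undirected edge is shared by exactly two faces."""
    edges = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted(e))] += 1
    return all(count == 2 for count in edges.values())

# A tetrahedron (closed) vs. a single triangle (open boundary):
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_closed_manifold(tetra))        # → True
print(is_closed_manifold([(0, 1, 2)]))  # → False (boundary edges)
```

Geometry that Structure Synth emits - overlapping, half-connected primitives - routinely fails checks like this, which is exactly the problem.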
Here are a few images from an hour or two of my dabbling in Structure
Synth - one Blender screenshot, and two [[https://appleseedhq.net/][appleseed]] renders from when I
was trying to work with it:
{{< gallery >}}
{{< figure resources="structure-synth-mesh.png">}}
{{< figure page="images" resources="placeholder/appleseed_spiral_thing2.jpg">}}
{{< figure page="images" resources="placeholder/appleseed_spiral_thing.jpg">}}
{{< /gallery >}}
That's a "Hello World"-tier design I try out when something gives me
geometric transforms and recursion. The first image (the Blender one)
shows the bits of unconnected, half-connected, and self-intersecting
geometry - which is what I wanted to work around.
You can look at this and say, "That really makes no difference, and
Structure Synth is capable of anything you /practically/ want to
create, but you're just searching for something to nitpick and
complain about so that you have a justification for why you reinvented
it badly," and you're probably more right than wrong, but you're also
still reading, so the joke's on you.
Tools like [[https://openscad.org/][OpenSCAD]], based on [[https://www.cgal.org/][CGAL]], handle the details of this, and I
suspect that [[https://www.opencascade.com/][Open CASCADE]] (thus [[https://www.freecadweb.org/][FreeCAD]]) also does. In CAD work, it's
crucial. Here's something similar I threw together in OpenSCAD with
the help of some automatically generated code:
{{< gallery >}}
{{< figure resources="openscad-mesh.png">}}
{{< figure resources="openscad-mesh2.png">}}
{{< /gallery >}}
In the second image you can see how it properly handled the
intersecting geometry and facetized the curve I purposely stuck in
there. The
mesh looks great, but I quickly ran into a problem: OpenSCAD scales
pretty poorly with this level of complexity - and as far as that goes,
this geometry is even fairly mild. This really isn't surprising, as
tools like this were made for practical applications in CAD, and not
so much for my silly explorations in procedural art.
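For what it's worth, the "automatically generated code" I mentioned was along these lines: a small script that recursively emits OpenSCAD source. This is a hedged, from-memory sketch of the approach, not my actual generator, and all names and parameters are mine:

```python
def scad_tower(depth, scale=0.8, angle=15, lift=1.0):
    """Recursively emit OpenSCAD source: a cube, plus a translated,
    rotated, scaled copy of the whole structure, all union()ed."""
    if depth == 0:
        return "cube(1, center=true);"
    child = scad_tower(depth - 1, scale, angle, lift)
    return (
        "union() {\n"
        "  cube(1, center=true);\n"
        f"  translate([0, 0, {lift}]) rotate([0, 0, {angle}]) "
        f"scale({scale}) {{ {child} }}\n"
        "}"
    )

print(scad_tower(3))  # paste the output into OpenSCAD
```

OpenSCAD then does the hard part - the CSG union - which is also where it bogs down once the recursion gets deep.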
But wait! *Implicit surfaces* handle almost all of this well! (Or see
any of the related-but-not-identical things around this, e.g. [[https://en.wikipedia.org/wiki/Function_representation][F-Reps]]
or distance bounds or distance fields or [[https://en.wikipedia.org/wiki/Signed_distance_function][SDFs]] or isosurfaces...) They
express [[https://en.wikipedia.org/wiki/Constructive_solid_geometry][CSG]] operations, they can be rendered directly on the GPU via
shaders, operations like blending shapes or twisting them are easy,
and when generalized to things like distance functions, they can be
used to render shapes like fractals that are infinitely complex and
lack an analytical formula for the surface, like the [[https://www.skytopia.com/project/fractal/mandelbulb.html][Mandelbulb]]. For
more on this, see [[http://blog.hvidtfeldts.net/][Syntopia]] again, or nearly anything by [[https://iquilezles.org/][Inigo Quilez]],
or look up raymarching and sphere tracing, or see [[https://ntopology.com/][nTopology]], or Matt
Keeter's work with [[https://www.libfive.com/][libfive]] and [[https://www.mattkeeter.com/research/mpr/][MPR]]. They're pure magic, they're
wonderfully elegant, and I'll probably have many other posts on them.
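To make the CSG claim concrete: on signed distance functions, union is min, intersection is max, and subtraction and blending are one-liners. A minimal sketch (the smooth-min form follows Quilez's quadratic version, written from memory):

```python
import math

def sd_sphere(p, center, r):
    """Signed distance from p to a sphere: negative inside, positive outside."""
    return math.dist(p, center) - r

# CSG on distance fields is just min/max:
union        = min
intersection = max
def subtract(d1, d2):          # d1 minus d2
    return max(d1, -d2)

def smooth_union(d1, d2, k):
    """Quadratic smooth-min (after Inigo Quilez): blends the seam
    between two shapes over a width of roughly k."""
    h = max(k - abs(d1 - d2), 0.0) / k
    return min(d1, d2) - h * h * k * 0.25

p = (0.0, 0.0, 0.0)
a = sd_sphere(p, (0.5, 0, 0), 1.0)   # p is inside this sphere:  -0.5
b = sd_sphere(p, (2.5, 0, 0), 1.0)   # p is outside this one:     1.5
print(union(a, b), subtract(a, b))   # → -0.5 -0.5
```

Try getting a polygon mesh to do blending that cheaply.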
(TODO: Link to my CS6460 stuff)
Many renderers can render implicit surfaces directly. [[https://www.shadertoy.com/][Shadertoy]] is
full of user-created examples of ad-hoc realtime rendering of implicit
surfaces, mostly in the form of sphere tracers, done completely in
[[https://en.wikipedia.org/wiki/OpenGL_Shading_Language][GLSL]]. Keeter's work on [[https://www.mattkeeter.com/research/mpr/][MPR]] is all about realtime rendering of a
similar sort, but in a much more scalable way. The [[https://appleseedhq.net/][appleseed]] renderer
can do it via a [[https://github.com/appleseedhq/appleseed/blob/master/sandbox/examples/cpp/distancefieldobject/distancefieldobject.cpp][custom object via a plugin]]. POV-Ray, as mentioned
before, also handles them nicely with its [[https://www.povray.org/documentation/view/3.6.1/300/][Isosurface Object]]. That is
what I used below in yet another of my 2005 experiments:
{{< figure page="images" resources="portfolio/2005-07-05-spiral-isosurface2.jpg">}}
Many renderers don't handle implicit surfaces at all. Blender's
renderers, [[https://www.cycles-renderer.org/][Cycles]] and [[https://docs.blender.org/manual/en/latest/render/eevee/introduction.html][Eevee]], are among them. Using implicit surfaces
there means converting them to a form of geometry that Blender /can/
handle - typically a polygon mesh.
This leads to a pretty big issue: turning implicit surfaces into good
meshes for rendering /is a huge pain/. If you don't believe me,
believe Matt Keeter in [[https://www.mattkeeter.com/research/mpr/keeter_mpr20.pdf][his paper on MPR]] when he says, "There is
significant literature on converting implicit surfaces into meshes for
visualization. Having implemented many of these algorithms, we've
found it extremely difficult to make them robust." I'd love to tell
you that I saw this advice before wasting my time trying to turn
implicit surfaces into meshes, first with various libraries and then
with ad-hoc conversions and optimizations of my own, but I didn't.
For comparison, POV-Ray raytraced the above example comfortably on a
machine with 512 MB of RAM, and at 4000x3000 resolution - while I've
had very limited success at turning this particular implicit surface
into a polygon mesh good enough to produce anything near a comparable
render, even one that fits in 32 GB of RAM.
I may have other posts talking about my failures here, but for now,
take it on faith: things like this are why I gave up trying to use
implicit surfaces for this project. (TODO: Make those posts.)
# TODO: Perhaps make a note here on explicit vs. implicit, maybe try
# to explain generative/procedural/algorithmic/parametric?
#
# https://en.wikipedia.org/wiki/Parametric_equation#Computer-aided_design -
# note explicit vs. implicit vs. parametric.
With these limitations in mind, around June 2018 I started jotting
down some ideas. The gist is that I wanted to create
"correct-by-construction" meshes from these recursive grammars. By
that, I meant: incrementally producing the desired geometry as a mesh,
polygon-by-polygon, in such a way that guarantees that the resultant
mesh has the desired detail level, is a manifold surface, and that it
is otherwise a well-behaved mesh (e.g. no degenerate triangles, no
self-intersection, no high-degree vertices, no triangles of extreme
angles) - rather than attempting to patch up the mesh after its
creation, or subdividing it to the necessary detail level. For
something similar to what I mean (though I didn't have this in mind at
the start), consider the [[https://en.wikipedia.org/wiki/Marching_cubes][marching cubes]] algorithm, which is guaranteed
to produce closed, manifold meshes.
(TODO: Illustrate this somehow)
The form it took in my notes was a sort of "growing" or "extruding" of
a mesh per these recursive rules, building in these guarantees (some
of them, at least) by way of inductive steps.
My meandering path to implementing it went something like this:
- Wrote some very ad-hoc Python to generate a mesh of a parametric
conversion of my annoying spiral isosurface from 2005 by breaking it
into planar "slices" or "frames", which move along the geometry and
then are connected together at corresponding vertices. (TODO: Add
link to the automata_scratch repo, whatever it's renamed to)
- Explored [[https://github.com/thi-ng/geom][thi.ng/geom]] and pretty quickly gave up - but in the
process, discovered [[https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.8103][Parallel Transport Approach to Curve Framing]].
- Implemented that paper in Python, reusing the basic model from my
prior code. (See [[https://github.com/Hodapp87/parallel_transport][parallel_transport]])
- Again continued with this model, allowing more arbitrary operations
than parallel frame transport, eventually integrating most of what I
wanted with the recursive grammars. (See
[[https://github.com/Hodapp87/automata_scratch/tree/master/python_extrude_meshgen][automata_scratch/python_extrude_meshgen]])
- Kept running into limitations in python_extrude_meshgen, and started
[[https://github.com/Hodapp87/prosha][Prosha]] in Rust - partly as a redesign/rewrite to avoid these
limitations, and partly because I just wanted to learn Rust.
- Realized that Rust is the wrong tool for the job, and rewrote
*again* in Python but with a rather different design and mindset.
(This is, of course, ignoring many other tangents with things like
shaders.)
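The "slices"/"frames" idea from the first bullet is simple to sketch: generate a ring of vertices per frame along a curve, then stitch corresponding vertices of adjacent rings into quads. This toy version uses a naive axis-aligned frame rather than the parallel-transport framing from the paper (fixing that is exactly what the paper is about), and all names are mine:

```python
import math

def sweep_rings(rings):
    """Join successive 'frames' (rings of 3D points, all the same
    length) into a quad-strip tube: corresponding vertices of
    adjacent rings are connected."""
    n = len(rings[0])
    verts = [p for ring in rings for p in ring]
    faces = []
    for i in range(len(rings) - 1):
        for j in range(n):
            a = i * n + j
            b = i * n + (j + 1) % n
            faces.append((a, b, b + n, a + n))  # one quad between rings
    return verts, faces

def helix_ring(t, radius=0.3, sides=8):
    """A cross-section 'frame' positioned along a helix (naive frame:
    the ring stays axis-aligned instead of following the tangent)."""
    cx, cy, cz = math.cos(t), math.sin(t), 0.2 * t
    return [(cx + radius * math.cos(a), cy + radius * math.sin(a), cz)
            for a in (2 * math.pi * k / sides for k in range(sides))]

rings = [helix_ring(0.1 * s) for s in range(60)]
verts, faces = sweep_rings(rings)
```

Left open at both ends this isn't closed, but every interior edge already has exactly two faces by construction - which is the "correct-by-construction" part.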
(TODO: Maybe split these off into sections for each one? That'd make
explanations/pictures easier.)
(TODO: The whole blog post is about my meandering path and should
probably include some Structure Synth things as part of this)
I put some serious effort into [[https://github.com/Hodapp87/prosha][Prosha]] and was conflicted about shelving
the project indefinitely, but the issues didn't look easily solvable.
Some of those issues were implementation issues with Rust - not that
Rust could really have done anything "better" here, but that it just
wasn't the right tool for what I was doing. In short, I had spent a
lot of that effort trying to badly and unintentionally implement/embed
a Lisp inside of Rust instead of just picking a Lispier language, or
perhaps using an embeddable Rust-based scripting language like [[https://github.com/koto-lang/koto][Koto]] or
[[https://github.com/rhaiscript/rhai][Rhai]]. I had ignored that many things that functional programming left
me very accustomed to - like first-class functions and closures - were
dependent on garbage collection. When I realized this and did a big
refactor to remove this entire layer of complexity, I was left with
very little "core" code - just a handful of library functions, and the
actual recursive rules for the geometry I was trying to generate.
That's good and bad: things were much simpler and vastly faster, but
also, it felt like I had wasted quite a lot of time and effort. I
have some more detailed notes on this in the Prosha repository.
The rest of the issues weren't about Rust at all - they were deeper
issues with my original "correct-by-construction" mesh idea being
half-broken. It half-worked: I was able to produce closed, manifold
meshes this way, and while it could be tedious, it wasn't *that*
difficult. However, all of my attempts to also produce "good" meshes
this way failed miserably.
(TODO: Can I find examples of this?)
The crux is that the recursive rules I used for generating geometry
(inspired heavily by those in Context Free) were inherently based
around discrete steps, generating discrete entities like vertices,
edges, and faces - and it made no sense to "partially" apply a rule,
especially if that rule involved some kind of branching - but I kept
trying to treat it as something continuous for the sake of being able
to "refine" the mesh to as fine a level of detail as I wanted.
Further, I was
almost never consistent with the nature of this continuity: sometimes
I wanted to treat it like a parametric curve (one continuous
parameter), sometimes I wanted to treat it like a parametric surface
(two continuous parameters), sometimes I wanted to treat it like an
implicit surface (with... theoretically two continuous parameters,
just not explicit ones?). It was a mess, and it's part of why my
Prosha repository is a graveyard of branches.
The recursive rules were still excellent at expressing arbitrarily
complex, branching geometry - and I really wanted to keep this basic
model around somehow. After some reflection, I believed that the only
way to do this was to completely separate the process of meshing
(refinement, subdivision, facetization...) from the recursive rules.
This would have been obvious if I had read the guides from [[https://graphics.pixar.com/opensubdiv/overview.html][OpenSubdiv]]
instead of reimplementing it badly. Their [[https://graphics.pixar.com/opensubdiv/docs/subdivision_surfaces.html][subdivision surface]]
documentation covers a lot, but I found it incredibly clear and
readable. Once I understood how OpenSubdiv was meant to be used, it
made a lot of sense: I shouldn't be trying to generate the "final"
mesh, I should be generating a mesh as the /control cage/, which
guides the final mesh. Further, I didn't even need to bother with
OpenSubdiv's C++ API, I just needed to get the geometry into Blender,
and Blender would handle the subdivision on-demand via OpenSubdiv.
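To illustrate the control-cage idea without OpenSubdiv itself: here is one step of a toy 1-to-4 midpoint split, a crude stand-in for the Catmull-Clark/Loop schemes OpenSubdiv actually implements. The point is the division of labor: the generator only has to produce the coarse cage, and refinement is someone else's job.

```python
def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def subdivide(verts, faces):
    """One 1-to-4 midpoint split of a triangle mesh (toy stand-in for
    the subdivision schemes OpenSubdiv actually implements)."""
    verts = list(verts)
    midcache = {}
    def mid_index(i, j):
        key = (min(i, j), max(i, j))
        if key not in midcache:          # shared edges reuse one midpoint,
            midcache[key] = len(verts)   # keeping the result manifold
            verts.append(midpoint(verts[i], verts[j]))
        return midcache[key]
    out = []
    for a, b, c in faces:
        ab, bc, ca = mid_index(a, b), mid_index(b, c), mid_index(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, out

# A tetrahedron as the "control cage":
cage_v = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
cage_f = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
v, f = subdivide(cage_v, cage_f)
print(len(v), len(f))  # → 10 16
```

A real scheme would also smooth vertex positions (and honor creases); the midpoint split here only shows the refinement topology.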
# TODO: This definitely needs examples of a control cage, and of edge
# creases
One minor issue is that this control cage isn't just a triangle mesh,
but a triangle mesh plus edge creases. I needed a way to get this
data into Blender. However, the only format Blender can read edge
creases from is [[http://www.alembic.io/][Alembic]]. Annoyingly, its [[http://docs.alembic.io/reference/index.html#alembic-intro][documentation]] is almost
completely nonexistent, the [[https://alembic.github.io/cask/][Cask]] bindings still have spotty Python 3.x
support, and when I tried to run their example code to produce some
files, Blender crashed when importing them... this is all a yak to
shave another day. I instead generated the mesh data
directly in Blender (via its Python interpreter), added it to the
scene, and then set its creases via its Python API.
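The shape of the data involved is simple: vertices, faces, and per-edge crease values. Below is a plain-Python builder for that data; the commented-out tail is the rough (untested, from-memory) 2.9x-era bpy sequence I mean, left as comments since it only runs inside Blender. The choice of which edges to crease is arbitrary here:

```python
def cube_control_cage(crease=1.0):
    """Control-cage data for a cube with its four vertical edges
    creased. Returns (verts, faces, creases), where creases is a
    list of ((v0, v1), value) pairs."""
    # Vertex index = x*4 + y*2 + z for each corner of the unit cube:
    verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    faces = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
             (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
    creases = [((0, 1), crease), ((2, 3), crease),
               ((4, 5), crease), ((6, 7), crease)]
    return verts, faces, creases

verts, faces, creases = cube_control_cage()

# Inside Blender (untested sketch, 2.9x-era API):
#   mesh = bpy.data.meshes.new("cage")
#   mesh.from_pydata(verts, [], faces)
#   wanted = {tuple(sorted(e)): v for e, v in creases}
#   for edge in mesh.edges:
#       key = tuple(sorted(edge.vertices))
#       if key in wanted:
#           edge.crease = wanted[key]
```

With a Subdivision Surface modifier on the object, Blender then hands the cage and creases to OpenSubdiv on-demand.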
After the aforementioned refactor in Prosha, I was able to quickly
translate the Rust code for most of my examples into Python code with
the help of some library code I'd accumulated from past projects.
Debugging this mostly inside Blender also made the process vastly
faster. Further, because I was letting Blender handle all of the
heavy lifting with mesh processing (and it in turn was using things
like OpenSubdiv), the extra overhead of Python compared to Rust didn't
matter - I was handling so much less data because I was generating
only a control cage, not a full mesh.
I'm still a little stuck on how to build higher 'geometric'
abstractions here and compose them. I have felt like most of the
model couples me tightly to low-level mesh constructs - while Context
Free and Structure Synth definitely don't have this problem. This is
particularly annoying because a lot of the power of these recursive
grammars comes from their ability to be abstracted away and composed.
# (TODO: Show some examples)