Compare commits

..

10 Commits

Author SHA1 Message Date
Chris Hodapp 2dc86d23bf Add note to front of repo after moving things 2021-07-21 18:05:17 -04:00
Chris Hodapp 31e27846cf Move older Python code into python_cage_meshgen 2021-07-21 18:02:08 -04:00
Chris Hodapp a4ddcf4941 Update gitignore 2021-07-21 17:45:33 -04:00
Chris Hodapp 003632209e Add some more notes 2021-07-17 12:34:36 -04:00
Chris Hodapp af725d586c Commit libfive-subdiv-fail code 2021-07-17 10:51:56 -04:00
Chris Hodapp 070b1e46b8 tree_thing - fix non-manifold issue (I think) 2021-07-06 18:52:41 -04:00
Chris Hodapp 603acc8832 Partially fix tree_thing example 2021-07-05 22:09:17 -04:00
Chris Hodapp a9f8b24117 Incomplete-but-mostly-right tree_thing conversion 2021-07-05 19:05:46 -04:00
Chris Hodapp 1cdd45fccb Add more Python scrap code from Blender 2021-07-04 13:16:20 -04:00
Chris Hodapp 3f86ec5613 Expose some creases in barbs.py 2021-07-04 08:01:50 -04:00
15 changed files with 538 additions and 83 deletions

2
.gitignore vendored

@ -1,4 +1,4 @@
 *~
 #*#
-__pycache__/*
+__pycache__/
 .ipynb_checkpoints/*

113
README.md

@ -1,84 +1,33 @@
# automata_scratch

This repo has a few projects that are related in terms of high-level
goal, but almost completely unrelated in their descent.

- `python_extrude_meshgen` is some Python code from around September
  2019 which did a sort of extrusion-based code generation. While this
  had some good results and some good ideas, the basic model was too
  limited in terms of the topology it could express.
- `libfive_subdiv` is a short project from around July 2021 attempting
  to use the Python bindings of [libfive](https://www.libfive.com/),
  and automatic differentiation in
  [autograd](https://github.com/HIPS/autograd), to turn implicit
  surfaces into meshes suitable for subdivision via something like
  [OpenSubdiv](https://graphics.pixar.com/opensubdiv/overview.html)
  (in turn so that I could render with them without having to use
  insane numbers of triangles or somehow hide the obvious errors in
  the geometry). Briefly, the process was to set edge crease weights
  based on the curvature of the implicit surface. While I got this
  process working, it didn't fulfill the goal. Shortly thereafter, I
  was re-reading [Massively Parallel Rendering of Complex Closed-Form Implicit Surfaces](https://www.mattkeeter.com/research/mpr/)
  (which, like libfive, is by Matt Keeter) and found a section I'd
  ignored on the difficulties of producing good meshes from
  isosurfaces for the sake of rendering. I kept the code around
  because I figured it would be useful to refer to later, particularly
  for the integration with Blender.
- `blender_scraps` contains some scraps of Python code meant to be
  used inside of Blender's Python scripting - and it contains some
  conversions from another project, Prosha, for procedural mesh
  generation in Rust (itself based on learnings from
  `python_extrude_meshgen`). These examples were proof-of-concept of
  generating meshes as control cages rather than as "final" meshes.
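That last point - generating meshes as control cages rather than as final meshes - is why the Blender-side code below cares about crease weights: the cage is meant to be viewed through a subdivision modifier. A minimal sketch of that final step, assuming obj is a cage object that has already been built and linked to the scene (e.g. as at the end of blender_scraps/scratch.py below); the modifier name and levels are arbitrary:

import bpy

# Assumes 'obj' is an existing control-cage object with crease weights set.
mod = obj.modifiers.new("subdiv", type='SUBSURF')
mod.levels = 2           # viewport subdivision level
mod.render_levels = 3    # render-time subdivision level
mod.use_creases = True   # honor the per-edge crease weights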

View File

@ -47,6 +47,8 @@ class Barbs(object):
         # be converted last-minute to tuples. (Why: we need to refer
         # to prior vertices and arithmetic is much easier from an
         # array, but Blender eventually needs tuples.)
+        self.creases_side = set()
+        self.creases_joint = set()
 
     def run(self, iters) -> (list, list):
         # 'iters' is ignored for now
@ -87,6 +89,10 @@ class Barbs(object):
                   [bound[2], bound[3], a0 + 3, a0 + 2])
         self.barb(iters - 1, xform.compose(self.sides[3]),
                   [bound[3], bound[0], a0 + 0, a0 + 3])
+        for i in range(4):
+            j = (i + 1) % 4
+            self.creases_joint.add((a0 + i, a0 + j))
+            self.creases_side.add((bound[i], a0 + i))
 
     def barb(self, iters, xform, bound):
         if self.limit_check(xform, iters):
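These two hunks only record which edges should be creased (the ring of edges at each new joint, and the side edges running from the previous boundary to it); nothing here applies the creases. A hypothetical usage sketch inside Blender, assuming the blender_scraps directory is on sys.path (see the path-append pattern in scratch.py below), that Barbs() can be constructed with its defaults, and with crease values picked arbitrarily:

import bpy
from barbs import Barbs   # assumes blender_scraps/ is on sys.path

b = Barbs()               # assumes the default constructor arguments suffice
verts, faces = b.run(4)   # 'iters' is currently ignored anyway
mesh = bpy.data.meshes.new('barbs_mesh')
mesh.from_pydata(verts, [], faces)
mesh.update(calc_edges=True)
# bmesh_set_creases is the helper from scratch.py (paste it into the same
# session; scratch.py itself is a snippet pile, not an importable module).
# Crease the joint rings harder than the side edges; both values are guesses:
bmesh_set_creases(mesh, b.creases_joint, 0.9)
bmesh_set_creases(mesh, b.creases_side, 0.4)
obj = bpy.data.objects.new('barbs_obj', mesh)
bpy.context.scene.collection.objects.link(obj)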

79
blender_scraps/scratch.py Normal file

@ -0,0 +1,79 @@
# This is a pile of patterns and snippets I've used in Blender in the
# past; it isn't meant to be run on its own.
import bmesh
import bpy
# Here is a good starting place (re: creating geometry):
# https://docs.blender.org/api/current/info_gotcha.html#n-gons-and-tessellation
# https://wiki.blender.org/wiki/Source/Modeling/BMesh/Design
# Current object's data (must be selected):
me = bpy.context.object.data
def bmesh_set_creases(obj, vert_pairs, crease_val):
    # Walk through the edges in 'obj'. For those *undirected* edges in
    # 'vert_pairs' (a set of (vi, vj) tuples, where vi and vj are vertex
    # indices, and tuple order is irrelevant), set the crease to 'crease_val'.
    bm = bmesh.new()
    bm.from_mesh(obj)
    creaseLayer = bm.edges.layers.crease.verify()
    for e in bm.edges:
        idxs = tuple([v.index for v in e.verts])
        print(idxs)
        if idxs in vert_pairs or idxs[::-1] in vert_pairs:
            e[creaseLayer] = crease_val
    bm.to_mesh(obj)
    bm.free()
# My bpy.types.MeshPolygon objects:
for i,poly in enumerate(me.polygons):
    t = type(poly)
    #print(f"poly {i}: {t}")
    verts = list(poly.vertices)
    print(f"poly {poly.index}: vertices={verts}")
    #s = poly.loop_start
    #n = poly.loop_total
    #print(f" loop_start={s} loop_total={n}")
    #v = [l.vertex_index for l in me.loops[s:(s+n)]]
    #print(f" loop: {v}")
    # Vector type:
    v2 = [me.vertices[i].co for i in verts]
    print(f" verts: {v2}")
    # Yes, this works:
    #for i in verts:
    #    me.vertices[i].co.x -= 1.0
# Pattern for loading external module from Blender's Python (and
# reloading as necessary):
import importlib
import sys
ext_path = "/home/hodapp/source/automata_scratch/blender_scraps"
if ext_path not in sys.path:
    sys.path.append(ext_path)
import whatever
whatever = importlib.reload(whatever)
# Note that if 'whatever' itself imports modules that may have changed
# since the last import, you may need to do this same importlib
# incantation!
# Crease access - but the wrong way to change them:
for edge in me.edges:
    v = list(edge.vertices)
    print(f"edge {edge.index}: crease={edge.crease} vertices={v}")
    #edge.crease = 0.7
# Creating a mesh with vertices & faces in Python via bpy with:
# v - list of (x, y, z) tuples
# f - list of (v0, v1, v2...) tuples, each with face's vertex indices
mesh = bpy.data.meshes.new('mesh_thing')
mesh.from_pydata(v, [], f)
mesh.update(calc_edges=True)
# set creases beforehand:
#bmesh_set_creases(mesh, b.creases_joint, 0.7)
obj = bpy.data.objects.new('obj_thing', mesh)
# set obj's transform matrix:
#obj.matrix_world = Matrix(...)
# also acceptable to set creases:
#bmesh_set_creases(obj.data, b.creases_joint, 0.7)
bpy.context.scene.collection.objects.link(obj)

View File

@ -0,0 +1,163 @@
# Hasty conversion from the Rust in prosha/src/examples.rs & Barbs
# This is mostly right, except:
# - It doesn't yet do creases.
import numpy as np
import xform
# Mnemonics:
X = np.array([1.0, 0.0, 0.0])
Y = np.array([0.0, 1.0, 0.0])
Z = np.array([0.0, 0.0, 1.0])
class TreeThing(object):
    def __init__(self, f: float=0.6, depth: int=10, scale_min: float=0.02):
        self.scale_min = scale_min
        v = np.array([-1.0, 0.0, 1.0])
        v /= np.linalg.norm(v)
        self.incr = (xform.Transform().
                     translate(0, 0, 0.9*f).
                     rotate(v, 0.4*f).
                     scale(1.0 - (1.0 - 0.95)*f))
        # 'Base' vertices, used throughout:
        self.base = np.array([
            [-0.5, -0.5, 0.0],
            [-0.5, 0.5, 0.0],
            [ 0.5, 0.5, 0.0],
            [ 0.5, -0.5, 0.0],
        ])
        # 'Transition' vertices:
        self.trans = np.array([
            # Top edge midpoints:
            [-0.5, 0.0, 0.0], # 0 - connects b0-b1
            [ 0.0, 0.5, 0.0], # 1 - connects b1-b2
            [ 0.5, 0.0, 0.0], # 2 - connects b2-b3
            [ 0.0, -0.5, 0.0], # 3 - connects b3-b0
            # Top middle:
            [ 0.0, 0.0, 0.0], # 4 - midpoint/centroid of all
        ])
        # splits[i] gives transformation from a 'base' layer to the
        # i'th split (0 to 3):
        self.splits = [
            xform.Transform().
            rotate(Z, np.pi/2 * i).
            translate(0.25, 0.25, 0.0).
            scale(0.5)
            for i in range(4)
        ]
        # Face & vertex accumulators:
        self.faces = []
        # self.faces will be a list of tuples (each one of length 4
        # and containing indices into self.verts)
        self.verts = []
        # self.verts will be a list of np.array of shape (3,) but will
        # be converted last-minute to tuples. (Why: we need to refer
        # to prior vertices and arithmetic is much easier from an
        # array, but Blender eventually needs tuples.)
        self.creases_side = set()
        self.creases_joint = set()
        self.depth = depth
    def run(self):
        self.verts.extend(self.base)
        self.faces.append((0, 1, 2, 3))
        self.child(xform.Transform(), self.depth, [0, 1, 2, 3])
        verts = [tuple(v) for v in self.verts]
        faces = [tuple(f) for f in self.faces]
        return verts, faces
    def trunk(self, xf: xform.Transform, b):
        if self.limit_check(xf):
            # Note opposite winding order
            verts = [b[i] for i in [3,2,1,0]]
            self.faces.append(verts)
            return
        incr = (xform.Transform().
                translate(0.0, 0.0, 1.0).
                rotate(Z, 0.15).
                rotate(X, 0.1).
                scale(0.95))
        sides = [
            xform.Transform().
            rotate(Z, -np.pi/2 * i).
            rotate(Y, -np.pi/2).
            translate(0.5, 0.0, 0.5)
            for i in range(4)
        ]
        xf2 = xf.compose(incr)
        g = xf2.apply_to(self.base)
        a0 = len(self.verts)
        self.verts.extend(g)
        # TODO: Turn this into a cleaner loop?
        self.trunk(xf2, [a0, a0 + 1, a0 + 2, a0 + 3])
        self.child(xf.compose(sides[0]), self.depth,
                   [b[0], b[1], a0 + 1, a0 + 0])
        self.child(xf.compose(sides[1]), self.depth,
                   [b[1], b[2], a0 + 2, a0 + 1])
        self.child(xf.compose(sides[2]), self.depth,
                   [b[2], b[3], a0 + 3, a0 + 2])
        self.child(xf.compose(sides[3]), self.depth,
                   [b[3], b[0], a0 + 0, a0 + 3])
    def limit_check(self, xf: xform.Transform) -> bool:
        # Assume all scales are the same (for now)
        sx,_,_ = xf.get_scale()
        return sx < self.scale_min
    def child(self, xf: xform.Transform, depth, b):
        if self.limit_check(xf):
            # Note opposite winding order
            verts = [b[i] for i in [3,2,1,0]]
            self.faces.append(verts)
            return
        xf2 = xf.compose(self.incr)
        if depth > 0:
            # Just recurse on the current path:
            n0 = len(self.verts)
            self.verts.extend(xf2.apply_to(self.base))
            # Connect parallel faces:
            n = len(self.base)
            for i, b0 in enumerate(b):
                j = (i + 1) % n
                b1 = b[j]
                a0 = n0 + i
                a1 = n0 + j
                self.faces.append((a0, a1, b1, b0))
            self.child(xf2, depth - 1, [n0, n0 + 1, n0 + 2, n0 + 3])
        else:
            n = len(self.verts)
            self.verts.extend(xf2.apply_to(self.base))
            m01 = len(self.verts)
            self.verts.extend(xf2.apply_to(self.trans))
            m12, m23, m30, c = m01 + 1, m01 + 2, m01 + 3, m01 + 4
            self.faces.extend([
                # two faces straddling edge from vertex 0:
                (b[0], n+0, m01),
                (b[0], m30, n+0),
                # two faces straddling edge from vertex 1:
                (b[1], n+1, m12),
                (b[1], m01, n+1),
                # two faces straddling edge from vertex 2:
                (b[2], n+2, m23),
                (b[2], m12, n+2),
                # two faces straddling edge from vertex 3:
                (b[3], n+3, m30),
                (b[3], m23, n+3),
                # four faces from edge (0,1), (1,2), (2,3), (3,0):
                (b[0], m01, b[1]),
                (b[1], m12, b[2]),
                (b[2], m23, b[3]),
                (b[3], m30, b[0]),
            ])
            self.child(xf2.compose(self.splits[0]), self.depth, [c, m12, n+2, m23])
            self.child(xf2.compose(self.splits[1]), self.depth, [c, m01, n+1, m12])
            self.child(xf2.compose(self.splits[2]), self.depth, [c, m30, n+0, m01])
            self.child(xf2.compose(self.splits[3]), self.depth, [c, m23, n+3, m30])
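A quick way to eyeball what this class produces without going through Blender is to dump the (verts, faces) output as a Wavefront OBJ, which accepts the mix of quads and triangles emitted here. A sketch under assumptions: the file above is importable as a module named tree_thing (its filename is not shown in this compare) and its xform dependency is on the path:

import tree_thing   # assumed module name; this file's name is not shown above

t = tree_thing.TreeThing(depth=3)
verts, faces = t.run()
with open("tree_thing.obj", "w") as fh:
    for x, y, z in verts:
        fh.write(f"v {x} {y} {z}\n")
    for face in faces:
        # OBJ vertex indices are 1-based:
        fh.write("f " + " ".join(str(i + 1) for i in face) + "\n")
print(f"wrote {len(verts)} vertices, {len(faces)} faces")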

174
libfive_subdiv/test_subdiv.py Executable file

@ -0,0 +1,174 @@
#!/usr/bin/env python3
# Chris Hodapp, 2021-07-17
#
# This code is: yet another attempt at producing better meshes from
# implicit surfaces / isosurfaces. My paper notes from around the
# same time period describe some more of why and how.
#
# This depends on the Python bindings for libfive (circa revision
# 601730dc), on numpy, and on autograd from
# https://github.com/HIPS/autograd for automatic differentiation.
#
# For an implicit surface expressed in a Python function, it:
# - uses libfive to generate a mesh for this implicit surface,
# - dumps this face-vertex data (numpy arrays) to disk in a form Blender
# can load pretty easily (this is done only because exporting and then
# loading an STL resulted in vertex and face indices being out of sync
# for some reason, perhaps because libfive's meshing has some randomness),
# - iterates over each edge from libfive's mesh data,
# - for that edge, computes the curvature of the surface perpendicular
# to that edge,
# - saves this curvature away in another file Blender can load.
#
# There are then some Blender routines for its Python API which load
# the mesh, load the curvatures, and then try to turn these per-edge
# curvature values to edge crease weights. The hope was that this
# would allow subdivision to work effectively on the resultant mesh in
# sharper (higher-curvature) areas - lower crease weights should fit
# lower-curvature areas better, and higher crease weights should keep
# a sharper edge from being dulled too much by subdivision.
#
# I tried spiral_implicit, the same spiral isosurface function from
# June 2005, as the implicit surface - and yet again, it proved a very
# difficult surface to work with.
# Below is some elisp so that I can use the right environment in Emacs
# and elpy:
#
# (setq python-shell-interpreter "nix-shell" python-shell-interpreter-args " -I nixpkgs=/home/hodapp/nixpkgs -p python3Packages.libfive python3Packages.autograd python3Packages.numpy --command \"python3 -i\"")
# This is a kludge to ensure libfive's bindings can be found:
#import os, sys
#os.environ["LIBFIVE_FRAMEWORK_DIR"]="/nix/store/gcxmz71b4i6bmsb1alafr4cqdnl19dn5-libfive-unstable-e93fef9d/lib/"
#sys.path.insert(0, "/nix/store/gcxmz71b4i6bmsb1alafr4cqdnl19dn5-libfive-unstable-e93fef9d/lib/python3.8/site-packages/")
import autograd.numpy as np
from autograd import grad, elementwise_grad as egrad
from libfive.shape import shape
# The implicit surface is below. It returns two functions that
# compute the same thing: a vectorized version (f) that can handle
# array inputs with (x,y,z) rows, and a version (g) that can also
# handle individual x,y,z. f is needed for autograd, g is needed for
# libfive.
def spiral_implicit(outer, inner, freq, phase, thresh):
    def g(x,y,z):
        d1 = outer*y - inner*np.sin(freq*x + phase)
        d2 = outer*z - inner*np.cos(freq*x + phase)
        return d1*d1 + d2*d2 - thresh*thresh
    def f(pt):
        x,y,z = [pt[..., i] for i in range(3)]
        return g(x,y,z)
    return f, g
def any_perpendicular(vecs):
    # For 'vecs' of shape (..., 3), this returns an array of shape
    # (..., 3) in which every corresponding vector is perpendicular
    # (but nonzero). 'vecs' does not need to be normalized, and the
    # returned vectors are not normalized.
    x,y,z = [vecs[..., i] for i in range(3)]
    a0 = np.zeros_like(x)
    # The condition has the extra dimension added to make it (..., 1)
    # so it broadcasts properly with the branches, which are (..., 3):
    p = np.where((np.abs(z) < np.abs(x))[...,None],
                 np.stack((y, -x, a0), axis=-1),
                 np.stack((a0, -z, y), axis=-1))
    return p
def intersect_implicit(surface_fn):
    # surface_fn(pt) = 0 is an implicit surface (vectorized over points
    # with (x,y,z) rows). This returns a function g(pts_2d, pt_center, u, v)
    # which - for g(...) = 0 - is the implicit curve created by
    # intersecting the surface with a plane passing through point
    # 'pt_center' and with two perpendicular unit vectors 'u' and 'v'
    # that lie on the plane.
    def g(pts_2d, pt_center, u, v, **kw):
        s,t = [pts_2d[..., i, None] for i in range(2)]
        pt_3d = pt_center + s*u + t*v
        return surface_fn(pt_3d, **kw)
    return g
def implicit_curvature_2d(curve_fn):
    # Returns a function which computes curvature of an implicit
    # curve, curve_fn(s,t)=0. The resultant function takes two
    # arguments as well.
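    #
    # (For an implicit planar curve F(s,t) = 0 this is the standard
    # signed-curvature formula,
    #   kappa = -(F_t^2 F_ss - 2 F_s F_t F_st + F_s^2 F_tt) / (F_s^2 + F_t^2)^(3/2),
    # which is exactly what 'f' below returns; the overall sign is just
    # a convention.)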
    #
    # First derivatives:
    _g1 = egrad(curve_fn)
    # Second derivatives:
    _g2s = egrad(lambda *a, **kw: _g1(*a, **kw)[...,0])
    _g2t = egrad(lambda *a, **kw: _g1(*a, **kw)[...,1])
    # Doing 'egrad' twice doesn't have the intended effect, so here I
    # split up the first derivative manually.
    def f(st, **kw):
        g1 = _g1(st, **kw)
        g2s = _g2s(st, **kw)
        g2t = _g2t(st, **kw)
        ds = g1[..., 0]
        dt = g1[..., 1]
        dss = g2s[..., 0]
        dst = g2s[..., 1]
        dtt = g2t[..., 1]
        return (-dt*dt*dss + 2*ds*dt*dst - ds*ds*dtt) / ((ds*ds + dt*dt)**(3/2))
    return f
f_arr, f = spiral_implicit(2.0, 0.4, 20.0, 0.0, 0.3)
fs = shape(f)
print(fs)
kw = {
    "xyz_min": (-0.5, -0.5, -0.5),
    "xyz_max": (0.5, 0.5, 0.5),
    "resolution": 20,
}
# To save directly as STL:
# fs.save_stl("spiral.stl", **kw)
print(f"letting libfive generate mesh...")
verts, tris = fs.get_mesh(**kw)
verts = np.array(verts, dtype=np.float32)
tris = np.array(tris, dtype=np.uint32)
print(f"Saving {len(verts)} vertices, {len(tris)} faces")
np.save("spiral_verts.npy", verts)
np.save("spiral_tris.npy", tris)
print(f"Computing curvatures...")
# Shape (N, 3, 3). Final axis is (x,y,z).
tri_verts = verts[tris]
# Compute all 3 midpoints (over each edge):
v_pairs = [(tri_verts[:, i, :], tri_verts[:, (i+1)%3, :])
for i in range(3)]
print(f"midpoints")
tri_mids = np.stack([(vi+vj)/2 for vi,vj in v_pairs],
axis=1)
print(f"edge vectors")
# Compute normalized edge vectors:
diff = [vj-vi for vi,vj in v_pairs]
edge_vecs = np.stack([d/np.linalg.norm(d, axis=1, keepdims=True) for d in diff],
axis=1)
print(f"perpendiculars")
# Find perpendicular to all edge vectors:
v1 = any_perpendicular(edge_vecs)
v1 /= np.linalg.norm(v1, axis=-1, keepdims=True)
# and perpendiculars to both:
v2 = np.cross(edge_vecs, v1)
print(f"implicit curves")
isect_2d = intersect_implicit(f_arr)
curv_fn = implicit_curvature_2d(isect_2d)
print(f"gradients & curvature")
k = curv_fn(np.zeros((tri_mids.shape[0], 3, 2)), pt_center=tri_mids, u=v1, v=v2)
print(f"writing")
np.save("spiral_curvature.npy", k)
# for i,k_i in enumerate(k):
# for j in range(k.shape[1]):
# mid = tri_mids[i, j, :]
# k_ij = k[i,j]
# v1 = tris[i][j]
# v2 = tris[i][(j + 1) % 3]
# print(f"{i}: {v1} to {v2}, {k_ij:.3f}")

View File

@ -0,0 +1,84 @@
# To-do items, wanted features, bugs:
## Cool
- More complicated: Examples of *merging*. I'm not sure on the theory
behind this.
## Annoying/boring
- https://en.wikipedia.org/wiki/Polygon_triangulation - do this to
fix my wave example!
- http://www.polygontriangulation.com/2018/07/triangulation-algorithm.html
- Clean up examples.ram_horn_branch(). The way I clean it up might
help inform some cleaner designs.
- I really need to standardize some of the behavior of fundamental
operations (with regard to things like sizes they generate). This
is behavior that, if it changes, will change a lot of things that I'm
trying to keep consistent so that my examples still work.
- Winding order. It is consistent through seemingly
everything, except for reflection and close_boundary_simple.
(When there are two parallel boundaries joined with something like
join_boundary_simple, traversing these boundaries in their actual order
to generate triangles - like in close_boundary_simple - will produce
opposite winding order on each. Imagine a transparent clock: seen from the
front, it moves clockwise, but seen from the back, it moves
counter-clockwise.)
- File that bug that I've seen in trimesh/three.js
(see trimesh_fail.ipynb)
- Why do I get the weird zig-zag pattern on the triangles,
despite larger numbers of them? Is it something in how I
twist the frames?
- How can I compute the *torsion* on a quad? I think it
comes down to this: torsion applied across the quad I'm
triangulating leads to neither diagonal being a
particularly good choice. Subdividing the boundary seems
to help, but other triangulation methods (e.g. turning a
quad into 4 triangles by adding the centroid; see the sketch
after this list) could be good too.
- Facets/edges are just oriented the wrong way...
- Picking at random the diagonal on the quad to triangulate with
does seem to turn 'error' just to noise, and in its own way this
is preferable.
- Integrate parallel_transport work and reuse what I can
- /mnt/dev/graphics_misc/isosurfaces_2018_2019 - perhaps include my
spiral isosurface stuff from here
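
A minimal sketch of the centroid split mentioned in the torsion item above: the quad gains its centroid as a new vertex and becomes a fan of four triangles, one per edge. The vertex and face representation mirrors the simple index lists used elsewhere here:

import numpy as np

def quad_centroid_fan(verts, quad):
    # verts: mutable list of np.array vertices; quad: (i0, i1, i2, i3).
    # Appends the quad's centroid as a new vertex and returns the four
    # triangles fanning around it.
    c = np.mean([verts[i] for i in quad], axis=0)
    ci = len(verts)
    verts.append(c)
    return [(quad[j], quad[(j + 1) % 4], ci) for j in range(4)]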
## Abstractions
- This has a lot of functions parametrized over a lot
of functions. Need to work with this somehow. (e.g. should
it subdivide this boundary? should it merge opening/closing
boundaries?)
- Some generators produce boundaries that can be directly merged
and produce sensible geometry. Some generators produce
boundaries that are only usable when they are further
transformed (and would produce degenerate geometry). What sort
of nomenclature captures this?
- How can I capture the idea of a group of parameters which, if
they are all scaled in the correct way (some linearly, others
inversely perhaps), generate geometry that is more or less
identical except that it is higher-resolution?
- Use mixins to extend 3D transformations to things (matrices,
cages, meshes, existing transformations)
- I can transform a Cage. Why not a CageGen?
## ????
- Embed this in Blender?
## Future thoughts
- What if I had a function that could generate a Cage as if
from a parametric formula and smoothly vary its orientation?
My existing tools could easily turn this to a mesh. If I could vary
the detail of the Cage itself (if needed), then I could also
generate a mesh at an arbitrary level of detail simply by sampling at
finer and finer points on the parameter space. (This might also tie
into the Parallel Transport work.)
- What are the limitations of using Cages?
- Current system is very "generative". Could I do basically an L-system
if I have rules for how a mesh is *refined*? What about IFS?
- Do this in Rust once I understand WTF I am doing
## Other thoughts
- Why do I never use the term "extruding" to describe what I'm doing?