Vein Structures

In honor of Halloween, I thought I’d post some spooky pictures and discuss how they were made. But first, please take a minute to check out my newly redesigned website, which is up and running at http://davidbachmandesign.com. A special thank you goes out to my friend, Ben Benjamin, for his advice on the design.

Anyone who is interested in 3D printing, art, design, and mathematics has most likely come across the work of Jessica Rosenkrantz and Jesse Louis-Rosenberg at their company, Nervous System. Many of their designs for lamps, clothing, furniture, etc. are generated with biologically inspired algorithms. One of my favorites is a series of lamps they call “Hyphae.” (Photo shown here with permission.)

[Image: Hyphae lamp by Nervous System]

As they explain here on their website, these lamps were made with their implementation of an algorithm that simulates the veins in a leaf, detailed in the paper “Modeling and visualization of leaf venation patterns” by Adam Runions et al. (available here). For anyone interested in understanding this algorithm, I highly recommend watching the animation of it on the Nervous System website.

After spending some time understanding this algorithm myself, I decided to try my own implementation. I was interested in growing veins on a meshed surface of arbitrary topology, which turned out to require some significant modification of the original algorithm. After a month or so of fiddling, I finally started to get some results I was happy with, like the veins on this skull:

And this “eyeball”: [Image: eyeball, side view]

My algorithm is mostly implemented in a Python component of a larger Grasshopper script. Before the algorithm is run, the user selects a vertex of the mesh to be the “root” and a collection of vertices to be the “sources.” These selections are made with the “Select Mesh Vertices” component of the Grasshopper Mesh+ plugin. (As an aside, my Grasshopper textbook is well underway. Look for an announcement in the next few months!)

Here’s a brief description of my algorithm, with a few simplifications at each step to (hopefully) make it understandable. Feel free to contact me for more details, and my apologies for any incomprehensible technical jargon. If you get bored, just scroll down to the end for one more picture! In each iteration of the algorithm, we decide how to grow a tree (which initially consists of just the root vertex) toward the sources; a rough Python sketch of the whole loop follows the list.

  1. Compute the shortest path from each source to the tree. Each vertex of the tree at the endpoint of at least one such path is a “growth site.” In the next steps, we determine in which direction to grow from each growth site.
  2. Weight the edges adjacent to each growth site so that an edge picks up one unit of weight if it is the initial edge of a shortest path between the tree and some source.
  3. For each growth site, find the edge emanating from it that is closest to the “weighted center” of all of the edges adjacent to that site.
  4. “Grow” the tree in the direction of this edge by adding it to the tree.
  5. If the new edge touches a source, remove that source from the collection of sources.
  6. Repeat until all sources are gone.
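
For the programmers out there, here is a minimal sketch of that loop in plain Python. To be clear, this is not the code from my Grasshopper script: I’m assuming the mesh comes pre-packaged as an adjacency dictionary with edge lengths plus a dictionary of vertex positions, I’ve interpreted the “weighted center” rule of step 3 as picking the edge best aligned with the weighted average direction of a growth site’s weighted edges, and all of the names are made up.

```python
import heapq

def grow_veins(adj, pos, root, sources):
    """Grow a vein tree from root toward the sources (sketch).

    adj     -- {vertex: [(neighbor, edge_length), ...]}, the mesh edges
    pos     -- {vertex: (x, y, z)}, the mesh vertex positions
    root    -- vertex where the tree starts
    sources -- vertices the tree should grow toward
    """
    tree, tree_edges = {root}, set()
    weight = {root: 0}
    sources = set(sources) - tree

    while sources:
        # Step 1: one multi-source Dijkstra pass outward from the tree,
        # so dist[s] is the shortest distance from source s to the tree.
        dist = {v: 0.0 for v in tree}
        prev = {}
        pq = [(0.0, v) for v in tree]
        heapq.heapify(pq)
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj[u]:
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    prev[v] = u
                    heapq.heappush(pq, (d + w, v))

        # Step 2: walk each source back to the tree; the last edge of the
        # walk is the first edge of that source's shortest path, and it
        # picks up one unit of weight.
        edge_weight = {}
        for s in sources:
            v = s
            while prev.get(v) is not None and prev[v] not in tree:
                v = prev[v]
            u = prev.get(v)
            if u is not None:                    # skip unreachable sources
                edge_weight[(u, v)] = edge_weight.get((u, v), 0) + 1
        if not edge_weight:
            break                                # nothing left to grow toward

        # Steps 3-4: at each growth site, grow along the edge pointing most
        # nearly toward the weighted average direction of its weighted edges
        # (my reading of the "weighted center").
        for u in {a for (a, _) in edge_weight}:
            cands = [(v, w) for (a, v), w in edge_weight.items() if a == u]
            avg = [sum(w * (pos[v][i] - pos[u][i]) for v, w in cands)
                   for i in range(3)]
            best = max(cands, key=lambda vw: sum(
                (pos[vw[0]][i] - pos[u][i]) * avg[i] for i in range(3)))[0]
            tree.add(best)
            tree_edges.add((u, best))
            weight[best] = weight.get(best, 0) + sum(w for _, w in cands)

        # Step 5: drop any source the tree has now reached.
        sources -= tree

    return tree_edges, weight
```

On a real mesh, adj and pos come straight from the mesh’s edge and vertex lists; everything else here is a simplification of what my script actually does.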

The above algorithm creates a polygonal tree through the mesh edges. To get a nice veiny structure, I keep track of only the vertices of each branch, together with the total of the edge weights at each one. Those vertices are then used to construct NURBS curves, and a tapered pipe is built around each curve with radii proportional to the weights.
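
If you’re doing this in Rhino, that last step might look something like the sketch below. Again, this is not my actual code: pipe_branch and its scale factor are made up, and I’ve tapered each pipe with just two radius stations (as I understand rhinoscriptsyntax’s AddPipe, it takes a list of normalized curve parameters and matching radii).

```python
import rhinoscriptsyntax as rs

def pipe_branch(branch_points, root_weight, tip_weight, scale=0.05):
    """Turn one branch of the vein tree into a tapered pipe (sketch)."""
    # A smooth NURBS curve through the branch's vertices.
    curve = rs.AddInterpCurve(branch_points)
    # Two radius stations, at the normalized curve parameters 0 and 1,
    # with radii proportional to the accumulated weights at each end.
    return rs.AddPipe(curve, [0.0, 1.0],
                      [scale * root_weight, scale * tip_weight], cap=1)
```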

Unfortunately, the algorithm is very time-consuming. The detail of the resulting vein system is limited by how fine a mesh is used, but on a finer mesh the shortest-path computations, which must be re-run at every iteration, take much longer. This is why the vein systems I show here are nowhere near as intricate as the one in the Nervous System photo above.
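
To put a rough number on that: in the sketch above, each iteration is one Dijkstra pass, which costs roughly O(E log V) on a mesh with V vertices and E edges, and the number of iterations also grows as the veins lengthen. So refining the mesh raises both factors at once.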

[Image: veins grown on a torus]

There’s been a lot of beautiful work on creating tree structures in Grasshopper. For example, there are really nice forum discussions featuring an algorithm by Daniel González Abalde and various experiments with it, primarily by Nik Wilmore. While these algorithms produce some amazing results in 2 and 3 dimensions, the kinds of trees they produce are structurally different from the ones I’ve shared here. Shortly I will post my own adaptation of that algorithm to arbitrary meshes and compare the results.


3D scanning for fun and profit

Most of my posts so far have been about experiments or challenges in 3-dimensional design for 3D printing. However, it’s possible to get into 3D printing without knowing much about design software if you have a 3D scanner. In theory, to create an object suitable for 3D printing you can just sculpt it by traditional methods (carve wood, chisel stone, build up clay, etc.) and scan it.

Unfortunately, in my experience this is often much harder than it sounds. The cheapest scanners are easy to use but often do not capture enough detail. Slightly better scanners are not as user-friendly, and professional-grade scanners are out of my price range (they’re often over $10K). Here I’ll describe my experience with the scanners I’ve used, and some tips I’ve picked up along the way for successful scanning.

The first scanner I tried was the Sense, by 3D Systems. It costs about $400 and is available for both Mac and PC. There is a newer model now than the one I used, and I don’t know how many improvements have been made since then. The unit I used looks like an industrial stapler and must be tethered to a computer. You aim it at an object and walk around it; the software tracks the object in real time and tries to reconstruct its shape in 3D.

The best thing about the Sense is its cost. It’s cheap, and it does a reasonable job with human-sized objects. However, it’s hard to walk around an object while the unit keeps you tethered to your computer. It also lost tracking for me about 60% of the time, and every time that happens you have to start over. There are probably tricks to avoid this, but I never found anything that worked well.

With all that said, I did some fun projects with it. The one I like best was when I scanned all of my family members and turned us into a chess set. I’m the king, my wife is the queen, my two boys are the rook and bishop, my dog is the knight, and the Roomba that scurries around our house is the pawn (my son’s idea). With the exception of the Roomba, each piece was scanned, and accessories (stands, crowns, glasses, and a little dog house) were added afterwards on the computer. The Roomba was modeled from scratch in Rhino.

An even cheaper low-end 3D scanner isn’t really a scanner at all. 123D Catch, by Autodesk, is a fantastic free smartphone app that often does the job quite nicely. At the time I made the above chess set, this program was still giving fairly crude results. Since then it’s improved so much that it is now my go-to solution for quick, cheap scans of relatively large objects. The way it works is that you walk around your object and take lots of pictures. Those pictures get uploaded to Autodesk’s servers, which grind them through some presumably very sophisticated image-processing software. After a relatively short wait, you get back a 3D model of your object. The cool thing is that scale is not an issue: you can scan things as big as a building or as small as a person. However, very small items are tricky, as the software will try to reconstruct everything your camera has photographed. This instructable has some great information on how to deal with that, although I’ve never personally had much success using 123D Catch to scan small objects.

In the last few years I’ve gotten a lot more serious about my design work and have at times needed a professional-quality scanner. As I mentioned above, most of those are well over $10K, which I just can’t afford. The first thing I tried was an ultra-cheap solution: the Ciclop, a DIY open-source laser scanner by bq that can supposedly capture very fine details. It uses two cheap lasers pointed at a small object on a turntable, and a webcam to capture the resulting pattern on the object. I spent hours getting it all put together and set up, but I was never able to get a decent scan, so eventually I gave up in frustration.

Finally, I decided I should bite the bullet and spend more than a few hundred bucks, though I still didn’t have nearly enough for a high-end machine. The best compromise I found is the Einscan-S, which sells for about $1,000. This machine projects a changing white-light pattern onto an object and captures the result in stereo with two cameras. The model I bought came with a turntable, but I never found that very useful.

The Einscan software offers a turntable mode and a free-scan mode. What I normally do is place an object on the turntable from the Ciclop scanner and use free-scan mode to capture lots of images of it in different positions. Each time an image is captured, the software reconstructs more of the object’s 3-dimensional form. It rarely loses tracking from one image to the next, and when it does you can just discard the last image. Here’s a scan I did recently of a bronze statue of Alice (from Alice in Wonderland) by the artist Karen Mortillaro. (More on my collaboration with her in a later post!)

Notice the white powder on the original bronze statue in the photo. That’s a standard trick in 3D scanning: evenly coat your object with baby powder so that the scanner’s cameras pick it up better.

You can see in the picture that the smallest details, like the texture of the hair, are not present in the scan. That may be the difference between a professional-quality scanner and an almost-professional-quality one, although those particular details may be too small even for the best scanners.

The Einscan-S is not as easy to use as, say, 123D Catch. However, it definitely picks up a lot more detail. It does not capture color information, which is fine if you’re like me and only interested in 3D printing the resulting objects. The biggest drawback is the size limitation: I have only been able to get it to capture objects less than 12″ tall.

Einscan now has a “pro” version that looks a lot like the Sense. They claim it captures even more detail than the “S” version, handles larger objects, and records color. I don’t know how accurate those claims are, and it costs about three times as much. If I ever get to play with one, I’ll post those results here!
