
Nature of Code: the final, and saddest day.

April 26, 2011

I cannot believe the semester is already over.  Clearly I’ve been remiss in posting about Nature of Code here.

It’s all on the class wiki; I’ll link to this work individually once the dust settles.


I find again and again that natural motion provides rich material for the study of animation.  Last semester in Big Screens, Patrick Hebron and I were inspired by Harold Fisk’s geological maps of the meandering motion of the Mississippi River over thousands of years to generate an algorithmic piece called MeandriCA.

Here’s a video of that project:

MeandriCA by Morgen Fleisig & Patrick Hebron from Patrick Hebron on Vimeo.



NoC Final

Harold Fisk’s maps continue to suggest new projects, and I’ve worked with them still further in the investigation of other algorithmic techniques this semester in Nature of Code.

Here are two screen shots of my latest sketches:

Java Applet and Processing Code:

Java Applet and Processing Code:

So what’s going on here and why?

Ever since Patrick introduced me to the darker side of image mapping–specifically of generating images with Perlin Noise and swapping them out with interchangeable arrays on the graphics card–I’ve been somewhat obsessed with the idea of images within images, of hidden images within more obvious ones.  Perspectives, erasures, tracings, reflections in still lifes, anamorphoses, architectural drawings in their two-dimensional projection of the three-dimensional world, Abbott’s Flatland–all are examples of this.

In much the same way, I’ve been fascinated with other ways of generating images besides Perlin Noise, and it occurred to me this semester that any array would do–they’re just numerical values.  Perlin Noise has the advantage that the adjacent values have a smooth relationship, but any values will do, as long as their inherent relative qualities are exploited.

For this project, I had a simple aim: map the motion, color, size and any other relevant qualities of a particle to the pixels of an underlying image.  In presenting this at the Midterm, Dan suggested I start with a much simpler image, and it occurred to me at that moment that the motion behavior and other qualities did not necessarily need to come from the same image.  That’s what I’ve implemented here.  The motion is being mapped by a vector field generated by a simple representation of a segment of the meandering Mississippi, reduced to black and white:

If you are saying to yourself that there’s something wrong with that vector field, you’d be right.  I’ll come back to that shortly.
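As a rough sketch of what that mapping involves, turning neighboring brightness values into a flow vector, here is a minimal, self-contained version in plain Java. The grid, the names, and the brightness-weighted direction scheme are my assumptions, not the actual sketch’s code:

```java
public class FlowFieldSketch {
    // hypothetical grayscale grid, brightness values in [0, 255]
    public static float[][] brightness = {
        {0,   0,   0},
        {0, 128, 255},
        {0,   0,   0}
    };

    // returns an {x, y} flow vector at cell (i, j): each neighbor's unit
    // direction is weighted by its brightness, so the result points toward
    // the brightest surrounding pixels
    public static float[] flowAt(int i, int j) {
        float vx = 0, vy = 0;
        for (int di = -1; di <= 1; di++) {
            for (int dj = -1; dj <= 1; dj++) {
                if (di == 0 && dj == 0) continue; // skip self
                int ni = i + di, nj = j + dj;
                if (ni < 0 || nj < 0 || ni >= brightness.length
                        || nj >= brightness[0].length) continue;
                float mag = (float) Math.sqrt(di * di + dj * dj);
                float w = brightness[ni][nj];
                vx += w * di / mag; // unit direction scaled by brightness
                vy += w * dj / mag;
            }
        }
        return new float[] {vx, vy};
    }

    public static void main(String[] args) {
        float[] v = flowAt(1, 1);
        System.out.println(v[0] + ", " + v[1]); // 0.0, 255.0
    }
}
```

On this toy grid the center cell’s vector points straight at its single bright neighbor, which is the basic behavior the river image is meant to induce.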

The first image has hard-coded colors, but the second one is sampling the colors from Harold Fisk’s maps themselves:

Rabbit Holes

or What I’ve learned and what still needs to be done.

So what’s up with those vectors?  I realized yesterday morning after painstakingly doctoring a pixellated seed image of the Mississippi that all this time I’ve been adding my vectors incorrectly.  I’ve been doing this:

          field[i][j] = PVector.add(v0,v1); //add self to neighbors
          field[i][j] = PVector.add(v0,v2);
          field[i][j] = PVector.add(v0,v3);
          field[i][j] = PVector.add(v0,v4);
          field[i][j] = PVector.add(v0,v5);
          field[i][j] = PVector.add(v0,v6);
          field[i][j] = PVector.add(v0,v7);
          field[i][j] = PVector.add(v0,v8);

While I thought I was adding them all together, I was actually simply adding v0 and v8: each assignment overwrites the previous one–they’re not cumulative.  I should be doing this:

          PVector temp = new PVector(0,0); // running total
          temp = PVector.add(temp,v1);     // each add now builds on the last
          temp = PVector.add(temp,v2);
          temp = PVector.add(temp,v3);
          temp = PVector.add(temp,v4);
          temp = PVector.add(temp,v5);
          temp = PVector.add(temp,v6);
          temp = PVector.add(temp,v7);
          temp = PVector.add(temp,v8); 

          field[i][j] = temp;
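The same cumulative sum can be written more compactly as a loop over the surrounding vectors. A stand-alone sketch in plain Java, with a tiny stand-in for Processing’s PVector (all names here are mine):

```java
public class NeighborSum {
    // minimal stand-in for Processing's PVector, just enough for summing
    public static class PVector {
        public float x, y;
        public PVector(float x, float y) { this.x = x; this.y = y; }
        public void add(PVector other) { x += other.x; y += other.y; }
    }

    // sums a cell's neighbor vectors cumulatively, as the corrected code does
    public static PVector sumNeighbors(PVector[] neighbors) {
        PVector temp = new PVector(0, 0);
        for (PVector v : neighbors) {
            temp.add(v); // cumulative: each add builds on the previous total
        }
        return temp;
    }

    public static void main(String[] args) {
        PVector[] neighbors = {
            new PVector(1, 0), new PVector(0, 1), new PVector(-1, 2)
        };
        PVector total = sumNeighbors(neighbors);
        System.out.println(total.x + ", " + total.y); // 0.0, 3.0
    }
}
```

A loop also makes the overwrite bug impossible to reintroduce, since there is only one add statement to get right.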

I fixed that, and then got this:

I tried it with a simpler gradated field, but still got a good deal of chaos:

I tried it with a simpler image in which the field is all black:

I corrected the algorithm and got most of the horizontal vectors tracking the river’s course, which is exciting, but I am still flummoxed by what’s happening with the boids at the turns.  Going through pixel by pixel, I realized that the algorithm now works well on the straightaways, but in the turns, adding the previous and subsequent pixels together throws off the resulting vector.  Effectively, I’m evaluating the pixels as free-body diagrams, multiplying adjacent brightnesses by their respective vectors and treating them as forces in tension:

Clearly I need to adjust the algorithm so that it ignores the pixel it just came from, and I’m going to come back to that, but as Falstaff said, “The better part of valour is discretion.”  Not having gotten past rendering the flow field as triangular boids, I decided to set this enquiry aside for a bit and work on rendering the boids as particles instead.
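One way that adjustment might look, sketched in plain Java: skip the cell the particle just arrived from when summing the neighbor forces. The grid, the names, and the bookkeeping for the previous cell are my assumptions, not the actual sketch’s code:

```java
public class TurnFix {
    // Sum neighbor forces but skip the cell the particle arrived from,
    // so the pull back toward the previous pixel doesn't cancel the turn.
    public static float[] steer(float[][] bright, int i, int j,
                                int prevI, int prevJ) {
        float vx = 0, vy = 0;
        for (int di = -1; di <= 1; di++) {
            for (int dj = -1; dj <= 1; dj++) {
                if (di == 0 && dj == 0) continue;         // skip self
                int ni = i + di, nj = j + dj;
                if (ni == prevI && nj == prevJ) continue; // skip where we came from
                if (ni < 0 || nj < 0 || ni >= bright.length
                        || nj >= bright[0].length) continue;
                vx += bright[ni][nj] * di; // brightness-weighted direction
                vy += bright[ni][nj] * dj;
            }
        }
        return new float[] {vx, vy};
    }

    public static void main(String[] args) {
        // a 90-degree turn: bright pixels behind (left) and below
        float[][] bright = {
            {0,     0,   0},
            {255, 255,   0},
            {0,   255,   0}
        };
        // at the corner cell (1,1), having just arrived from (1,0)
        float[] v = steer(bright, 1, 1, 1, 0);
        System.out.println(v[0] + ", " + v[1]); // 255.0, 0.0
    }
}
```

At the corner cell, skipping the arrival cell leaves only the downstream pixel contributing, so the resulting vector follows the turn instead of being dragged back toward the pixel the boid just left.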

Boids as Particles.  My plan all along had been to render the boids as the particles I had developed based on Karl Sims’ Siggraph paper, “Particle Animation and Rendering Using Data Parallel Computation,” since such a particle more closely approximates the behavior of a water droplet, with a head and tail, for the reasons he describes.

• I had worked this particle out to some extent in week 4:

My other plan was to continue on with the work I had done subsequently, using an underlying image for the particle to sample color.  This I had worked out in week 6 for the Midterm.

• Here I’m sampling from a photo I took in Oregon of a cow:

• Here I’m sampling from Harold Fisk’s map:

So the main work in all of this was assembling these different strategies.

Next Steps.

1. Fix the flow field algorithm.

A. In the case of a black background, track the previous pixel and do not add its vector to the resultant.

B. The gradated backgrounds are not helping at the moment.  After mastering the algorithm for the bright pixels the next simplest step might actually be to write a loop that checks adjacent dark pixels and adjusts them based on how near they are to a bright pixel.  Gradating the pixels by hand is also a massive pain… more about that below.

C. Implement a better algorithm that modifies non-river pixels over time by adding a fraction of the value of the river pixel to them, emulating erosion.

2. Modify the color sampling algorithm so that the particle maintains the color from birth to death.

3. Implement the particles’ lifetime.  I like the current density that the Iterator produces, but it obviously ultimately bogs down the processor.

4. Implement a layered background.  One idea that Patrick and I have been talking about for a long time is how to also algorithmically generate man-made intrusions into the landscape, such as the property lines visible on Fisk’s maps.  It is interesting that the lots were originally laid out perpendicularly to the banks of the Mississippi–in Louisiana at least–but that over time, that geometric condition becomes less obvious as the river meanders.  Sometimes the property even becomes submerged and then ends up on the opposite bank.

5. Pixel doping.  Optimally, this project would work with a normal image, for example:

Once I get the flow field algorithm working properly I think it will, but there’s a lot to that, and until then I’ll be working with very simple pixellated images.  I began by posterizing the image above in Photoshop:

Then, on Dan’s suggestion, I reduced it to the ultimate number of columns and rows in the flow field to make evaluation easier.  This resulted in a 30×35 pixel grid:

I upped the contrast on it four times, and then went in and doctored the pixels individually with the pencil tool:

Based on my thinking about the pixels in the straightaways (see above), I felt compelled to go in and increment the brightness of each pixel along the river so that it reaches 255 at the bottom of the screen:

Many thanks to Don Miller for recommending Pixen–it’s way easier to use than Photoshop for stuff like this.  It crashed frequently, however.  I’m not sure whether it’s a Snow Leopard issue, insufficient memory, or what, but save frequently.  I colored the balance of the river so that I could keep track of which pixels I had finished.  I spent a lot of time feathering the field too, and now I wish I had not, because that did not solve my flow field issues:

Optimally, this work would happen in the body of the code.  I got this far with that project before I set it aside as well:

(Actual Size)

I think there are perhaps two people at ITP who would be marginally interested in my finishing it.  Here’s a Pixen image of it:

Java Applet and Processing Code:

A final footnote: all of this work is premised on the idea that a flow field is the optimal approach.  During my Midterm presentation, several other options were discussed–path following, attractors and repellers, and an actual physical model in Box2D.  Each of course would produce a different effect, and I would like to explore each to better understand how to exploit those effects.  For example, I worked up this sketch “Meander #9” based on Dan’s path following examples back in October when Patrick and I were developing our ideas for Big Screens:


Much credit for my code goes of course to Dan Shiffman, Craig Reynolds and Karl Sims for their work, and I’ve given them credit in the body of the code where I hope it is most appropriate.  I must add Harold Fisk and Sid Gray for turning me onto him, Patrick Hebron, and Lewis Carroll and John Tenniel as well.


From → ITP, Nature of Code
