
omino pixel blog

pixels, motion, and scripting
david van brink // Tue 2007.12.18 00:47 // {after effects}



Sometimes you just say, "Frame-to-frame coherence, who needs it?"

Play with the movie controller and step through the frames. I think it's wonderful how most of the frames are nearly unrecognizable, but in the moving aggregate it's quite easy to see what the action is. More or less, anyway.

The above movie was rendered using a limited color palette and error diffusion. Error diffusion is one solution to the problem: given this image, and these colors, how can we best draw it? (Wikipedia says it's a kind of halftoning.) Here are four examples.

The top left image has been drawn with just black and white. The algorithm starts at the top left pixel and says, "Hmm, black is the closest color, but it's just a little bit off." It adds that "little bit off" to the next pixel, and, eventually, needs to draw a white pixel. But white is too white… so it's back to black pixels for a while. And so on.
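That walk — quantize a pixel to the nearest available color, carry the leftover error into the next pixel — can be sketched in a few lines of Python. This is an illustration only; the actual filter's algorithm is surely more elaborate, and the function name is mine:

```python
def diffuse_row(row):
    """Quantize a row of grayscale values (0-255) to pure black or white,
    pushing each pixel's quantization error onto the next pixel."""
    out = []
    error = 0.0
    for value in row:
        adjusted = value + error                 # carry in the accumulated error
        pixel = 255 if adjusted >= 128 else 0    # nearest available color
        error = adjusted - pixel                 # the "little bit off" moves on
        out.append(pixel)
    return out

print(diffuse_row([100] * 8))
# → [0, 255, 0, 255, 0, 0, 255, 0]
```

A flat mid-gray row comes out as a sprinkle of black and white whose average hovers near the original value — which is the whole point of diffusing the error instead of discarding it.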

The other three images use other sets of available colors, plus variations such as damping or exaggerating the error amount, walking in different directions (up, down, left, or right), and forcing minimum stripe sizes.

This is done with the omino diffusion filter, available in my free After Effects plugins. Download here. (I promise to soon post the universal binary version of these, for CS3! Just testing them now.)

The movie above was created by randomizing every single parameter. Fully random, no wiggle, no keyframes… new values every frame.

I suppose the original motivation for “error diffusion” was to strive towards image fidelity. Hmm. Aah. Joy.


(c) 2003-2013 omino.com / contact poly@omino.com