
omino pixel blog

pixels, motion, and scripting
david van brink // Tue 2009.08.4 17:40 // {after effects extendscript}

AE: Scripting Notes

Just a quick note about a bug and an optimization when scripting After Effects CS4 (and probably earlier versions, too).

addProperty() bug

When adding several effects to a layer, each call to addProperty() invalidates the object references to previously added effects. Here’s a code fragment which shows the problem and the solution.

var comp = app.project.activeItem;
var layer = comp.layers.addNull();
layer.name = "null_layer";

var slider1 = layer.Effects.addProperty("Slider Control");
slider1.name = "s1";

var slider2 = layer.Effects.addProperty("Slider Control");
slider2.name = "s2";

// At this point, slider2 is valid, but slider1 is mysteriously not!
// Any action or reference to slider1 will cause an "invalid object" error
//
// What can we do?
// 
// Fortunately, they have names by which we can recover them

slider1 = layer.Effects.property("s1");
slider2 = layer.Effects.property("s2");

// Now they're both good to go.
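
If you’re adding effects in a loop, the same recovery works. Here’s a sketch of a helper (addSliders is just a made-up name) which adds everything first, and re-fetches by name only after the last addProperty() call:

function addSliders(layer, names)
{
	for (var i = 0; i < names.length; i++)
	{
		var s = layer.Effects.addProperty("Slider Control");
		s.name = names[i];
	}
	// Re-fetch every reference now that all additions are done.
	var sliders = new Array();
	for (var i = 0; i < names.length; i++)
		sliders.push(layer.Effects.property(names[i]));
	return sliders;
}

var sliders = addSliders(layer, ["a", "b", "c"]);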

setValueAtTime() Gets very slow!

If you do a whole lot of setValueAtTime() calls to set keyframes, the script will run very slowly. Fortunately, you can just call setValuesAtTimes(), the plural form, to set many at once, which is much more efficient! Makes the minutes seem like seconds, Captain.

// this will be very slow, if myData has more than a few dozen items
for(var i = 0; i < myData.length; i++)
	prop.setValueAtTime(myData[i].t,myData[i].v);

// but if we build up our arrays first...
var timesArray = new Array();
var valuesArray = new Array();
for(var i = 0; i < myData.length; i++)
{
	timesArray.push(myData[i].t);
	valuesArray.push(myData[i].v);
}
// and set them all at once
prop.setValuesAtTimes(timesArray,valuesArray);

// it will go lickety-split!
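
For instance, here’s a sketch that ties the two notes together (it assumes the null layer and the "s1" slider from the first fragment; "Slider" is the slider effect’s one value property):

var sliderProp = layer.Effects.property("s1").property("Slider");
var timesArray = new Array();
var valuesArray = new Array();
for (var i = 0; i < 100; i++)
{
	timesArray.push(i / 10);                      // a keyframe every tenth of a second
	valuesArray.push(50 + 50 * Math.sin(i / 5));  // a 0..100 sine wave
}
sliderProp.setValuesAtTimes(timesArray, valuesArray); // one call, many keyframes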

(Thanks, Creative COW thread!)

oh, i dont know. what do you think?


david van brink // Mon 2008.10.6 06:46 // {pixel bender}

Pixel Bender: mod() bug

The mod() Function

In the GL Shading Language, and in Pixel Bender, the mod function is defined like so: mod(x,y) => x - y * floor(x / y). For positive y, the result is always non-negative: we can see quickly that x’s sign goes away. And the floor function rounds towards negative infinity, so y * floor(x / y) is always “lower” (not “smaller”) than x.
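
A quick illustration in JavaScript, whose % operator is a truncated remainder rather than a floor-mod (floorMod here is just an illustrative helper, not a built-in):

// The GLSL/Pixel Bender definition, transcribed.
function floorMod(x, y) { return x - y * Math.floor(x / y); }

floorMod(-0.25, 1.0);  // 0.75  -- x's sign is gone
-0.25 % 1.0;           // -0.25 -- truncated remainder keeps x's sign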

The mod() Function, GPU vs CPU

Here is a short Pixel Bender kernel.

<languageVersion : 1.0;>
kernel modTest2
<
    namespace : "x";
    vendor : "omino.com";
    version : 1;
    description : "mod() bug";
>
{
    // on PBT build 35, produces different display on GPU/CPU
    parameter float span;
    output pixel4 dst;

    void evaluatePixel()
    {
        float x = outCoord().x / span - 1.0;
        float y = outCoord().y / span - 1.0;
        float m = mod(x,y);
        float t = x - y * floor(x / y); // computed by definition
        dst = float4(m / y,abs(m) / y,t / y,1);
    }
    
    region generated(){return region(float4(0,0,2.*span,2.*span));}
}

On the GPU, it produces: [image: GPU rendering of the kernel]

And on the CPU: [image: CPU rendering of the kernel]

The blue component is correct on both CPU and GPU because it’s computed according to the formula. The other two color components (m / y and abs(m) / y) reveal the nature of the bug: probably x / y is being rounded towards zero rather than floored, which turns mod() into exactly the truncated remainder shown earlier.

As an aside: Notice that some of the CPU rendering’s lines are quite a bit smoother than the GPU rendering’s. Subtly different arithmetics.

The Workaround

A workaround might not be needed, in practice. The mod() function seems to work correctly in Flash, even though it’s on the CPU. (The PBT and Flash implementations are, clearly, if surprisingly, different, since you can turn off the Flash errors & warnings in PBT. Also confirmed in this note by Adobe engineer Tinic Uro.)

On the other hand, After Effects CS4’s CPU-renderer might still have the bug. On the gripping hand, perhaps Adobe will fix it… I’ve had trouble signing up for the JIRA system to report it, though.

But to get correct render results on the CPU within PBT build 35 on Macintosh, simply use the expanded formula.

// float m = mod(x,y);          // may have a bug
float m = x - y * floor(x / y); // always works.
4 comments
Kerry // Sat 2008.10.18 22:10

> Notice that some of the CPU rendering’s lines are quite a bit smoother than the GPU rendering’s. Subtly different arithmetics.

Yes. The GPU’s computations use a floating-point scheme that is inferior to the CPU’s, in exchange for improved performance. (It uses fewer bits and a faster but less well-behaved rounding method.)

As stated by William Kahan, who architected IEEE Std 754,
“Gresham’s Law for Computing:
The Fast drives out the Slow even if the Fast is Wrong.”

david van brink // Sun 2008.10.19 13:32

Yes!

Even more distressing is that the default arithmetic results using doubles (doubles!) under GNU-C can produce different results on Power PC and Intel. (Presumably there’s a hi-fi compiler option someplace.)

Kerry Veenstra // Sun 2008.10.19 15:56

Hee hee! A history of floating-point computer arithmetic would fill a book. PowerPC’s floating point is pure IEEE Std 754, but none of Intel’s floating point is. (And that’s pure irony! Kahan’s work on the 8087 is what makes IEEE Std 754 so good.) The original 8087 used 10-byte numbers with 64-bit mantissas, but IEEE doubles use 8-byte numbers with 53-bit mantissas. To get results on an 8087 that are close to those of 8-byte IEEE doubles, one rounds the Intel 64-bit mantissas to 53 bits. Unfortunately, rounding first to 64 bits (as the 8087 does) and then rounding to 53 bits is not the same as rounding straight to 53 bits (as PowerPC does). So the results on PowerPC and on 8087 are slightly different. (“8087” means all x86 CPUs through the Pentium III.)
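
A toy illustration of that double rounding, in JavaScript, using tiny precisions (round-half-to-even, first to 4 fraction bits and then to 2, versus straight to 2):

// Round x to p fraction bits, ties to even (a toy model of IEEE rounding).
function roundBits(x, p)
{
	var scaled = x * Math.pow(2, p);
	var r = Math.round(scaled);        // rounds ties upward...
	if (scaled % 1 === 0.5 && r % 2 !== 0)
		r -= 1;                        // ...so nudge ties back to even
	return r / Math.pow(2, p);
}

var x = 1 + 1/8 + 1/32;           // 1.00101 in binary
roundBits(x, 2);                  // 1.25 -- rounded once, straight to 2 bits
roundBits(roundBits(x, 4), 2);    // 1.0  -- rounded twice; a different answer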

Then, with Pentium 4, Intel changed its floating-point strategy and implementation. First, Intel’s own compilers avoided the 8087-compatibility registers and instead used the processor’s SIMD media instructions for scalar computations. Since the SIMD instructions support 8-byte floating point numbers but not 10-byte 8087-like numbers, it sounds like one will get results that are the same as on the PowerPC. Unfortunately, Intel also went away from IEEE Std 754’s “gradual underflow.” Gradual underflow lets the CPU represent numbers between 2^(-1022) and 2^(-1074) without underflowing straight to zero.

As unlikely as their appearance may seem, these values will show up occasionally. And unfortunately the Pentium 4 emulates gradual underflow in software. By default such emulation is turned off, and for a good reason. Once a computation starts underflowing gradually, subsequent computations that use the gradually underflowed result also underflow. One sees a noticeable hit in performance as a thread of execution drags repeatedly through the emulation library routines. In media processing, such a hit is unacceptable. So on the Pentium 4 gradual underflow emulation is disabled by default, and you get a different floating-point result than you would get on a PowerPC.
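
For reference, JavaScript numbers are IEEE doubles with gradual underflow always on, so that range is easy to poke at (a small sketch):

var tiny = Math.pow(2, -1022);   // the smallest *normal* double
tiny / Math.pow(2, 52);          // 5e-324 -- the smallest subnormal, 2^(-1074)
tiny / Math.pow(2, 53);          // 0 -- past the subnormal range, straight to zero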

Kerry // Sun 2008.10.19 18:12

And BTW, if the gradual-underflow issue can be dealt with, it’s likely that *floats* will give the same result on Intel and PowerPC. This is because rounding to 64 bits first and then to 23 bits (as Intel does) gives the same result as rounding to 23 bits directly (as PowerPC does). I never found the proof, but I remember that rounding to 2x + 2 (or more) bits before rounding to x bits does not change the result of the final rounding.

oh, i dont know. what do you think?


