Tools for final video: Photoshop, Twixtor, After Effects, PaulStretch
First, I found Metroid spritesheets on the internet. I used Photoshop to isolate frames of the running animation, then created an action to resize each frame by a fixed amount using nearest-neighbor interpolation. I imported the series of frames into After Effects and applied Twixtor to interpolate and morph between frames. Then I created a comp with the Twixtor output and moved it across the screen; that comp was placed inside another comp with an Echo effect.
I made this video twice. I started in After Effects, since it is a great platform for creating motion graphics, and the plugin I used to morph between frames of the sprite animation was built for AE, so it provided a clear workflow. When I started rendering the product in After Effects, however, I found that the estimated time to complete the render kept growing (which makes sense: one more past frame had to be added to the render queue with each frame, until the half-way point of the video was reached). I understood why the render was taking so long, but at an estimated 50 hours it was just too darn long, especially if I needed to iterate. So I decided to take an afternoon to make an optimized approximation of the effect in Processing, and thus p55-Echo was born.
P55-Echo is an approximation of the Echo effect in AE. There are two rendering modes. One renderer is fast: it uses a simple frame accumulator, where a buffer stores past echoes, so each new frame only requires blending one image into the composite before rendering out to disk. Easy. The slow rendering mode does it ‘right’, like After Effects, and re-renders all echoes every frame at the appropriate opacity. The tool is easy to use: feed it a directory of .tif images and it spits out a series of .tifs. Rendering options are described via comments in the code. Look at a sample (fast) render. Check out Processing and p55-Echo; both are free.
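To make the accumulator idea concrete, here is a minimal Java sketch of the fast mode's core loop (Processing is built on Java). The class and method names are illustrative, not the actual p55-Echo API, and it works on a single grayscale channel for simplicity:

```java
// Hypothetical sketch of a fast echo accumulator (illustrative names,
// not the real p55-Echo code). Each new frame is blended once into a
// persistent buffer, so frame N costs one blend instead of N.
public class EchoAccumulator {
    private final float[] buffer;  // running echo composite; float to limit rounding
    private final float decay;     // opacity multiplier applied to past echoes

    public EchoAccumulator(int pixelCount, float decay) {
        this.buffer = new float[pixelCount];
        this.decay = decay;
    }

    // Fade the accumulator, blend one new frame (0-255 grayscale values)
    // on top of it Add-style, and return the composited frame.
    public int[] addFrame(int[] frame) {
        int[] out = new int[frame.length];
        for (int i = 0; i < frame.length; i++) {
            buffer[i] = buffer[i] * decay + frame[i];
            out[i] = Math.min(255, Math.round(buffer[i]));  // clip to 8-bit output
        }
        return out;
    }
}
```

Because the buffer only decays and accumulates, the cost per frame is constant regardless of how many echoes are "in" the image, which is what makes the fast mode fast.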
The p55-Echo code could stand a refactoring to make it more modular and easier to interact with programmatically. Since I’m not happy with the quality of the output due to how Processing handles color, I have no further personal interest in the code, but I would be happy to clean it up if others have a use for it.
The After Effects approach wins on quality: proper re-rendering of past frames, plus a greater-than-8-bit-per-channel pipeline, creates a higher-quality final output.
Unfortunately, if not every past frame is echoed, there will be a visible ‘crawl’, since the set of frames used for echoes advances by one with each successive frame of the source clip. Plus, it takes forever (this is not a knock against Adobe’s engineering; it is slow because it renders without any quality-compromising shortcuts).
The fast mode in p55-Echo is about 66x faster than performing a similar operation in After Effects. It also fixes the frame-crawling problem, because it echoes, say, all past frames whose indices are multiples of 3, rather than the frame 3 behind the current frame, the frame 3 behind that, and so on.
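The difference between the two frame-selection strategies can be sketched like this (illustrative Java, not the actual p55-Echo code):

```java
import java.util.ArrayList;
import java.util.List;

// Two ways of picking which past frames to echo, with a stride of 3.
public class EchoPick {
    // After Effects-style: offsets relative to the current frame,
    // so the echoed set shifts ("crawls") as the current frame advances.
    static List<Integer> relative(int current, int stride) {
        List<Integer> picks = new ArrayList<>();
        for (int f = current - stride; f >= 0; f -= stride) picks.add(f);
        return picks;
    }

    // p55-Echo-style: fixed absolute indices (multiples of the stride),
    // so the echoed set is stable from frame to frame and only grows.
    static List<Integer> absolute(int current, int stride) {
        List<Integer> picks = new ArrayList<>();
        for (int f = 0; f < current; f += stride) picks.add(f);
        return picks;
    }
}
```

With the relative scheme, advancing from frame 10 to frame 11 shifts every echoed index by one (frames 7, 4, 1 become 8, 5, 2), which reads on screen as a crawl; with the absolute scheme the set stays put (0, 3, 6, 9 in both cases).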
The trade-off is lower image quality. Processing stores color information in its 32-bit, 4-channel ‘color’ datatype, so each channel gets only 8 bits; multiplying a channel by a fractional opacity therefore rounds the result back to the nearest integer, which causes banding under certain iterative operations. And with cumulative blend modes like Screen or Add, since past echoes are not re-rendered, anything that clips to white fades out to grey and then to black, instead of being re-calculated each frame to cycle through its actual spectrum of colors.
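A toy example of the rounding problem: fading an 8-bit channel by 10% per frame stalls once the rounding error outweighs the decrement, while a float channel keeps decaying. This is a minimal sketch of the failure mode, not p55-Echo code:

```java
// Demonstrates why per-step integer rounding causes banding/stalling
// in iterative fades, compared with keeping the channel in float.
public class FadeDemo {
    static int intFade(int channel, int steps) {
        for (int i = 0; i < steps; i++) {
            channel = Math.round(channel * 0.9f); // rounds back to an int each step
        }
        return channel;
    }

    static float floatFade(float channel, int steps) {
        for (int i = 0; i < steps; i++) channel *= 0.9f; // no quantization
        return channel;
    }
}
```

Since Math.round(4.5f) is 5, an integer channel that reaches 5 never decays any further under a 0.9 fade, no matter how many frames pass; that kind of quantization is exactly what shows up as banding.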