This trainer covers a few demo effects (pixel morphing and static).

DENTHOR, coder for ...
_____   _____   ____   __   __  ___  ___ ___  ___  __   _____
/  _  \ /  ___> |  _ \ |  |_|  | \  \/  / \  \/  / |  | /  _  \
|  _  | \___  \ |  __/ |   _   |  \    /   >    <  |  | |  _  |
\_/ \_/ <_____/ |__|   |__| |__|   |__|   /__/\__\ |__| \_/ \_/
The great South African Demo Team! Contact us for info/code exchange!  

Grant Smith, alias Denthor of Asphyxia, wrote up several articles on the creation of demo effects in the 90s. I reproduce them here, as they offer so much insight into the demo scene of the time.

These articles apply some formatting to Denthor's original ASCII files, plus a few typo fixes.

Pixel Morphing

Have you ever lain down on your back in the grass and looked up at the cloudy sky? If you have, you have probably seen the clouds move together and create wonderful shapes… that cloud plus that cloud together make a whale… a ship… a face etc.

We can’t quite outdo Mother Nature, but we can sure give it a shot. The effect I am going to show you is where various pixels at different starting points move together and create an overall picture.

The theory behind it is simple: each pixel has a few pieces of data associated with it, the most important of which are:

This is my color
This is where I am
This is where I want to be.

The pixel, keeping its color, goes from where it is to where it wants to be. Our main problem is how it moves from where it is to where it wants to be. An obvious approach would be to say “If its destination is above it, decrement its y value, if the destination is to the left, decrement its x value and so on.”

This would be bad. The pixel would only ever move at set angles, as you can see below:

                Dest   O-----------------\
                                           \  <--- Path
                                                O Source

Doesn’t look very nice, does it? The pixels would also take different times to get to their destination, whereas we want them to reach their points at the same time, i.e.:

     Dest 1   O-------------------------------O Source 1
     Dest 2   O-----------------O Source 2

Pixels 1 and 2 must get to their destinations at the same time for the best effect. This is done by defining the number of frames or “hops” needed to get from source to destination. For example, we could tell pixel 1 it is allowed 64 hops to get to its destination, and the same for pixel 2, and they would both arrive at the same time, even though pixel 2 is closer.

The next question is: how do we move the pixels in a straight line? This is easier than you think…

Let us assume that for each pixel, x1,y1 is where it is, and x2,y2 is where it wants to be.

   (x2-x1) = The distance on the X axis between the two points
   (y2-y1) = The distance on the Y axis between the two points

If we do the following:

  dx := (x2-x1)/64;

we come out with a value in dx which is very useful. If we added dx to x1 64 times, the result would be x2! Let us check…

  dx = (x2-x1)/64
  dx*64 = x2-x1         { Multiply both sides by 64 }
  dx*64+x1 = x2         { Add x1 to both sides }

This is high school math stuff, and is pretty self explanatory. So what we have is the x movement for every frame that the pixel has to undergo. We find the y movement in the same manner.

  dy := (y2-y1)/64;

So our program is as follows:

  { Set x1,y1 and x2,y2 values }
  dx := (x2-x1)/64;
  dy := (y2-y1)/64;

  for loop1 := 1 to 64 do BEGIN
    putpixel (x1,y1);
    clearpixel (x1,y1);
    x1 := x1+dx;
    y1 := y1+dy;
  END;

If there were a compiler that could run the above pseudocode, it would move the pixel from x1,y1 to x2,y2 in 64 steps.
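Rendered in Python with plain floats for clarity (the coordinates are made up for the example, and the `putpixel`/`clearpixel` calls are shown only as comments; the original uses fixed point and writes to the screen directly), the loop becomes:

```python
# Move one pixel from (x1, y1) to (x2, y2) in 64 equal hops.
x1, y1 = 300.0, 150.0   # source (example values)
x2, y2 = 20.0, 90.0     # destination (example values)

dx = (x2 - x1) / 64     # per-hop movement on each axis
dy = (y2 - y1) / 64

for hop in range(64):
    # putpixel(round(x1), round(y1), colour)   -- draw at the current spot
    # clearpixel(round(x1), round(y1))         -- erase it before moving on
    x1 += dx
    y1 += dy

# After 64 hops, (x1, y1) has landed on (x2, y2).
```

Because dx and dy carry the fractional part, the path stays on the straight line between the two points instead of snapping to set angles.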

So, what we do is set up an array of many pixels with this information, and move them all at once… voilà, we have pixel morphing! It is usually best to use a bitmap which defines the color and destination of the pixels, then randomly scatter them around the screen.
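A sketch of that setup in Python (the names `build_morph` and `step` are my own, and the actual drawing is left out; each particle carries exactly the data listed earlier — its color, where it is, and its per-hop step toward where it wants to be):

```python
import random

HOPS = 64   # every pixel gets the same number of hops, so all arrive together

def build_morph(targets, width=320, height=200):
    """Scatter one particle per target pixel and precompute its per-hop step.
    `targets` is a list of (x, y, colour) tuples taken from the goal bitmap."""
    particles = []
    for tx, ty, colour in targets:
        sx = random.uniform(0, width - 1)    # random starting position
        sy = random.uniform(0, height - 1)
        dx = (tx - sx) / HOPS                # per-hop movement, as derived above
        dy = (ty - sy) / HOPS
        particles.append([sx, sy, dx, dy, colour])
    return particles

def step(particles):
    """Advance every particle one hop; call once per frame."""
    for p in particles:
        p[0] += p[2]
        p[1] += p[3]
```

After calling `step` 64 times, every particle sits on its target pixel and the picture has assembled itself out of the scattered dots.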

Why not use pixel morphing on a base object in 3d? It would be the work of a moment to add in a Z axis to the above.
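It really is the work of a moment; a hypothetical helper for the 3D case might look like this (`steps_3d` is an illustrative name, not from the original):

```python
# Adding a Z axis: the interpolation is unchanged, just one more coordinate.
def steps_3d(src, dst, hops=64):
    """Per-hop movement for a 3D point; src and dst are (x, y, z) tuples."""
    return tuple((d - s) / hops for s, d in zip(src, dst))
```

Each frame you add the three steps to the point’s position, then project it to the screen like any other 3D point.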

The sample program uses fixed point math in order to achieve high speeds, but it is basically the above algorithm.
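For instance, 16.16 fixed point (a common choice on this hardware, though the sample program’s exact format may differ) stores the fractional part of dx in the low 16 bits of an integer, so each frame costs only an integer add:

```python
SHIFT = 16                         # 16.16 fixed point: high bits = integer part

def to_fixed(v):
    return int(v * (1 << SHIFT))

def from_fixed(f):
    return f >> SHIFT              # integer (pixel) part only

x1, x2 = 300, 20                   # example coordinates
fx = to_fixed(x1)                  # current position, in fixed point
dx = to_fixed(x2 - x1) // 64       # per-hop step; fraction kept in the low bits

for _ in range(64):
    fx += dx                       # one integer add per frame

# from_fixed(fx) is back to a pixel coordinate: 20 in this case.
```

The fraction accumulates in the low bits and only spills into the pixel coordinate when it amounts to a whole pixel, which is exactly what the floating-point version does, minus the floating point.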


Static

A static screen was one of the first effects Asphyxia ever did. We never actually released it because we couldn’t find anywhere it would fit. Maybe you can.

The easiest way to get a screen of static is to tune your TV into an unused station … you even get the cool noise effect too. Those people who build TVs really know how to code ;-)

On a PC, however, it is not as easy to generate a screen full of static (unless you desperately need a new monitor).

What we do is this:

  • Set colors 1-16 to various shades of grey.
  • Fill the screen up with random pixels between colors 1 and 16.
  • Rotate the palette of colors 1 to 16.

That’s it! You have a screenful of static! To get two images in one static screen, all you need to do is fade up/down the specific colors you are using for static in one of the images.
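The three steps can be simulated in a few lines of Python (no real VGA palette access here; `grey_palette`, `static_screen`, and `rotate` are illustrative names, and the VGA DAC’s 0-63 range per channel is assumed):

```python
import random

def grey_palette(steps=16):
    """Shades of grey for colors 1..16 (VGA DAC channels range 0..63)."""
    return [(v, v, v) for v in (i * 63 // (steps - 1) for i in range(steps))]

def static_screen(width=320, height=200):
    """Fill the screen buffer with random color indices 1..16."""
    return [random.randint(1, 16) for _ in range(width * height)]

def rotate(palette):
    """One palette-rotation step: each color takes its neighbour's shade."""
    return palette[1:] + palette[:1]
```

Each frame you write the rotated palette to the DAC and leave the pixels alone — every dot appears to change, but only the palette moves.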

A nice thing about a static screen is that it is just palette rotations … you can do lots of things in the foreground at the same time (such as a scroller).

In closing

Well, that is about it … as I say, I will be doing more theory stuff in future, as individual demo effects can be thought up if you know the base stuff.

Note the putpixel in this GFX3.PAS unit… it is very fast… but remember, just calling a procedure eats clock ticks… so embed putpixels in your code if you need them. Most of the time a putpixel is not needed though.