Our Purpose

The Pixel-Planes Project is a research group dedicated to building graphics engines with an emphasis on scalability and real-time rendering. It is also our goal to provide hardware and software platforms upon which new graphics and computer-interaction techniques can be explored.

The project's name reflects one of the principal techniques we use for fast rendering: the basic building block of all of our systems is a plane of processors, each with a few bytes of its own memory, operating in unison. Each pixel (picture element) on the screen is associated with a unique processor. Since each processor knows its x and y screen coordinates, we can send out the equation for a line (a plane equation, really), and each processor can compute which side of the line it is on. If we send out the equations for three lines, we can easily find the processors inside the resulting triangle by disabling those that fall outside. Using the same kind of plane equations, we can also quickly compute how far each pixel is from the viewer. We can then shade and texture each of these pixels in a similarly rapid manner. By retaining the depth information, each pixel can tell whether its previously stored value is closer than the new one and, if so, drop out of the computation.
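To make the idea concrete, here is a minimal C sketch of that per-pixel evaluation, run serially over a toy 8x8 screen. It illustrates the concept only and is not Pixel-Planes microcode; the plane coefficients, the eval helper, and the "inside means all three values nonnegative" convention are assumptions made for the example.

```c
#include <stdio.h>

#define W 8                 /* toy screen width  */
#define H 8                 /* toy screen height */

/* Coefficients of a broadcast plane equation A*x + B*y + C. */
typedef struct { float A, B, C; } Plane;

static float eval(Plane p, int x, int y) { return p.A * x + p.B * y + p.C; }

int main(void) {
    /* Three edge planes of a made-up triangle; "inside" means all three
     * evaluate to a nonnegative value at the pixel's (x, y). */
    Plane edge[3] = { { 1, 0, -1 }, { 0, 1, -1 }, { -1, -1, 10 } };
    /* Depth expressed as a plane over the screen: z = A*x + B*y + C. */
    Plane zplane  = { 0.1f, 0.05f, 1.0f };

    float zbuf[H][W];
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            zbuf[y][x] = 1e9f;                    /* initialize depth to "far" */

    /* Every pixel evaluates the same broadcast equations; in the real
     * hardware all of these evaluations happen at once. */
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            int inside = eval(edge[0], x, y) >= 0 &&
                         eval(edge[1], x, y) >= 0 &&
                         eval(edge[2], x, y) >= 0;
            float z = eval(zplane, x, y);
            if (inside && z < zbuf[y][x])         /* nearer values win */
                zbuf[y][x] = z;                   /* shading would happen here */
            putchar(inside ? '#' : '.');          /* crude coverage display */
        }
        putchar('\n');
    }
    return 0;
}
```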


Beginnings

The first machine built here at UNC under the project was Pixel-Planes 2, a very humble machine indeed by today's standards. As you can see above, Pxpl2 had a screen resolution of just 4x64 pixels and only 16 bits of memory per pixel, and it could display only a few polygons per second. Yet even with those statistics, this early prototype showed the power of the concepts.

Pixel-Planes 4

Pxpl4, as it is called for short, is a machine that has an array of 512x512 processors (yes, over a quarter-million!) operating in synchrony. Each pixel processor has 72 bits of memory at its disposal. The video image is produced directly from what is stored in this memory. A front-end processor based upon a Weitek chip set performs the initial geometry computations, and the results of this step are fed to the processor array. This machine could draw at a rate of about 40,000 triangles per second.
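The division of labor between the front end and the array can be pictured with a short sketch. Assuming that part of the front end's job is to turn a triangle's projected screen-space vertices into the A, B, C coefficients that get broadcast to every pixel processor, the computation might look roughly like this; the edge_plane helper and the vertex values are hypothetical, not the actual Weitek front-end code.

```c
#include <stdio.h>

typedef struct { float x, y; }    Vertex2;  /* projected screen-space vertex */
typedef struct { float A, B, C; } Plane;    /* broadcast coefficients        */

/* Line through v0 and v1 as A*x + B*y + C; one side of the edge gives
 * positive values, the other side negative. */
static Plane edge_plane(Vertex2 v0, Vertex2 v1) {
    Plane p;
    p.A = v0.y - v1.y;
    p.B = v1.x - v0.x;
    p.C = v0.x * v1.y - v1.x * v0.y;
    return p;
}

int main(void) {
    /* A made-up triangle after transformation and projection. */
    Vertex2 tri[3] = { { 10, 5 }, { 100, 20 }, { 40, 90 } };

    for (int i = 0; i < 3; i++) {
        Plane p = edge_plane(tri[i], tri[(i + 1) % 3]);
        /* These three coefficient triples are what would be sent down to
         * the pixel-processor array. */
        printf("edge %d: A=%g B=%g C=%g\n", i, p.A, p.B, p.C);
    }
    return 0;
}
```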

Pixel-Planes 5

The problem with Pxpl4 is that as scenes became more complex, the triangles (and the other graphics primitives that make up a scene) became smaller and more numerous. Thus, when a small triangle is being drawn, most of the pixel processors sit idle because they are not covered by that triangle. This means poor efficiency.

Pxpl5 solves this problem by breaking up the plane of the screen into multiple tiles, each of which can perform computations independently. Thus, instead of a single 512x512 array, there are multiple 128x128 arrays (up to about 20 of them, configurable). Each of these pixel processors has 208 bits of primary memory available, plus 4096 bits of secondary memory. In addition, instead of a single geometry processor, there are many (up to about 50 of them, also configurable); the Intel i860 was chosen for this task. Rather than generating the video image from the memory of the processor arrays, the data is sent to a separate frame buffer. This also allows the use of multiple frame buffers of various types, including a high-resolution (1280x1024) model. Connecting all of these pieces is a high-speed ring network with 640 megabytes-per-second bandwidth. The net result of such a system is a rendering rate of over 2 million triangles per second.
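One way to picture the tiling is as a bucket sort of primitives over 128x128 screen regions, so that a renderer array only sees work that can affect its region. The sketch below is a conceptual illustration under that assumption, not Pxpl5 system software; the screen size, tile layout, and bounding-box values are made up for the example.

```c
#include <stdio.h>

#define TILE      128
#define SCREEN_W  1280
#define SCREEN_H  1024
#define TILES_X   (SCREEN_W / TILE)   /* 10 tiles across */
#define TILES_Y   (SCREEN_H / TILE)   /*  8 tiles down   */

typedef struct { int xmin, ymin, xmax, ymax; } Bbox;  /* screen-space bounds */

int main(void) {
    /* Screen bounding box of one hypothetical small triangle. */
    Bbox b = { 200, 300, 250, 360 };
    int count[TILES_Y][TILES_X] = { { 0 } };

    /* Mark every 128x128 region the bounding box overlaps; the renderer
     * responsible for each marked region gets a copy of the primitive. */
    int tx0 = b.xmin / TILE, tx1 = b.xmax / TILE;
    int ty0 = b.ymin / TILE, ty1 = b.ymax / TILE;
    for (int ty = ty0; ty <= ty1; ty++)
        for (int tx = tx0; tx <= tx1; tx++)
            count[ty][tx]++;

    /* Show which regions received work. */
    for (int ty = 0; ty < TILES_Y; ty++) {
        for (int tx = 0; tx < TILES_X; tx++)
            putchar(count[ty][tx] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}
```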


PixelFlow

The problem with Pxpl5 is that the ring network places a hard limit on its scalability. The next-generation machine, PixelFlow (PxFl), solves this problem by using a technique called "image composition." In this system, pairs of geometry processors and array processors work independently to create screen-sized images based on the subset of the triangles that each pair has. These partial images are then combined into one by performing a depth-wise sort using a special high-speed image composition network.
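The heart of image composition is a per-pixel depth comparison between partial images. The following C sketch merges two full-screen images in software to show the principle; it is only an illustration, not the PixelFlow composition-network hardware, and the Sample layout and test data are assumptions made for the example.

```c
#include <stdint.h>
#include <stdio.h>

#define W   4
#define H   2
#define FAR 65535u          /* "nothing drawn here" depth value */

typedef struct { uint8_t r, g, b; uint16_t z; } Sample;  /* color + depth */

/* Pairwise composite: at every pixel, keep whichever input is closer. */
static void composite(const Sample *a, const Sample *b, Sample *out, int n) {
    for (int i = 0; i < n; i++)
        out[i] = (a[i].z <= b[i].z) ? a[i] : b[i];
}

int main(void) {
    Sample imgA[W * H], imgB[W * H], result[W * H];

    /* Made-up partial images: renderer A drew something near in the left
     * half of the screen, renderer B in the right half. */
    for (int i = 0; i < W * H; i++) {
        imgA[i] = (Sample){ 255, 0, 0, (i % W) < W / 2 ? 100 : FAR };
        imgB[i] = (Sample){ 0, 0, 255, (i % W) < W / 2 ? FAR : 200 };
    }

    composite(imgA, imgB, result, W * H);

    for (int i = 0; i < W * H; i++)
        printf("pixel %d: z=%u %s\n", i, (unsigned)result[i].z,
               result[i].r ? "from A" : "from B");
    return 0;
}
```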

It is our hope that PixelFlow will be the first of many fully linearly scalable machines; that is, doubling the amount of hardware doubles the speed. Aside from pure rendering performance, we also plan to use PixelFlow as a platform on which we can demonstrate a variety of new and powerful rendering techniques, including a real-time shading language.

On the path toward a working PixelFlow system, we have built both hardware and software PixelFlow simulators. The office scene above was generated with an early version of the hardware simulator. The simulators also help us better understand the details of the architecture that we are creating. To this end, we rendered a series of forty 640x512, 24-bit color, 5-sample anti-aliased frames from a bowling scene on our simulators. Analysis of these rendering simulations showed that the sequence could be rendered at thirty frames per second by PixelFlow.


Last updated 6/22/98 by Jim Mahaney
