Implement a triangle scan conversion routine and Z-buffer hidden surface algorithm. (assignment)
I implemented the algorithm in Matlab.
For "large" triangles (all three sides longer than 1 pixel), I interpolate all the values along all the scanlines as follows:
Note: the final two steps go across the whole image. This is arguably slow for large images, yet in Matlab it is still faster than looping over each scanline explicitly.
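The exact step list is not reproduced here, but the flavor of the approach, with the final depth-compare and replace steps running vectorized across the whole image, can be sketched as follows. This is my own illustration in Python/NumPy rather than the original Matlab, and it uses a barycentric formulation instead of explicit per-scanline edge walking:

```python
import numpy as np

H = W = 64
zbuf = np.full((H, W), -np.inf)   # stores 1/Z; larger values are closer
img = np.zeros((H, W, 3))

def draw_triangle(v, inv_z, colors):
    """v: 3x2 array of (x, y) vertices; inv_z: per-vertex 1/Z;
    colors: 3x3 per-vertex RGB. Interpolates 1/Z and color, then
    performs the two whole-image steps: a vectorized depth compare
    and a masked replace (the part the note above refers to)."""
    ys, xs = np.mgrid[0:H, 0:W]
    (x0, y0), (x1, y1), (x2, y2) = v
    # barycentric coordinates of every pixel
    den = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    a = ((y1 - y2) * (xs - x2) + (x2 - x1) * (ys - y2)) / den
    b = ((y2 - y0) * (xs - x2) + (x0 - x2) * (ys - y2)) / den
    c = 1.0 - a - b
    inside = (a >= 0) & (b >= 0) & (c >= 0)
    w = a * inv_z[0] + b * inv_z[1] + c * inv_z[2]   # interpolated 1/Z
    # final two steps, across the whole image:
    win = inside & (w > zbuf)                        # strict depth test
    zbuf[win] = w[win]
    rgb = (a[..., None] * colors[0] + b[..., None] * colors[1]
           + c[..., None] * colors[2])
    img[win] = rgb[win]
```

Interpolating colors the same way as 1/Z is what produces the smooth R,G,B-corner gradients in the screenshots below.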
For "small" triangles (at least one side shorter than 1 pixel), I simply average all the values and use the resulting point as "representative" of the whole triangle.
"Tiny" triangles ("small" triangles with area less than one pixel) are ignored.
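The three size classes above can be made concrete with a small helper. This is a hypothetical reconstruction (names and thresholds restated from the text, in Python rather than the original Matlab):

```python
import numpy as np

def classify(v):
    """Classify a triangle (3x2 array of vertices) by the rules above:
    'large'  -> all sides longer than 1 pixel, gets scan-converted;
    'small'  -> some side shorter than 1 pixel, replaced by one point;
    'tiny'   -> small AND area under one pixel, ignored entirely."""
    sides = np.linalg.norm(v - np.roll(v, 1, axis=0), axis=1)
    d1, d2 = v[1] - v[0], v[2] - v[0]
    area = 0.5 * abs(d1[0] * d2[1] - d1[1] * d2[0])
    if sides.min() >= 1.0:
        return 'large'
    if area < 1.0:
        return 'tiny'
    return 'small'

def small_triangle_point(v, vals):
    """Representative point for a 'small' triangle: the mean of the
    vertices and the mean of all per-vertex values (1/Z, R, G, B)."""
    return v.mean(axis=0), vals.mean(axis=0)
```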
Below are a few screenshots produced by my program. Besides generating random triangles, I wrote a program to read models posted for a similar assignment at MIT. The models have no depth information, so I assigned the same depth to all points. This leads to odd results depending on whether the Z-buffer replacement is also performed when the new 1/Z value is equal to the stored one, rather than only when it is strictly greater (see the pictures of the weird cow and teapot below).
- 10 random triangles
- the hex cone, rendered from model
- 100 random triangles, with R, G, B corners
- 100 random triangles with random colors
- the cow, rendered from model
- the same cow, now looking weird
- the teapot, rendered from model, with its spout looking weird
- the teapot, with its lid looking weird this time, but with the correct spout
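The equal-depth artifact can be demonstrated in isolation. The toy below (my own illustration, not the original code) draws two overlapping surfaces at identical depth: with a strict test (>) the first one drawn wins every contested pixel, while with >= the last one wins, which is exactly why the flat-depth cow and teapot change appearance with the tie rule:

```python
import numpy as np

def composite(order_colors, strict=True):
    """Draw full-screen 'surfaces' at the same 1/Z, in order, into a
    tiny framebuffer, and return the resulting image. `order_colors`
    are scalar color labels standing in for whole models."""
    zbuf = np.full((4, 4), -np.inf)
    img = np.zeros((4, 4))
    for color in order_colors:
        w = np.ones((4, 4))                       # identical depth everywhere
        win = (w > zbuf) if strict else (w >= zbuf)
        zbuf[win] = w[win]
        img[win] = color
    return img
```

With two surfaces labeled 1.0 and 2.0, `composite([1.0, 2.0], strict=True)` keeps surface 1.0 everywhere, and `strict=False` keeps surface 2.0: draw order, not geometry, decides the picture.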
Here are three movies showing the construction of the random, cow, and teapot models. Be sure to have your speakers on ;).
At first I imagined a complicated timing model, measuring the subintervals of the various disjoint phases of the algorithm, but the regression did not work very well. Then I came back to the simple linear model T = Tsetup + Nlines * Tline + Npixels * Tpixel:
This worked out OK for the random model, which contains mostly large triangles (random coordinates are usually far apart). However, for the cow and teapot models, most triangles are small, and I got a negative coefficient for Tpixel. This is probably due to the large difference in running time between very small "large" triangles, which still get scan-converted, and "small" triangles, which are optimized by averaging. This introduces a non-linearity in the data that the regression wrongly captures as a negative coefficient.
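The fit itself is an ordinary least-squares regression of measured frame times against scanline and pixel counts. A sketch with made-up synthetic data (the original fit was done in Matlab; numbers here are purely illustrative):

```python
import numpy as np

# Per-frame measurements: scanline counts, pixel counts, and timings.
# The timings below are synthetic and noise-free, so the regression
# recovers the coefficients exactly; real data would not.
nlines = np.array([100, 400, 50, 800, 300], float)
npixels = np.array([2000, 9000, 600, 30000, 7000], float)
T = 0.5 + 1e-3 * nlines + 2e-6 * npixels

# Least squares for T ~ Tsetup + Nlines*Tline + Npixels*Tpixel
A = np.column_stack([np.ones_like(T), nlines, npixels])
coef, *_ = np.linalg.lstsq(A, T, rcond=None)
tsetup, tline, tpixel = coef
maxerr = np.abs(A @ coef - T).max()
```

Nothing in the least-squares problem constrains the coefficients to be positive, which is how a mixture of two regimes (scan-converted vs. averaged triangles) can push Tpixel below zero.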
Model  | Tsetup    | Tline      | Tpixel      | maxErr
random | 1.054e-1  | 4.9241e-6  | 3.1194e-6   | 8.1845e-1
cow    | 6.2452e-1 | 1.3882e-2  | -7.4002e-4  | 1.0893e0
teapot | 6.0369e-1 | 8.4540e-3  | -2.5475e-4  | 9.2657e-1
Last modified on March 3, 2002 by .