How do we draw a triangle on the display using the positions of its vertices?
Let's say we have a method called inside(tri, x, y):
```python
def inside(tri, x, y):
    return 1 if (x, y) in tri else 0
```
But how do we know whether (x, y) is in tri?
How do we sample locations?
```cpp
for (int x = 0; x < xmax; x++) {
    for (int y = 0; y < ymax; y++) {
        // sample location at the pixel's middle point by adding 0.5
        float P_x = x + 0.5f;
        float P_y = y + 0.5f;
        Image[x][y] = f(P_x, P_y);
    }
}
```
Line Equation Method
Use the rule that a Triangle = Intersection of Three Half-Planes.
Each line defines two half-planes:
L(x,y)=Ax+By+C
On the line: L(x,y) = V⋅N = 0
Above the line: L(x,y) = V⋅N > 0
Below the line: L(x,y) = V⋅N < 0
What is V⋅N here?
We first derive a Line Tangent Vector T from the line on our plane:
T=P1−P0=(x1−x0,y1−y0)
T is the Line Tangent Vector
P1,P0 the two points of a line
Then we take the general perpendicular vector in 2D, denoted N, which is perpendicular to T.
Since
Perpendicular(x,y)=(−y,x)
Then
N=Perpendicular(T)=(−(y1−y0),x1−x0)
After that, for any point P we can connect P0 to P to form a vector V = P − P0. The sign of V⋅N then tells us which half-plane P lies in, and a point is inside the triangle exactly when it passes this test for all three edges, as sketched below.
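Putting the pieces together, here is a minimal sketch of inside() built from three such half-plane (edge-function) tests. It assumes the linalg float2 type used elsewhere in this assignment and counter-clockwise vertex order; edgeFunction is an illustrative name, not part of the assignment code:

```cpp
#include "linalg.h"
using namespace linalg::aliases; // float2

// L(P) = V . N for the edge P0 -> P1, with V = P - P0 and N = Perpendicular(T)
float edgeFunction(const float2& p0, const float2& p1, const float2& p) {
    const float2 T = p1 - p0;       // line tangent vector
    const float2 N = { -T.y, T.x }; // perpendicular vector (-y, x)
    const float2 V = p - p0;        // vector from P0 to the sample point
    return dot(V, N);               // sign picks the half-plane
}

// inside iff the sample lies in all three half-planes
// (assumes counter-clockwise winding; flip the comparisons for clockwise)
int inside(const float2 tri[3], float x, float y) {
    const float2 p = { x, y };
    return (edgeFunction(tri[0], tri[1], p) >= 0.0f &&
            edgeFunction(tri[1], tri[2], p) >= 0.0f &&
            edgeFunction(tri[2], tri[0], p) >= 0.0f) ? 1 : 0;
}
```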
```cpp
// model -> camera matrix
float4x4 lookatMatrix(const float3& _eye, const float3& _center, const float3& _up) const {
    // transformation to the camera coordinate
    float4x4 m;
    const float3 f = normalize(_center - _eye);
    const float3 upp = normalize(_up);
    const float3 s = normalize(cross(f, upp));
    const float3 u = cross(s, f);

    // the snippet ends here; the assembly below is an assumed standard
    // gluLookAt-style completion (column-major float4x4, as in linalg)
    m[0] = { s.x, u.x, -f.x, 0.0f };
    m[1] = { s.y, u.y, -f.y, 0.0f };
    m[2] = { s.z, u.z, -f.z, 0.0f };
    m[3] = { -dot(s, _eye), -dot(u, _eye), dot(f, _eye), 1.0f };
    return m;
}
```
Turning Camera Coordinates into Normalized Device Coords
When we say "clip space" here, we mean the homogeneous coordinates right after the projection; dividing by w turns clip space into the normalized device coordinates.
```cpp
// rasterizer
void Rasterize() const {
    // fill in plm by a proper matrix
    const float4x4 pm = perspectiveMatrix(globalFOV, globalAspectRatio, globalDepthMin, globalDepthMax);
    const float4x4 lm = lookatMatrix(globalEye, globalLookat, globalUp);
    const float4x4 plm = mul(pm, lm);
    /* plm stands for the perspective matrix, the lookAt (view) matrix, and the
       model matrix. The first two are decided by the camera and the last by each
       model (for this assignment the model matrix is the identity). Multiplying
       the three together gives plm, which converts from model space to clip
       space. Be careful about the order of the multiplication. */
    FrameBuffer.clear();
    for (int n = 0, n_n = (int)objects.size(); n < n_n; n++) {
        for (int k = 0, k_n = (int)objects[n]->triangles.size(); k < k_n; k++) {
            objects[n]->rasterizeTriangle(objects[n]->triangles[k], plm);
        }
    }
}
```
Make the homogeneous coordinate (x, y, z, w) with w = 1
Then multiply the homogeneous coordinate by plm
Then normalize it (the perspective divide) as (x/w, y/w, z/w, w/w)
Finally map it into screen space
```cpp
void rasterizeTriangle(const Triangle& tri, const float4x4& plm) const {
    float4 affine_tri;
    for (int i = 0; i < 3; i++) {
        // homogeneous coordinate with w = 1
        affine_tri = { tri.positions[i].x, tri.positions[i].y, tri.positions[i].z, 1 };
        // model space -> clip space
        affine_tri = mul(plm, affine_tri);
        // perspective divide: clip space -> NDC
        affine_tri = { affine_tri.x / affine_tri.w, affine_tri.y / affine_tri.w,
                       affine_tri.z / affine_tri.w, affine_tri.w / affine_tri.w };
        // NDC ([-1, 1]) -> screen space
        int x = (affine_tri.x + 1.0f) * globalWidth / 2.0;
        int y = (affine_tri.y + 1.0f) * globalHeight / 2.0;
        FrameBuffer.pixel(x, y) = float3(1.0f);
        FrameBuffer.valid(x, y);
    }
}
```
Helper function for mapping ndc into screen space:
```cpp
float4 ndc2screen(const float4 homoPos) const {
    // ndc is -1 to 1
    float x = linalg::lerp(0.0f, globalWidth, (homoPos.x + 1.0f) * 0.5f);
    float y = linalg::lerp(0.0f, globalHeight, (homoPos.y + 1.0f) * 0.5f);
    float z = linalg::lerp(globalDepthMin, globalDepthMax, (homoPos.z + 1.0f) * 0.5f);
    float w = homoPos.w;
    float4 screenPos = {x, y, z, w};
    return screenPos;
}
```
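For instance, a hypothetical call site (clipPos standing in for the position after multiplying by plm) would do the perspective divide first and pass the result in:

```cpp
// perspective divide: clip space -> NDC, keeping the original w around
float4 ndc = { clipPos.x / clipPos.w, clipPos.y / clipPos.w,
               clipPos.z / clipPos.w, clipPos.w };
float4 screenPos = ndc2screen(ndc);
```

Note that ndc2screen deliberately passes w through unchanged, so the pre-division w is still available later for perspective-correct interpolation.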
Perspective Correct Interpolation
Goal: interpolate some attribute ɸ defined at the vertices (this ɸ is not the ϕ from the barycentric coordinates)
For example, we can interpolate texture coordinate.x or texture coordinate.y this way
Idea: P := ɸ/w interpolates linearly in 2D
Basic recipe:
Keep the homogeneous coordinate w at each vertex before the division (of course; after the division w = 1!)
Evaluate W := 1/w and P := ɸ/w at each vertex
Interpolate W and P using standard (2D) barycentric coords
At each pixel, divide interpolated P by interpolated W to get final value
Depth d = z/w can be interpolated linearly (no division by W is needed since we want depth, not z)
So basically:
Interpolated W: W = α1/w1 + α2/w2 + α3/w3
Interpolated P for each dimension of ɸ: P = α1⋅ɸ1/w1 + α2⋅ɸ2/w2 + α3⋅ɸ3/w3
Pixel value for each dimension = interpolated P / interpolated W
Depth: d = α1⋅z1/w1 + α2⋅z2/w2 + α3⋅z3/w3
where α1, α2, α3 are the barycentric coordinates ϕ(xi), ϕ(xj), ϕ(xk)
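As a code sketch (illustrative names, not assignment API: alpha holds the 2D barycentric coordinates at the pixel, w the per-vertex clip-space w kept before the divide, phi one dimension of the attribute):

```cpp
// perspective-correct interpolation of one attribute dimension
float interpolatePerspective(const float alpha[3], const float w[3], const float phi[3]) {
    float W = 0.0f; // interpolated W := 1/w
    float P = 0.0f; // interpolated P := phi/w
    for (int i = 0; i < 3; i++) {
        W += alpha[i] / w[i];
        P += alpha[i] * phi[i] / w[i];
    }
    return P / W; // divide interpolated P by interpolated W
}

// depth needs no final division: d = sum_i alpha_i * (z_i / w_i)
float interpolateDepth(const float alpha[3], const float w[3], const float z[3]) {
    return alpha[0] * z[0] / w[0] + alpha[1] * z[1] / w[1] + alpha[2] * z[2] / w[2];
}
```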
Depth buffering (Z-buffer)
Z-buffer uses interpolated depth d = z/w in 2D
Stores a depth value per pixel
Overwrites a pixel only when the new sample is closer to the camera
Depth: d = α1⋅z1/w1 + α2⋅z2/w2 + α3⋅z3/w3
where α1, α2, α3 are the barycentric coordinates ϕ(xi), ϕ(xj), ϕ(xk)
Hidden-surface-removal algorithm
```
initialize depth buffer to ∞

during rasterization:
for (each triangle T)
    for (each sample (x, y, depth) in T)
        if (depth < depthbuffer[x, y])   // closest sample so far
            framebuffer[x, y] = rgb;     // update colour
            depthbuffer[x, y] = depth;   // update depth
        else
            ;                            // do nothing, this sample is not the closest
```
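In C++ this could look like the following sketch (DepthBuffer and its width/height are assumptions standing in for the assignment's globals):

```cpp
#include <limits>
#include <vector>

struct DepthBuffer {
    int width, height;
    std::vector<float> depth;

    DepthBuffer(int w, int h)
        : width(w), height(h),
          depth(w * h, std::numeric_limits<float>::infinity()) {} // init to infinity

    // returns true (and records the depth) iff this sample is the closest so far
    bool testAndSet(int x, int y, float d) {
        float& stored = depth[y * width + x];
        if (d < stored) { stored = d; return true; }
        return false;
    }
};

// usage inside the per-sample loop of the rasterizer:
// if (depthBuffer.testAndSet(x, y, d)) { FrameBuffer.pixel(x, y) = rgb; }
```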