3D graphics are different from 2D pixel-based graphics in that 3D is based on models created and manipulated by using mathematics (geometry)
good for shiny/reflective surfaces, etc.; simulates light travelling FROM the eye back to the light source
math intensive, slow, but good for cloth
Uses two buffers: a front and a back
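A minimal sketch of why the two buffers help (hypothetical text-mode "display", not any real API): draw each frame into the back buffer while the front buffer is shown, then swap, so the viewer never sees a half-finished frame.

```python
# Double buffering sketch: render off-screen, then swap buffers.
WIDTH, HEIGHT = 8, 3

def blank():
    # A "buffer" is just a 2D grid of characters here.
    return [[" "] * WIDTH for _ in range(HEIGHT)]

front, back = blank(), blank()

def render_frame(buf, frame_no):
    # Draw into the given buffer only; the displayed front buffer is untouched.
    for row in buf:
        row[:] = [" "] * WIDTH
    buf[1][frame_no % WIDTH] = "*"   # a "sprite" moving across the screen

for frame_no in range(3):
    render_frame(back, frame_no)     # draw off-screen
    front, back = back, front        # swap buffers (like glutSwapBuffers)
    print("".join(front[1]))         # 'front' is now what the viewer sees
```

The swap is a pointer exchange, not a copy, which is what makes it cheap enough to do every frame.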
From Real-Time Rendering 2nd Ed.
Pipeline diagram from lecture notes.
Computer program provides an input model of vertices
Here all the primitives are rasterized (converted to pixels)
Additional per-pixel operations are performed here
Note: geometric transformations are applied via matrix multiplication, and matrix multiplication is not commutative, so order is important. The composite is built right-to-left (the first transform applied sits rightmost):
$M_{final} = Trans_N \times \cdots \times Trans_3 \times Trans_2 \times Trans_1$
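The non-commutativity can be checked directly. A minimal sketch in plain Python (helper names `mat_mul`, `translate`, `rotate_z`, `apply` are illustrative, not from any library):

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 matrices (row-major lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def rotate_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def apply(m, p):
    """Apply a 4x4 matrix to a homogeneous point, rounding for display."""
    return tuple(round(sum(m[i][j] * p[j] for j in range(4)), 6)
                 for i in range(4))

# Rotate 90 degrees about z, THEN translate by (1,0,0): M = T x R
m1 = mat_mul(translate(1, 0, 0), rotate_z(math.pi / 2))
# Translate first, THEN rotate: M = R x T
m2 = mat_mul(rotate_z(math.pi / 2), translate(1, 0, 0))

p = (1, 0, 0, 1)     # homogeneous point
print(apply(m1, p))  # (1.0, 1.0, 0.0, 1.0)
print(apply(m2, p))  # (0.0, 2.0, 0.0, 1.0)
```

Same two transforms, different order, different result — hence the fixed right-to-left convention above.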
Transforms the model into the world space
Scaling (by factors $x$, $y$, $z$):
$\begin{pmatrix} x & 0 & 0 & 0 \\ 0 & y & 0 & 0 \\ 0 & 0 & z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$
Rotation about the $z$-axis:
$\begin{pmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$
Rotation about the $x$-axis:
$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$
Rotation about the $y$-axis:
$\begin{pmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$
Translation (by $x$, $y$, $z$):
$\begin{pmatrix} 1 & 0 & 0 & x \\ 0 & 1 & 0 & y \\ 0 & 0 & 1 & z \\ 0 & 0 & 0 & 1 \end{pmatrix}$
Redefines world objects relative to the camera; converts world coordinates to camera coordinates (camera eye aka viewing coordinates)
Note: OpenGL's gluLookAt() requires that the up-vector parameter not be parallel to the direction the camera is pointing; if it is, the view matrix breaks (the cross product used to build the camera axes degenerates to zero).
Building a viewing transformation matrix:
$P_{camera} = R \cdot T$ (this is the viewing transformation)
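Building $P_{camera} = R \cdot T$ can be sketched as follows; this mirrors what gluLookAt computes from eye/center/up, but the helper names (`look_at`, `normalize`, `cross`) are illustrative, not a real API:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def look_at(eye, center, up):
    """Build a gluLookAt-style viewing matrix as R * T."""
    f = normalize([c - e for c, e in zip(center, eye)])  # forward
    s = normalize(cross(f, up))                          # side (right)
    u = cross(s, f)                                      # true up
    # R rotates world axes onto camera axes; T translates eye to origin.
    r = [[ s[0],  s[1],  s[2], 0],
         [ u[0],  u[1],  u[2], 0],
         [-f[0], -f[1], -f[2], 0],
         [ 0,     0,     0,    1]]
    t = [[1, 0, 0, -eye[0]],
         [0, 1, 0, -eye[1]],
         [0, 0, 1, -eye[2]],
         [0, 0, 0, 1]]
    # P_camera = R x T
    return [[sum(r[i][k] * t[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Camera at (0,0,5) looking at the origin: the origin should land
# 5 units down the camera's -z axis.
m = look_at([0, 0, 5], [0, 0, 0], [0, 1, 0])
```

Note how the up-vector warning above shows up here: if `up` is parallel to `f`, `cross(f, up)` is the zero vector and `normalize` divides by zero.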
Changes the view volume (like adjusting the camera lens)
Lighting is calculating the luminous intensity at a particular point, while shading is assigning colors to pixels
Direct illumination is light that came directly from a light source, while indirect is light that has bounced off another object before reaching the given object
Consists of 4 vectors: light source, viewer, normal, and perfect reflector
There are 3 components (ambient, diffuse, specular), computed for each of R, G, and B:
Assigning colors to pixels
Exam: what does "alpha" control in the specular-highlight equation? The size/sharpness of the specular highlight (larger alpha -> smaller, sharper highlight, more metal-/mirror-like)
If multiple light sources, simply add all the terms
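A sketch of one colour channel under the Phong model described above, using the four vectors listed (light $L$, viewer $V$, normal $N$, perfect reflector $R$); the function and coefficient names are illustrative. Call it once per R, G, B channel, and sum the results over light sources as the note says.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def reflect(l, n):
    # Perfect reflector: R = 2(N.L)N - L (unit vectors, pointing away
    # from the surface point).
    d = dot(n, l)
    return [2 * d * ni - li for ni, li in zip(n, l)]

def phong(ka, kd, ks, alpha, ia, il, n, l, v):
    """One colour channel: I = ka*Ia + Il*(kd*(N.L) + ks*(R.V)^alpha)."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    diff = max(dot(n, l), 0.0)                       # diffuse term
    spec = max(dot(reflect(l, n), v), 0.0) ** alpha if diff > 0 else 0.0
    return ka * ia + il * (kd * diff + ks * spec)

# Light and viewer directly above a surface with normal (0,0,1):
# diffuse and specular terms are both maximal.
i = phong(0.1, 0.5, 0.4, 10, 1.0, 1.0, [0, 0, 1], [0, 0, 1], [0, 0, 1])
```

Raising `alpha` here shrinks and sharpens the highlight, matching the exam note above.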
Polygons are very unstructured, though, so they are harder to work with than, say, an ellipse
Polygon tables: like .obj file, list of polygons, edges, vertices, etc
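The table idea can be sketched with plain indices (hypothetical data for a two-triangle quad): a shared vertex table, an edge table, and a polygon table that reference each other by index instead of duplicating coordinates.

```python
vertices = [            # vertex table: index -> (x, y, z)
    (0.0, 0.0, 0.0),    # v0
    (1.0, 0.0, 0.0),    # v1
    (1.0, 1.0, 0.0),    # v2
    (0.0, 1.0, 0.0),    # v3
]
edges = [               # edge table: index -> (vertex index, vertex index)
    (0, 1), (1, 2), (2, 0),   # edges of triangle A
    (2, 3), (3, 0),           # extra edges for triangle B
]
polygons = [            # polygon table: index -> tuple of edge indices
    (0, 1, 2),          # triangle A: v0-v1-v2
    (2, 3, 4),          # triangle B: v2-v3-v0, sharing edge 2 with A
]
# Because structure is shared, moving one entry in `vertices`
# updates every polygon that uses it.
```

This is essentially how a .obj file is organised, as the note says.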
Graph of transformations applied to the leaf nodes (the objects)
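A minimal sketch of such a traversal (hypothetical `Node` class; transforms reduced to 2D translations for brevity): transforms accumulate along the path from the root down to each leaf object.

```python
class Node:
    def __init__(self, name, transform=None, children=()):
        self.name = name
        self.transform = transform or (lambda p: p)  # identity by default
        self.children = list(children)

def draw(node, parent_xf=lambda p: p, out=None):
    """Depth-first traversal, composing transforms root-to-leaf."""
    out = out if out is not None else {}
    xf = lambda p, a=parent_xf, b=node.transform: a(b(p))
    if not node.children:               # leaf: "draw" the object at its origin
        out[node.name] = xf((0.0, 0.0))
    for child in node.children:
        draw(child, xf, out)
    return out

shift = lambda dx, dy: (lambda p: (p[0] + dx, p[1] + dy))

# Moving the root's transform moves the whole subtree.
scene = Node("root", shift(10, 0), [
    Node("wheel", shift(0, -1)),
    Node("body"),
])
print(draw(scene))  # wheel ends up at (10, -1), body at (10, 0)
```

In a real renderer the composed value would be a 4x4 matrix pushed/popped on a stack rather than a closure, but the traversal order is the same.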
Uses boolean expressions with objects as operands; the result of each operation is another object
Space is partitioned into a uniform grid; each grid cell is a voxel
Useful for soft contours, often muscles
Good for making realistic looking natural objects (mountains, plants, clouds, etc)
From Angel notes: images and geometry flow through separate pipelines that join during fragment processing
Aka reflection mapping; uses a picture of an area (environment) to be reflected from a surface. Good for simulating highly reflective / mirror-like surfaces
Provides the appearance of a bumpy surface. Since it's only a simulated set of bumps and the vertices aren't actually moved, the object's silhouette edges will still appear flat
Create a 3x3 grid of "outcodes", where the centre is the viewport which has a code of 0000
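A sketch of the outcode test (the notes don't name the algorithm, but this is the classic Cohen-Sutherland scheme; the bit layout below is one common convention):

```python
# 4-bit region codes: the centre region (the viewport) is 0000.
TOP, BOTTOM, RIGHT, LEFT = 8, 4, 2, 1

def outcode(x, y, xmin, ymin, xmax, ymax):
    code = 0
    if x < xmin: code |= LEFT
    if x > xmax: code |= RIGHT
    if y < ymin: code |= BOTTOM
    if y > ymax: code |= TOP
    return code

def classify(p1, p2, window):
    """Trivial accept/reject test for a line segment against the window."""
    c1 = outcode(*p1, *window)
    c2 = outcode(*p2, *window)
    if c1 == 0 and c2 == 0:
        return "accept"   # both endpoints inside
    if c1 & c2:
        return "reject"   # both outside the same window edge
    return "clip"         # needs intersection tests

window = (0, 0, 10, 10)   # xmin, ymin, xmax, ymax
print(classify((1, 1), (9, 9), window))    # accept
print(classify((-5, 1), (-1, 9), window))  # reject (both left of window)
print(classify((-5, 5), (5, 5), window))   # clip
```

The cheap bitwise AND is the point: most segments are accepted or rejected without computing any intersections.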
Process "scan lines" one at a time from left to right to determine:
Essentially, for each edge in the clip window, you output vertices that are on the "in" side of that edge. After calculating output vertices for an edge, redraw the polygon with only those vertices. So when you finish the last edge, the output vertices will be the final clipped polygon.
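The per-edge loop described above is the classic Sutherland-Hodgman algorithm; it can be sketched for a rectangular clip window as follows (`clip_polygon` is a hypothetical name, not a library call):

```python
def clip_polygon(poly, xmin, ymin, xmax, ymax):
    """Clip a polygon (list of (x, y) vertices) against a rectangle."""
    def inside(p, edge):
        x, y = p
        return {"L": x >= xmin, "R": x <= xmax,
                "B": y >= ymin, "T": y <= ymax}[edge]

    def intersect(p, q, edge):
        (x1, y1), (x2, y2) = p, q
        if edge in ("L", "R"):
            x = xmin if edge == "L" else xmax
            t = (x - x1) / (x2 - x1)
            return (x, y1 + t * (y2 - y1))
        y = ymin if edge == "B" else ymax
        t = (y - y1) / (y2 - y1)
        return (x1 + t * (x2 - x1), y)

    for edge in "LRBT":               # clip against one window edge at a time
        out, prev = [], poly[-1]
        for cur in poly:
            if inside(cur, edge):
                if not inside(prev, edge):        # entering: add crossing
                    out.append(intersect(prev, cur, edge))
                out.append(cur)                   # keep "in"-side vertex
            elif inside(prev, edge):              # leaving: add crossing only
                out.append(intersect(prev, cur, edge))
            prev = cur
        poly = out                    # "redraw" with only the output vertices
        if not poly:
            break                     # polygon entirely outside
    return poly

# Triangle poking out of the left side of a 10x10 window:
tri = [(-5.0, 5.0), (5.0, 0.0), (5.0, 10.0)]
print(clip_polygon(tri, 0, 0, 10, 10))
```

After the last edge the vertex list is the final clipped polygon, exactly as the note describes.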
Algorithm for finding the edges
Used primarily for graphics, but GPUs also support other applications such as math processing, artificial intelligence, and audio processing
CPUs tend to be serial, whereas GPUs are parallel
Shaders are high-level language (GLSL, etc) programs that are injected into the (formerly) fixed graphics pipeline at various stages. Popular ones are vertex shaders and fragment shaders.
Shaders override functionality that was previously fixed in the pipeline