Computer Graphics Study Guide
📖 Core Concepts
Computer Graphics (CG) – Generation and manipulation of images using computers; includes 2‑D, 3‑D, animation, and visualization.
Pixel – Smallest raster element; stores RGB (or other) color values on a regular 2‑D grid.
Primitive – Basic geometric building block (point, line, triangle, sprite) used to compose scenes.
Rendering – Process that converts a 3‑D scene (geometry, lights, materials) into a 2‑D image via the graphics pipeline.
Shader – Small program (vertex, pixel/fragment) that runs on the GPU to compute per‑vertex or per‑pixel effects (lighting, texture, color).
Ray Casting vs. Ray Tracing – Ray casting finds the first surface hit by a ray; ray tracing follows multiple bounces for realistic lighting.
Texture Mapping – Applying a 2‑D image to a 3‑D surface to add detail.
Bump Mapping – Perturbs surface normals to fake small‑scale relief without changing geometry.
Vector vs. Raster Graphics – Vector: math‑defined shapes, resolution‑independent. Raster: pixel‑based, resolution‑dependent.
GPU (Graphics Processing Unit) – Parallel processor with programmable pipeline for vertex transformation, rasterization, shading.
Real‑Time Ray‑Tracing / AI Upscaling – Dedicated hardware cores (RT, Tensor) enable interactive ray tracing and upscaling, via deep learning (Nvidia DLSS) or algorithmic reconstruction (AMD FidelityFX Super Resolution).
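The ray-casting concept above boils down to one intersection test per ray. A minimal ray–sphere intersection in Python (function and variable names are illustrative, not from any particular engine):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t to the first hit, or None if the ray misses.

    Solves |origin + t*direction - center|^2 = radius^2 for the
    smallest non-negative t (direction is assumed normalized).
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c          # a == 1 for a normalized direction
    if disc < 0.0:
        return None                 # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0.0 else None

# Camera at the origin looking down -z at a sphere 5 units away:
t = ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)
print(t)  # 4.0 -- the ray enters the sphere at z = -4
```

Ray casting stops here (first hit only); ray tracing would spawn further rays from the hit point.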
📌 Must Remember
Bézier Curves – Foundation for smooth curve modeling (introduced 1960s).
Gouraud Shading – Computes lighting per vertex, then interpolates the resulting colors across the polygon's interior.
Blinn‑Phong Shading – Uses halfway vector H = (L + V)/|L + V| for specular term.
Hidden Surface Determination – Algorithm to discard invisible polygons before rasterization.
Keyframe – Stores attribute values at a specific time; interpolation fills gaps.
Normal Mapping – Extends bump mapping by storing per‑pixel normals (1996).
Physically‑Based Rendering (PBR) – Uses multiple texture maps (albedo, metallic, roughness, normal) to approximate real‑world optics.
Ray‑Tracing Cores – Dedicated hardware for tracing rays in real time (Nvidia RTX, AMD RDNA2).
Deep Learning Super Sampling (DLSS) – AI‑driven upscaling that reconstructs high‑res frames from low‑res input.
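The Bézier curves listed above can be evaluated with de Casteljau's algorithm: repeated linear interpolation of the control points. A small sketch (the control points are arbitrary example values):

```python
def lerp(a, b, t):
    """Linear interpolation between two same-length point tuples."""
    return tuple(a[i] + t * (b[i] - a[i]) for i in range(len(a)))

def bezier(points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] using
    de Casteljau's algorithm: repeatedly lerp adjacent points
    until a single point remains."""
    pts = list(points)
    while len(pts) > 1:
        pts = [lerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

# Quadratic curve: starts at (0,0), ends at (2,0), pulled toward (1,2).
p = bezier([(0, 0), (1, 2), (2, 0)], 0.5)
print(p)  # (1.0, 1.0) -- the curve's midpoint for this symmetric case
```

The same routine handles any degree: pass four control points for a cubic, and so on.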
🔄 Key Processes
3‑D Rendering Pipeline
Vertex Processing: Transform model vertices (world → view → clip) using matrices.
Rasterization: Convert transformed triangles into fragments (potential pixels).
Fragment Shading: Run pixel shader to compute color, apply texture, lighting, bump/normal maps.
Depth Test & Compositing: Resolve visibility, blend with background.
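The vertex-processing step above is a chain of matrix multiplies followed by a perspective divide. A toy sketch with hand-rolled 4×4 math (the matrices here are deliberately simplified stand-ins for real view/projection matrices):

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def project(clip):
    """Perspective divide: clip space -> normalized device coordinates."""
    x, y, z, w = clip
    return (x / w, y / w, z / w)

# A translation "view" matrix pushing the scene 5 units down -z,
# and a toy projection whose key trick is copying -z into w.
view = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, -5],
        [0, 0, 0, 1]]
proj = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, -1, 0]]   # w = -z: the heart of perspective foreshortening

vertex = [1.0, 1.0, 0.0, 1.0]           # model-space position
ndc = project(mat_vec(proj, mat_vec(view, vertex)))
print(ndc)  # (0.2, 0.2, -1.0) -- farther vertices shrink toward the center
```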
Ray Tracing Workflow
Cast primary rays from camera → intersect scene geometry.
For each hit, spawn secondary rays (shadow, reflection, refraction).
Accumulate light contributions; recurse until depth limit or contribution negligible.
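The cast → intersect → recurse workflow above can be sketched against a deliberately tiny scene: a single mirror plane at y = 0, with a "sky" gradient standing in for the light source. Scene, constants, and names are all illustrative:

```python
import math

MAX_DEPTH = 3
REFLECTIVITY = 0.5

def sky(direction):
    """Background 'light': brighter the higher the ray points."""
    return max(0.0, direction[1])

def reflect(d, n):
    """Mirror direction d about unit normal n."""
    dot = sum(d[i] * n[i] for i in range(3))
    return tuple(d[i] - 2.0 * dot * n[i] for i in range(3))

def trace(origin, direction, depth=0):
    """Primary ray -> intersection -> spawn a secondary (reflection)
    ray, recursing until the depth limit -- the workflow above."""
    if depth >= MAX_DEPTH or direction[1] >= 0.0:
        return sky(direction)          # miss, or recursion cut off
    t = -origin[1] / direction[1]      # hit point on the plane y = 0
    hit = tuple(origin[i] + t * direction[i] for i in range(3))
    bounce = reflect(direction, (0.0, 1.0, 0.0))
    return REFLECTIVITY * trace(hit, bounce, depth + 1)

# Ray angled 45 degrees down at the mirror; it bounces up into the sky.
d = (0.0, -math.sqrt(0.5), -math.sqrt(0.5))
print(trace((0.0, 1.0, 0.0), d))  # half the sky brightness, after one bounce
```

A full tracer would also spawn shadow and refraction rays at each hit; the recursion structure stays the same.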
Keyframe Animation
Author key poses at times t₁, t₂, ….
Choose interpolation method (linear, spline).
Generate intermediate frames by evaluating interpolated transform matrices.
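The keyframe steps above reduce to finding the two bracketing keys and interpolating between them. A minimal linear version (spline interpolation would replace only the inner blend; key times and values here are made up):

```python
def sample(keyframes, t):
    """Linearly interpolate keyframed values at time t.

    keyframes: list of (time, value) pairs sorted by time;
    values outside the key range clamp to the nearest key.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t <= t1:
            u = (t - t0) / (t1 - t0)     # 0..1 between the two keys
            return v0 + u * (v1 - v0)
    return keyframes[-1][1]

# Key poses: x = 0 at t=0, x = 10 at t=2, then hold until t=3.
keys = [(0.0, 0.0), (2.0, 10.0), (3.0, 10.0)]
print(sample(keys, 1.0))  # 5.0 -- halfway between the first two keys
```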
AI‑Powered Upscaling (DLSS)
Render scene at lower resolution.
Feed low‑res frame + motion vectors + depth to Tensor core.
Neural network outputs high‑resolution frame with restored detail.
🔍 Key Comparisons
Ray Casting vs. Ray Tracing
Ray Casting: Finds first surface only → fast, no global illumination.
Ray Tracing: Follows multiple bounces → realistic lighting, slower (hardware‑accelerated now).
Gouraud vs. Blinn‑Phong Shading
Gouraud: Interpolates vertex colors → cheap, can miss specular highlights on large polygons.
Blinn‑Phong: Computes per‑pixel specular using halfway vector → more accurate highlights, higher cost.
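The Blinn‑Phong specular term from the comparison above, written out per pixel (the vectors chosen below are an illustrative best case, not from any real scene):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def blinn_phong_specular(L, V, N, shininess=32.0):
    """Specular term using the halfway vector H = (L + V) / |L + V|."""
    H = normalize(tuple(L[i] + V[i] for i in range(3)))
    n_dot_h = max(0.0, sum(N[i] * H[i] for i in range(3)))
    return n_dot_h ** shininess

# Light and viewer mirrored about the normal -> H aligns with N:
L = normalize((1.0, 1.0, 0.0))
V = normalize((-1.0, 1.0, 0.0))
N = (0.0, 1.0, 0.0)
print(blinn_phong_specular(L, V, N))  # 1.0 -- maximum highlight
```

Gouraud would evaluate this only at the three vertices of a triangle and interpolate, which is exactly how a highlight inside a large polygon gets lost.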
Bump Mapping vs. Normal Mapping
Bump: Perturbs the surface normal per pixel using derivatives of a grayscale height map.
Normal: Stores full per‑pixel normal vectors → finer detail, no geometry change.
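A normal map stores each tangent-space normal component remapped from [-1, 1] into an 8-bit channel, which is why flat areas of a normal map look light blue. A small decoding sketch (assuming the common unsigned RGB8 encoding):

```python
def decode_normal(texel):
    """Map an 8-bit RGB normal-map texel to a tangent-space vector.

    Each channel stores a component remapped from [-1, 1] to [0, 255],
    so a 'flat' normal (0, 0, 1) encodes as the familiar
    light-blue (128, 128, 255).
    """
    return tuple(c / 255.0 * 2.0 - 1.0 for c in texel)

print(decode_normal((128, 128, 255)))  # roughly (0.0, 0.0, 1.0)
```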
Vector vs. Raster Images
Vector: Scales without loss, defined by equations (ideal for logos, fonts).
Raster: Fixed resolution, better for photos, complex textures.
⚠️ Common Misunderstandings
“Ray tracing is only offline.” – Modern GPUs have dedicated RT cores making real‑time ray tracing feasible.
“Shaders are only for lighting.” – Shaders can implement shadows, reflections, post‑process effects, procedural geometry, etc.
“Higher polygon count always equals better quality.” – Without proper LOD, culling, and shading, more polygons can hurt performance with little visual gain.
“Bump mapping changes geometry.” – It only modifies normals; the underlying mesh stays the same.
🧠 Mental Models / Intuition
Pipeline as an Assembly Line: Vertices enter, get transformed, turned into fragments, then painted (shaded) before the final product rolls off.
Ray as a Laser Pointer: Follow its bounce path to see where light would actually travel; each bounce adds a “bounce of truth” to the final color.
Texture as a Sticker: Imagine wrapping a flat image around a 3‑D object; the sticker’s pattern defines surface color/detail.
🚩 Exceptions & Edge Cases
Back‑face Culling: Polygons facing away from the camera are usually discarded, but culling must be disabled for double‑sided materials (e.g., cloth, foliage).
Normal Mapping on Low‑Poly Meshes: If mesh has insufficient geometry, normal map can produce shading artifacts (e.g., stretching).
DLSS at Low Input Resolution: Upscaling quality drops if the source resolution is too low relative to display size.
📍 When to Use Which
Choose Gouraud when performance is critical and surfaces are small/flat.
Choose Blinn‑Phong or PBR for shiny or metallic materials where specular highlights matter.
Use Ray Tracing for reflections, refractions, accurate shadows, or when hardware supports it.
Use Rasterization for the bulk of real‑time scenes where speed outweighs perfect lighting.
Pick Vector Graphics for logos, UI icons, fonts—any artwork that must scale cleanly.
Pick Raster (Pixel) Art for pixel‑perfect retro style or photographic textures.
👀 Patterns to Recognize
“Lighting → Normals → Dot Product” pattern in shading equations.
“Ray → Intersection → Spawn Secondary Rays” pattern in any ray‑tracing algorithm.
“Keyframe → Interpolation → Smooth Curve” pattern in animation timelines.
“Texture → UV Coordinates → Sample” pattern in any texture‑mapping step.
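The "Texture → UV Coordinates → Sample" pattern above, in its simplest nearest-neighbor form (a sketch; real samplers add filtering and wrap modes):

```python
def sample_nearest(texture, u, v):
    """Nearest-neighbor texture lookup.

    texture: 2-D list of texels, rows indexed by v, columns by u.
    u, v in [0, 1]; coordinates clamp to the texture edge.
    """
    h, w = len(texture), len(texture[0])
    x = min(w - 1, max(0, int(u * w)))
    y = min(h - 1, max(0, int(v * h)))
    return texture[y][x]

# A 2x2 checkerboard: 0 = black, 1 = white.
tex = [[0, 1],
       [1, 0]]
print(sample_nearest(tex, 0.25, 0.25))  # 0 -- top-left texel
print(sample_nearest(tex, 0.75, 0.25))  # 1 -- top-right texel
```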
🗂️ Exam Traps
“Ray casting = ray tracing” – Remember the extra bounce steps for full ray tracing.
“Bump map = normal map” – Bump uses height derivatives; normal stores explicit normals.
“Higher resolution always wins” – Without proper anti‑aliasing or upscaling, higher resolution can expose noise.
“All shaders run on the CPU” – Shaders are GPU programs; mixing up execution context leads to wrong performance expectations.
“Vector graphics are always smaller files” – Complex vector scenes with many paths can be larger than a modest raster image.