Introduction

Today, we're going to start talking about Ray Tracing - but first, let's talk about where we're going with this. So far in the course, we've covered the Rasterization Pipeline - rasterization, transforms, texture mapping, etc. After that, we had a module on Geometry, focusing on smooth curves & surfaces as well as geometry processing. Now, we're at the bridge between Geometric Modeling and Lighting & Materials — and physically-based rendering. That bridge goes through Ray Tracing.

https://s3-us-west-2.amazonaws.com/secure.notion-static.com/3d36f129-ab6c-43d7-8b46-4d552e76210a/Untitled.png

Before we dive into ray tracing, let's take a look at this rendering again (on our way toward photorealistic rendering).

1. Notice the glossy mug - if we look closely, in the white enamel, there's a reflection of the custard tart and part of a coffee cup.
2. Notice the shadow underneath the spoon - we haven't talked about how to do shadows yet!
3. Notice the transparency in the glass (and the tea) - there's a sense of translucency in those objects.

Basic Ray-Tracing Algorithm: Ray Casting

Let's start with a basic ray tracing algorithm. Like many great algorithms (deep, with broad application), the core idea was thought of early on - but how to do it efficiently, at scale, continues to be a core research topic today.

In 1968, Appel introduced the idea of Ray Casting, where we generate an image by casting one ray per pixel. For every pixel in the image plane (the glass window), we trace a ray from the eye point through the plane to the object, and see where it intersects the geometry. We can also reason, based on the ray from the light source to the geometry itself, which pixels are illuminated and which are shadowed.
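The per-pixel loop above can be sketched as follows. This is a minimal sketch, not a full renderer: `generate_ray`, `intersect`, `in_shadow`, and `shade` are hypothetical helpers standing in for a camera/scene API.

```python
# Sketch of Appel-style ray casting: one primary ray per pixel, plus a
# shadow ray from each hit point toward the light.  The camera/scene/hit
# methods used here (generate_ray, intersect, in_shadow, shade) are
# hypothetical placeholders for whatever scene API you have.

def ray_cast(camera, scene, light, width, height):
    image = [[(0, 0, 0)] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            ray = camera.generate_ray(x, y)        # eye -> pixel ray
            hit = scene.intersect(ray)             # closest intersection
            if hit is None:
                continue                           # background stays black
            if scene.in_shadow(hit.point, light):  # shadow ray toward light
                image[y][x] = (0, 0, 0)
            else:
                image[y][x] = hit.shade(light)
    return image
```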

Here's a different way of looking at this. Perhaps we trace a ray from an eye point through an image plane to a glass sphere. We can trace the eye ray (eye ⇒ pixel) and identify the closest scene intersection point. Once we do that, we want to determine the color (and shading) - and we can do that by looking at the normal & computing the contribution of the light source (for shading purposes) using a reflection model (e.g. the Blinn-Phong model).

We can also handle specular reflection and specular refraction using Snell's law (and other physics-based models). In this case, we have primary rays (eye point ⇒ object), secondary rays (object ⇒ reflection/refraction rays), and shadow rays (rays that test whether light reaches certain points in the scene).
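The secondary-ray directions come from standard vector formulas: mirror reflection about the normal, and Snell's law for refraction. A sketch, assuming `d` is the incoming unit direction, `n` the unit normal on the incident side, and `eta` the ratio of indices of refraction (incident / transmitted):

```python
import math

# Specular reflection and Snell-law refraction directions (a sketch).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    # Mirror d about the normal: r = d - 2 (d . n) n
    c = dot(d, n)
    return tuple(di - 2.0 * c * ni for di, ni in zip(d, n))

def refract(d, n, eta):
    # Snell's law: eta_i sin(theta_i) = eta_t sin(theta_t).
    cos_i = -dot(d, n)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection: no refracted ray
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * di + (eta * cos_i - cos_t) * ni
                 for di, ni in zip(d, n))
```

`refract` returning `None` is exactly the total-internal-reflection case, where all the light is reflected instead.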

To summarize our preliminary ray tracing model: we start with primary rays, which are rays that go from our eye point through a pixel on the image plane and intersect some geometry in our scene. We then trace secondary rays until we hit a non-specular surface, or reach the maximum desired recursion level. At each hit point, we trace shadow rays to our light source(s) to test light visibility. The final pixel color is the weighted sum of the contributions along rays. This allows us to have more sophisticated effects (e.g. specular reflection, refraction, and shadows) — but there's much more we can do to derive a physically-based illumination model.
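The recursion described above can be sketched as a single `trace` function. The scene/hit interface used here (`intersect`, `shade_local`, `reflected_ray`, `refracted_ray`, and the weights `kr`/`kt`) is hypothetical, standing in for whatever data structures a real renderer would use.

```python
# Recursive ray tracing in pseudostructure: shade locally at each hit
# (which is where the shadow rays happen), then spawn reflection and
# refraction rays until a non-specular surface or the depth limit.

MAX_DEPTH = 4  # maximum desired recursion level (an illustrative choice)

def trace(scene, ray, depth=0):
    hit = scene.intersect(ray)
    if hit is None:
        return scene.background
    color = hit.shade_local()  # shadow rays + local reflection model
    if depth < MAX_DEPTH:
        if hit.is_reflective:
            color += hit.kr * trace(scene, hit.reflected_ray(), depth + 1)
        if hit.is_refractive:
            color += hit.kt * trace(scene, hit.refracted_ray(), depth + 1)
    return color  # weighted sum of contributions along the rays
```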

Ray-Triangle Intersection Test

Let's now shift into looking at the core task of taking a ray and intersecting it with geometry. There are many types of geometry - smooth surfaces, subdivision surfaces, algebraic surfaces, etc. — but for the purposes of this algorithm, we're going to look specifically at triangle/polygon meshes. This ray-surface intersection test is useful for rendering (visibility, shadows, lighting, etc.) and geometry (inside/outside test).

To compute this, we just intersect the ray with each triangle. This is a simple, but slow, method. In this case, we can have zero, one, or multiple intersections.
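The simple-but-slow method is a linear scan keeping the nearest hit. A sketch, where `intersect_triangle` is a hypothetical per-triangle test that returns the ray parameter t of the hit (or `None` for a miss):

```python
# Naive ray-mesh intersection: test every triangle, keep the closest hit.
# intersect_triangle(ray, tri) -> t or None is a hypothetical helper.

def closest_hit(ray, triangles, intersect_triangle):
    best_t, best_tri = None, None
    for tri in triangles:
        t = intersect_triangle(ray, tri)
        if t is not None and (best_t is None or t < best_t):
            best_t, best_tri = t, tri
    return best_t, best_tri  # (None, None) if the ray misses everything
```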

Let's start by talking about how we define a ray, mathematically. A ray is defined by its origin and a direction vector.

In parametric form, the ray is r(t) = o + t·d, with 0 ≤ t < ∞, where o is the origin and d is the (unit) direction vector: as t sweeps from 0 to infinity, r(t) sweeps out every point along the ray.
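As a tiny sketch, the parametric form maps directly onto a `Ray` class with an evaluation method:

```python
import math

# The parametric ray r(t) = o + t * d as a small class (a sketch).

class Ray:
    def __init__(self, origin, direction):
        n = math.sqrt(sum(c * c for c in direction))
        self.origin = origin
        self.direction = tuple(c / n for c in direction)  # keep d unit-length

    def at(self, t):
        """Point on the ray at parameter t >= 0."""
        return tuple(o + t * d for o, d in zip(self.origin, self.direction))
```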

How do we intersect a ray with a triangle? Let's start by considering how we intersect this ray with a plane. A plane is defined by a normal vector N and a point p' on the plane: a point p lies on the plane exactly when (p - p') · N = 0.
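Substituting the parametric ray r(t) = o + t·d into the plane equation (p - p')·N = 0 and solving for t gives t = ((p' - o)·N) / (d·N). A sketch of that solve:

```python
# Ray-plane intersection (a sketch): plane through point p0 with normal n,
# ray r(t) = o + t*d.  Solving (o + t*d - p0) . n = 0 gives
#   t = ((p0 - o) . n) / (d . n).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_plane(o, d, p0, n, eps=1e-9):
    denom = dot(d, n)
    if abs(denom) < eps:
        return None  # ray is (nearly) parallel to the plane: no hit
    t = dot(tuple(p - q for p, q in zip(p0, o)), n) / denom
    return t if t >= 0 else None  # hit must lie in front of the origin
```

For the triangle test, this plane hit is then followed by an inside/outside check against the triangle's edges.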