One of the questions we often get here at Lagoa is what type of render engine we are using: a path tracer or a ray tracer? The answer is that we are a path tracer.
The very next question I had was: “What does that even mean?”
So I set about finding out.
With the help of some of Lagoa’s German experts (I can only imagine them as Tony Stark look-alikes, pointing a giant laser at some fancy diamond), I got my answer. Please, dear readers, allow me to share the discoveries I made during my journey.
Both path tracing and ray tracing work in the reverse of what you might expect: light rays are fired from the camera toward the light sources. If we fired rays from the light sources instead, many of them would never be seen by the camera – bounced off into infinity, or simply headed in the opposite direction from the camera. Remember, even in the real world, we can only ever see rays that hit our eyes. So in CG, we only want to compute the light rays that pass through the pixels that will make up the final image; both rendering methods do this as an optimization to avoid wasting time.
The final image quality of the render is determined by the Samples Per Pixel, or SPP. The more samples per pixel, the more refined the image – so when you leave the browser window open in Lagoa, the software is iteratively adding more and more light rays to each pixel to create more accurate color values. You can see this happen as the noise in your render gradually disappears, until you hit diminishing returns and can no longer tell the difference between an image with 5,000 SPP and one with 30,000 SPP.
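To make that concrete, here's a toy Python sketch (not Lagoa's actual renderer – the "noise" here is just a random number I made up) showing why averaging more samples per pixel makes the noise fade:

```python
import random

def render_pixel(spp, true_color=0.5):
    """Toy model of progressive refinement: each sample is a noisy
    estimate of the pixel's true brightness, and averaging more
    samples per pixel (SPP) converges toward the right answer."""
    total = 0.0
    for _ in range(spp):
        # pretend each traced light ray returns the true value
        # plus some random noise
        total += true_color + random.uniform(-0.4, 0.4)
    return total / spp  # the running average is the displayed pixel
```

With 10 samples the estimate wobbles noticeably around 0.5; with 100,000 it's correct to roughly the third decimal place – and pushing further buys very little, which is exactly that diminishing-returns effect.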
Ray tracing refines its image in a similar way, but the sampling is done in discrete passes rather than as a constant addition. A ray traced render has a finite “end,” whereas the longer you leave a path traced render running, the more accurate it becomes – up to that same point of diminishing returns.
However, this is not the real difference between a path tracer and a ray tracer. The key comes in how lighting values are calculated. I said before that both ray and path tracers fire a light ray from the camera’s location into the scene.
With a path tracer, you can imagine these light rays like the ball from the Katamari Damacy video game: they bounce around the scene (each bounce direction is computed as a random directional value), picking up all the values they will need to solve the rendering equation – they hit a blue ball (the color of the surface) with high reflectivity (the energy the ray retains after hitting the surface), with some surface graininess (reflection/refraction), and so on.
As a ray continues to bounce around, it bends or splits into multiple rays depending on the material properties (glass, diamond, etc.) it interacts with. Those rays also bounce around, performing the same function. Eventually, all these rays hit a light source, supplying the final piece of the rendering puzzle: the initial amount of energy. The equation is complete, and the computer renders the pixel’s final color value from the sum total of that equation. This sum is called an integral, which is a fancy mathematical way of saying “we’re adding all of these things together.”
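Here's a deliberately over-simplified Python sketch of that loop – a made-up one-surface "scene" where every bounce keeps 80% of the ray's energy and has a 50/50 chance of finding the light. The numbers are arbitrary, but the bounce, accumulate, terminate-at-the-light shape is what a path tracer does for every camera ray:

```python
import random

def trace_path(albedo=0.8, hit_light_prob=0.5, light_energy=1.0,
               max_bounces=64):
    """One camera ray's random walk: each surface hit multiplies the
    ray's 'throughput' by the surface color, until the ray reaches a
    light source – the final piece of the rendering puzzle."""
    throughput = 1.0
    for _ in range(max_bounces):
        throughput *= albedo              # surface absorbs some energy
        if random.random() < hit_light_prob:
            return throughput * light_energy  # found a light: done
        # otherwise the random bounce hit another surface; keep going
    return 0.0  # path never found a light: contributes nothing

def render(samples=200_000):
    # averaging many random paths is the "integral" described above
    return sum(trace_path() for _ in range(samples)) / samples
```

For these made-up numbers the true answer works out to 2/3, and the average of many random paths converges right onto it – that convergence is the noise-clearing you watch in the render window.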
To speed things up, these bounces can be optimized by firing a line-of-sight ray from each impact point to a light source – this cuts out a bit of rendering time, because if that ray reaches a light unobstructed, the path doesn’t necessarily have to keep bouncing around the whole scene.
So that’s path tracing. Now what makes ray tracing different? For one thing, the light rays don’t actually bounce around physically: rays are fired from the camera into the scene, but wherever a ray intersects an object, the renderer fires additional rays to every light in the scene, then computes the pixel value from the object’s material properties and the amount of light that point receives from all of those lights. This means that ray tracing can really only compute direct lighting. All other effects, such as caustics and global illumination, are based on separate, non-physically-based equations.
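For contrast, here's a minimal Python sketch of that direct-lighting calculation – a hypothetical `shade_direct` helper with made-up inputs (a hit point, its surface normal, and a list of point lights), using simple Lambert (N·L) shading and no bounces at all:

```python
import math

def shade_direct(surface_color, normal, point, lights):
    """Direct lighting only: loop over every light, apply Lambert's
    cosine law and inverse-square falloff, and sum the contributions.
    Anything indirect (color bleeding, caustics) would need a
    separate, non-physical trick layered on top."""
    r = g = b = 0.0  # accumulated pixel color
    for light_pos, intensity in lights:
        # direction from the surface point to this light
        lx, ly, lz = (light_pos[i] - point[i] for i in range(3))
        dist = math.sqrt(lx * lx + ly * ly + lz * lz)
        lx, ly, lz = lx / dist, ly / dist, lz / dist
        # Lambert's cosine law: surfaces facing the light are brighter
        n_dot_l = max(0.0, normal[0] * lx + normal[1] * ly + normal[2] * lz)
        falloff = intensity / (dist * dist)  # inverse-square falloff
        r += surface_color[0] * n_dot_l * falloff
        g += surface_color[1] * n_dot_l * falloff
        b += surface_color[2] * n_dot_l * falloff
    return (r, g, b)
```

For a single light of intensity 4 sitting two units directly above the hit point, `shade_direct((0.2, 0.4, 1.0), (0, 1, 0), (0, 0, 0), [((0, 2, 0), 4.0)])` returns the surface color unchanged, `(0.2, 0.4, 1.0)` – and a light behind the surface contributes nothing. One pass over the lights and we're done; no bouncing required.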
When a path tracer computes “global illumination,” that GI value comes from actual light bounces. With ray tracing, it is a separate, optimized integral (again, “adding”) that sums up all the direct lighting in the scene and applies that value across all the pixels (on top of the direct lighting), factoring in whatever material properties are also there, to create the pixel color values. In short, it’s a very effective, very beautiful shortcut.
Now, that’s not to say that ray tracing is necessarily better or worse than path tracing. It’s a different way of doing things, and it’s a very efficient method of rendering – which is why it’s so often used in the VFX and product design industries. It streamlines the rendering process, so that your computer doesn’t have to sit for days to render out a nice-looking scene by calculating a bunch of light bounces.
In Lagoa’s case, a physically based path tracer was a better choice; we have a nearly limitless amount of computing power to apply to a scene, so the rendering overhead required for something like path tracing isn’t as much of an issue as it would be on a desktop application. In addition, since our target market is the CAD/design spaces, we place a higher value on physically correct materials and lighting. While a VFX studio can adjust these values so that the object looks real based on arbitrary computer-generated materials and lighting values, a designer or CAD modeler needs to be able to assign an actual material (such as stainless steel or car paint) to an object and have it reflect light in a realistic way.
In summary, path tracing is a more realistic, “brute-force” approach to calculating a lighting solution. Ray tracing is faster and more “efficient,” but it takes shortcuts (global illumination, caustics) that path tracing simply doesn’t. In the end, it’s different solutions for different applications.