The Rendering Equation
Virtually all modern GI renderers are based on the rendering equation introduced by James T. Kajiya in his 1986 paper "The Rendering Equation". This equation describes how light propagates throughout a scene. In his paper, Kajiya also proposed a method for computing an image from the rendering equation using a Monte Carlo technique called path tracing.
It should be noted that the equation was known in engineering long before that and has been used for computing radiative heat transfer in different environments. However, Kajiya was the first to apply it to computer graphics.
It should also be noted that the rendering equation is only "an approximation of Maxwell's equation for electromagnetics". It does not attempt to model all optical phenomena. It is based only on geometric optics and therefore cannot simulate effects like diffraction, interference or polarization. However, it can easily be modified to account for wavelength-dependent effects like dispersion.
Another, more philosophical, point is that the rendering equation is derived from a mathematical model of how light behaves. While it is a very good model for the purposes of computer graphics, it does not describe exactly how light behaves in the real world. For example, the rendering equation assumes that light rays are infinitesimally thin and that the speed of light is infinite - neither of these assumptions is true in the real physical world.
Because the rendering equation is based on geometric optics, raytracing is a very convenient way to solve the rendering equation. Indeed, most renderers that solve the rendering equation are based on raytracing.
Different formulations of the rendering equation are possible, but the one proposed by Kajiya looks like this:

L(x, x1) = g(x, x1) [ e(x, x1) + ∫S r(x, x1, x2) L(x1, x2) dx2 ]

where:
L(x, x1) is related to the light passing from point x1 to point x;
g(x, x1) is a geometry (or visibility) term;
e(x, x1) is the intensity of emitted light from point x1 towards point x;
r(x, x1, x2) is related to the light scattered from point x2 to point x through point x1;
S is the union of all surfaces in the scene and x, x1 and x2 are points from S.
What the equation means: the light traveling from point x1 to point x is the light emitted by x1 towards x, plus the light arriving at x1 from all other points x2 and scattered towards x, all attenuated by the geometry term g(x, x1).
Except for very simple cases, the rendering equation cannot be solved exactly in a finite amount of time on a computer. However, we can get as close as we want to the real solution - given enough time. The search for global illumination algorithms has been a quest for finding solutions that are reasonably close, for a reasonable amount of time.
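To make the Monte Carlo idea concrete, here is a minimal sketch (not from the original text) for the simplest possible "furnace" scene, where every surface emits the same radiance and reflects a constant fraction (the albedo) of incoming light. In that special case the recursive equation has the closed-form solution emitted / (1 - albedo), so we can watch the estimator approach the exact answer:

```python
import random

def sample_radiance(emitted, albedo, rng):
    """One Monte Carlo path sample of the rendering equation for a
    "furnace" scene: every surface emits `emitted` and reflects a
    fraction `albedo` of incoming light, so the exact solution of the
    series e + a*e + a^2*e + ... is emitted / (1 - albedo)."""
    radiance = 0.0
    while True:
        radiance += emitted
        # Russian roulette: extend the path with probability `albedo`;
        # dividing by that probability keeps the estimator unbiased,
        # and here it exactly cancels the albedo factor.
        if rng.random() >= albedo:
            return radiance

def estimate_radiance(n_paths, emitted=1.0, albedo=0.5, seed=0):
    rng = random.Random(seed)
    total = sum(sample_radiance(emitted, albedo, rng) for _ in range(n_paths))
    return total / n_paths
```

With emitted = 1 and albedo = 0.5 the exact answer is 2.0; the estimate gets arbitrarily close as the number of paths grows, with the remaining error showing up as noise.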
There is only one rendering equation; different renderers merely apply different methods for solving it. If any two renderers solve this equation accurately enough, they should generate the same image for the same scene. This holds in theory, but in practice renderers often truncate or alter parts of the rendering equation, which may lead to different results.
As noted above, we cannot solve the equation exactly - there is always some error, although it can be made very small. In some rendering methods, the desired error is specified in advance by the user and it determines the accuracy of the calculations (e.g. GI sample density, number of GI rays, number of photons etc.). A disadvantage of these methods is that the user must wait for the whole calculation process to complete before the result can be used. Another disadvantage is that it may take a lot of trial and error to find settings that produce adequate quality in a given time frame. However, the big advantage of these methods is that they can be very efficient within the specified accuracy bounds, because the algorithm can concentrate on solving difficult parts of the rendering equation separately (e.g. splitting the image into independent regions, performing several calculation phases etc.) and then combining the results.
In other methods, the image is calculated progressively - in the beginning the error is large, but gets smaller as the algorithm performs additional calculations. At any one point of time, we have the partial result for the whole image. So, we can terminate the calculation and use the intermediate result.
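The progressive idea can be sketched as a running Monte Carlo average (an illustrative toy, not any particular renderer): every new sample refines the same estimate, and the partial result is usable at any time, with noise shrinking roughly as one over the square root of the sample count.

```python
import random

def progressive_estimates(sample_fn, n_samples):
    """Yield a running Monte Carlo estimate after each new sample.
    At any point the partial result is usable; stopping early simply
    leaves more noise in the image."""
    total = 0.0
    for n in range(1, n_samples + 1):
        total += sample_fn()
        yield total / n

# Illustrative use: noisy samples of a quantity whose true value is 2.0.
rng = random.Random(1)
estimates = list(progressive_estimates(lambda: rng.gauss(2.0, 1.0), 10000))
```

Early entries of `estimates` are noisy; later ones are much closer to the true value, which is exactly the behavior of a progressive renderer that can be terminated whenever the quality is acceptable.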
Exact (Unbiased or Brute-Force) Methods.
Produce very accurate results.
The only artifact these methods produce is noise.
Renderers using exact methods typically have only a few controls for specifying image quality.
Typically require very little additional memory.
Unbiased methods are not adaptive and so can be extremely slow to produce a noise-free image.
Some effects cannot be computed at all by an exact method (for example, caustics from a point light seen through a perfect mirror).
It may be difficult to impose a quality requirement on these methods.
Exact methods typically operate directly on the final image; the GI solution cannot be saved and re-used in any way.
Path tracing (brute-force GI in some renderers).
Bi-directional path tracing.
Metropolis light transport.
Approximate (Biased) Methods:
Adaptive, so typically a lot faster than exact methods.
Can compute some effects that are impossible for an exact method (e.g. caustics from a point light seen through a perfect mirror).
Quality requirements may be set and the solution can be refined until those requirements are met.
For some approximate methods, the GI solution can be saved and re-used.
Results may not be entirely accurate (e.g. may be blurry) although typically the error can be made as small as necessary.
Artifacts are possible (e.g. light leaks under thin walls etc).
More settings for quality control.
Some approximate methods may require (a lot of) additional memory.
Light cache in V-Ray.
Hybrid Methods: Exact Methods Used for Some Effects, Approximate Methods for Others.
Combine both speed and quality.
May be more complicated to set up.
Final gathering with Min/Max radius 0/0 + photon mapping in mental ray.
Brute force GI + photon mapping or light cache in V-Ray.
Light tracer with Min/Max rate 0/0 + radiosity in 3ds Max.
Some methods can be asymptotically unbiased - that is, they start with some bias initially, but it is gradually decreased as the calculation progresses.
Shooting methods start from the lights and distribute light energy throughout the scene. Note that shooting methods can be either exact or approximate.
Can easily simulate some specific light effects like caustics.
They don't take the camera view into consideration; thus they might spend a lot of time on parts of the scene that are not visible or do not contribute to the image (e.g. caustics that are not visible must still be computed).
Produce more precise solutions for portions of the scene that are close to lights; regions that are far from light sources may be computed with insufficient precision.
Cannot efficiently simulate all kinds of light effects, e.g. object lights and environment lights (skylight); non-physical light sources are also difficult to simulate.
photon mapping (approximate).
particle tracing (approximate).
light tracing (exact).
some radiosity methods (approximate).
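As a toy illustration of the shooting idea (a sketch, not any renderer's actual implementation): photons are emitted from a light, each carrying an equal share of its power, and deposited into a spatial structure regardless of whether the camera will ever see that part of the scene.

```python
import random
from collections import defaultdict

def shoot_photons(n_photons, light_power, n_cells=10, seed=0):
    """Toy shooting pass: a light emits photons that land at uniformly
    random positions on a 1-D "floor" split into cells. Each photon
    carries an equal share of the light's power; the photon map stores
    the energy deposited per cell. Energy is distributed over the whole
    floor whether or not the camera can see it."""
    rng = random.Random(seed)
    photon_map = defaultdict(float)
    energy_per_photon = light_power / n_photons
    for _ in range(n_photons):
        cell = int(rng.random() * n_cells)          # which cell was hit
        photon_map[cell] += energy_per_photon
    return photon_map
```

The total deposited energy equals the light's power, and each cell receives roughly an equal share; a later gathering pass would estimate radiance from the local photon density.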
Gathering methods start from the camera and/or the scene geometry. Note that gathering methods can be either exact or approximate.
They concentrate the work on the parts of the scene we are actually interested in and can therefore be more efficient than shooting methods.
Can produce a very precise solution for all visible parts of the image.
Can simulate various light effects (object and environment lights), non-physical lights.
Some light effects (caustics from point lights or small area lights) are difficult or impossible to simulate.
Path tracing (exact)
Irradiance caching (final gathering in mental ray), (approximate).
Some radiosity methods (approximate).
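A toy sketch of the gathering idea (illustrative only): at a single shading point, we sample directions over the hemisphere and average the incoming radiance. For a constant environment of radiance L0, the exact irradiance is pi * L0, so the estimate can be checked against a known answer:

```python
import math
import random

def gather_irradiance(env_radiance, n_rays, seed=0):
    """Toy gathering pass at one shading point: Monte Carlo estimate of
    E = integral of L(w) * cos(theta) over the hemisphere, using uniform
    hemisphere sampling (for which cos(theta) is uniformly distributed
    on [0, 1)). For a constant environment L0 the exact answer is
    pi * L0."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rays):
        cos_theta = rng.random()        # uniform hemisphere sampling
        total += env_radiance * cos_theta
    # The pdf of uniform hemisphere sampling is 1 / (2*pi).
    return total / n_rays * 2.0 * math.pi
```

A real gatherer traces a ray per sampled direction to find the incoming radiance instead of using a constant; the estimation machinery is the same.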
Hybrid methods combine shooting and gathering; again, they can be either exact or approximate.
Can simulate nearly all kinds of light effects.
May be difficult to implement and/or set up.
Final gathering + photon mapping in mental ray (approximate).
Irradiance map/brute force GI + photon map in V-Ray (approximate).
Bi-directional path tracing and metropolis light transport (exact).
Some radiosity methods (approximate).
Some approximate methods allow the caching of the GI solution. The cache can be either view-dependent or view-independent.
Shooting methods typically produce a view-independent solution.
The solution is typically of low quality (blurry and lacking details); a detailed solution requires a lot of time and/or memory.
Adaptive solutions are difficult to produce.
Regions that are far from light sources may be computed with insufficient accuracy.
Some radiosity methods
Gathering methods and some hybrid methods allow for both view-dependent and view-independent solutions.
View-dependent solutions:
Only the relevant parts of the scene are taken into consideration (no time is wasted on regions that are not visible).
Can work with any kind of geometry (i.e. no restriction on geometry type).
Can produce very high-quality results (keeping all the fine details).
In some methods, view-dependent portions of the solution can be cached as well (glossy reflections, refractions etc).
Require less memory than a view-independent solution.
Require updating for different camera positions; still, in some implementations, portions of the solution may be re-used.
Irradiance caching (in V-Ray, mental ray, finalRender, Brazil r/s, 3ds Max's light tracer).
View-independent solutions:
The solution needs to be computed only once.
All of the scene geometry must be considered, even though some of it may never be visible.
The type of geometry in the scene is usually restricted to triangular or quadrangular meshes (no procedural or infinite geometry allowed).
Detailed solutions require lots of memory.
Only the diffuse portion of the solution can be cached; view-dependent portions (glossy reflections) must still be computed.
Some radiosity methods.
Different view-dependent and view-independent techniques can also be combined.
Photon mapping and irradiance caching in V-Ray.
Photon mapping and final gathering in mental ray.
Radiosity and light tracer in 3ds Max.
V-Ray supports a number of different methods for solving the GI equation - exact, approximate, shooting and gathering. Some methods are more suitable for some specific types of scenes.
V-Ray supports two exact methods for calculating the rendering equation: brute force GI and progressive path tracing. The difference between the two is that brute force GI works with traditional image construction algorithms (bucket rendering) and is adaptive, whereas path tracing refines the whole image at once and does not perform any adaptation.
All other methods used by V-Ray (irradiance map, light cache, photon map) are approximate methods.
The photon map is the only shooting method in V-Ray. Caustics can also be computed with photon mapping, in combination with a gathering method.
All other methods in V-Ray (brute force GI, irradiance map, light cache) are gathering methods.
V-Ray can use different GI engines for primary and secondary bounces, which allows you to combine exact and approximate, shooting and gathering algorithms, depending on your goal. Some of the possible combinations are demonstrated below.
Example: Comparisons of Different GI Methods
Here is a scene rendered with different GI algorithms in V-Ray. Combining the different GI engines allows great flexibility in balancing time versus quality of the final image.
Brute force GI, 4 bounces.
The image is darker because only 4 light bounces are computed.
Notice the grain and the long render time.
Render time: 25m 3.9s
Irradiance map + brute force GI, 4 bounces.
The image is darker because only 4 light bounces are computed.
The grain is gone, although the GI is a little blurry.
(see the GI caustics below the glass sphere).
Render time: 5m 56.8s
Light cache only.
Very fast, but shadows are blurry
( Store direct light for the light map is on).
Render time: 0m 20.2s
Light cache and direct lighting
( Store direct light is off).
Render time: 0m 34.8s
Brute force GI + light cache.
There is some grain in the GI, but it is a lot faster than brute force GI alone.
Render time: 5m 36.0s
Irradiance map + light cache;
probably the best quality/speed ratio.
Render time: 1m 39s
Photon map only.
Notice the caustics from the glass sphere, as well as the dark corners.
Render time: 0m 46.2s
Photon map and direct lighting.
Render time: 1m 2.1s
Photon map with precomputed irradiance only.
Splotchy, but faster than a raw photon map.
Render time: 0m 38.5s
Irradiance map + photon map.
Notice the dark corners and incorrect shading on the letters.
Render time: 3m 23.5s
Irradiance map + photon map with retracing of corners.
Corners are better although still a little dark.
Render time: 7m 49.2s
Irradiance map + photon map with precomputed irradiance, with corner retracing.
Render time: 5m 21.0s
Irradiance map + light cache with GI caustics enabled.
Notice the slowdown due to the caustics.
Render time: 2m 17.6s
Light cache in Progressive path tracing Mode with photon-mapped caustics.
Render time is quite high.
Render time: 1h 44m 15.4s
Example: GI Caustics
This example shows GI caustics generated by a self-illuminated object.
Example: Light Bounces
This example shows the effect of the number of light bounces on an image:
Direct lighting only: GI is off.
1 bounce: irradiance map, no secondary GI engine.
2 bounces: irradiance map + brute force GI with 1 secondary bounce.
4 bounces: irradiance map + brute force GI with 3 secondary bounces.
8 bounces: irradiance map + brute force GI with 7 secondary bounces.
Unlimited bounces (complete diffuse lighting solution): irradiance map + light cache.
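The darkening at low bounce counts can be quantified in a simple constant-albedo model (a sketch under that simplifying assumption, not an exact property of the scene above): cutting paths after k bounces keeps only a partial geometric sum of the full solution.

```python
def captured_fraction(albedo, n_bounces):
    """Fraction of the full GI solution captured when light paths are
    cut off after `n_bounces` diffuse bounces, in a toy model where
    every surface has the same albedo: the partial geometric sum
    1 + a + ... + a^k divided by the full sum 1 / (1 - a)."""
    partial = sum(albedo ** i for i in range(n_bounces + 1))
    return partial * (1.0 - albedo)
```

For an albedo of 0.5, four bounces already capture about 97% of the full solution and eight bounces about 99.8%, which is why the images with few bounces look only slightly darker and why extra bounces quickly stop mattering for moderately reflective scenes.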
Example: Ambient Occlusion
This example demonstrates the effect of the global ambient occlusion options.
The first image is rendered with the Light cache for both primary and secondary bounces, Fixed Filter type for the light cache, and Store direct light off. The second image is rendered with the same light cache settings, but with global ambient occlusion enabled. The third image is rendered without ambient occlusion, with the Brute force GI engine for primary bounces and the Light cache as a secondary engine with Nearest Filter type. The render times include the time for calculating the light cache. Note how ambient occlusion can produce a feeling of a more detailed image, even though the result is not entirely correct.
Ambient occlusion is off - lighting is good, but there is a lack of detail.
Ambient occlusion is on - details are much more defined.
Brute force GI, no ambient occlusion - details are fine, but render times are longer.