Metro Exodus: A Deeper Look at Raytracing
With the launch of the new range of NVIDIA RTX graphics cards, we take a deeper look at the raytracing technology behind Metro Exodus.
Introduction to Raytracing
Raytracing is the global standard for offline rendering due to its ability to accurately model the physical behaviour of light in the real world, but, due to its computational intensity, it has until now not been viable for real-time use.
Rays in computer graphics are analogous to individual photons of light travelling continuously in a straight line until they hit a surface, where they may be absorbed or reflected. Some of the photons bouncing around an environment will be reflected directly into our eyes, and it is the paths of these photons that we model.
A single ray-trace operation on the GPU searches for the shortest straight-line path between the pixel being rendered and some arbitrary polygon elsewhere in the scene. This search can potentially land on any given polygon in the entire scene, and with polygon counts numbering in the millions it can be a very long search indeed. However, the end result is that we gain access to the full and accurate history of that photon, from when it left the polygon to when it was reflected into the camera.
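To make the search concrete, here is a minimal sketch of a closest-hit query in Python. The function names (`ray_triangle`, `closest_hit`) are our own illustrative choices, not engine code, and the brute-force loop over every triangle is exactly what dedicated RTX hardware avoids by traversing bounding-volume hierarchies instead; the per-triangle intersection maths, however, is the same.

```python
# Illustrative brute-force closest-hit search: test the ray against every
# triangle in the scene and keep the nearest intersection. Hardware raytracing
# accelerates this same search with bounding-volume hierarchy traversal.

def _sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def _dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def _cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])

def ray_triangle(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore test; returns distance t along the ray, or None on a miss."""
    v0, v1, v2 = tri
    edge1, edge2 = _sub(v1, v0), _sub(v2, v0)
    h = _cross(direction, edge2)
    det = _dot(edge1, h)
    if abs(det) < eps:            # ray parallel to the triangle's plane
        return None
    inv = 1.0 / det
    s = _sub(origin, v0)
    u = inv * _dot(s, h)
    if u < 0.0 or u > 1.0:        # outside the triangle (barycentric u)
        return None
    q = _cross(s, edge1)
    v = inv * _dot(direction, q)
    if v < 0.0 or u + v > 1.0:    # outside the triangle (barycentric v)
        return None
    t = inv * _dot(edge2, q)
    return t if t > eps else None # only hits in front of the ray origin

def closest_hit(origin, direction, triangles):
    """Nearest triangle hit by the ray, as (index, distance), or None."""
    best = None
    for i, tri in enumerate(triangles):
        t = ray_triangle(origin, direction, tri)
        if t is not None and (best is None or t < best[1]):
            best = (i, t)
    return best
```

A ray fired down the z-axis at two triangles, one at z = 5 and one at z = 10, returns the nearer hit at distance 5; the same ray fired backwards misses both.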
Every so often gaming graphics undergoes a revolution where consumer hardware reaches such a level of sophistication that we can throw off the old practices designed to do the best with what was available at the time and replace them with something better. Hardware T&L in the 90s allowed games to properly move into the 3D era, and programmable shaders paved the way for physically based rendering in the 00s.
Now, in the new cards we are seeing a wealth of new features. There are specialized subsystems for quickly performing billions of geometry intersection tests. They have fast memory to handle the very high access rates required for randomly searching large scenes. And they are capable of full async compute, allowing this work to be done in parallel with the rasterization pipeline. These features allow us to replace the existing paradigms with ones that have fewer limitations, are more advanced, and are more representative of reality. For us, it is an exciting time to be working with this technology right from the very beginning.
The first feature we have chosen to implement with this new technology is a replacement for our global illumination pipeline.
Global Illumination (GI) aims to model how soft, diffuse light bounces around a scene, creating complex lighting interactions far beyond those of simple directional light sources. In reality, lights do not simply illuminate the surfaces they are shone upon. Rays of light scatter in every direction, carrying information about materials and light intensity to the surfaces all around them, including those otherwise in darkness. Attempts to approximate this using textures or volumetric methods suffer from short range and low fidelity, but by modelling the light rays directly we can achieve much more.
Conventional GI systems are actually the product of a number of subsystems working together to create a final image. Image Based Lights (IBLs), ambient light, virtual point lights, and radiance hint volumes are all tools to add extra light into parts of the scene that are in shadow or out of direct line of sight of a physical light source. Screen Space Ambient Occlusion (SSAO) and shadow map softening filters then remove light from the image to approximate some percentage of rays being occluded from a surface. It would be much better to actually perform the brute-force tests required to see how many rays really made it to that point. That is exactly what raytracing does, with the added benefit of bringing all of those techniques together in one package.
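That brute-force visibility test can be sketched as a simple Monte Carlo count: fire rays across the hemisphere above a surface point and measure the fraction that escape unblocked. In this illustrative Python sketch, the `is_blocked` callback is a hypothetical stand-in for a real scene visibility query (such as the closest-hit search above); the names are ours, not the game's.

```python
# Brute-force ambient occlusion sketch: sample rays over the hemisphere above
# a surface point and return the fraction that reach open sky unblocked.
# `is_blocked(point, direction)` stands in for a real scene visibility query.
import math
import random

def sample_hemisphere(normal, rng):
    """Uniform random direction on the hemisphere around `normal` (rejection sampling)."""
    while True:
        d = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        n2 = d[0] ** 2 + d[1] ** 2 + d[2] ** 2
        if 1e-6 < n2 <= 1.0:                 # keep points inside the unit ball
            inv = 1.0 / math.sqrt(n2)
            d = (d[0] * inv, d[1] * inv, d[2] * inv)
            if d[0] * normal[0] + d[1] * normal[1] + d[2] * normal[2] < 0.0:
                d = (-d[0], -d[1], -d[2])    # flip into the upper hemisphere
            return d

def ambient_occlusion(point, normal, is_blocked, n_rays=4096, seed=0):
    """Fraction of hemisphere rays that escape unblocked (1.0 = fully open)."""
    rng = random.Random(seed)
    visible = sum(1 for _ in range(n_rays)
                  if not is_blocked(point, sample_hemisphere(normal, rng)))
    return visible / n_rays
```

With no occluders the estimate is exactly 1.0; a wall blocking one half of the hemisphere drives it toward 0.5, which is the "percentage of rays occluded" that SSAO can only approximate in screen space.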
With Raytraced Global Illumination (RTGI) enabled, surfaces can adapt automatically to animated geometry and dynamic lighting environments. Surfaces accumulate light naturally based on which parts of the scene they have a direct line of sight to, and shade appropriately based on those that they do not.
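The accumulation described above boils down to estimating the total light arriving at a surface point from every direction it can see. A minimal sketch, under our own naming assumptions: `radiance_from(point, direction)` is a hypothetical stand-in for tracing a ray into the scene and shading whatever surface it hits, and the estimator integrates that radiance over the hemisphere, weighted by the cosine of the incoming angle.

```python
# Sketch of gathering incoming light at a surface point. `radiance_from` is a
# hypothetical scene callback: trace a ray, shade the surface it hits, and
# return the radiance sent back along the ray.
import math
import random

def sample_hemisphere(normal, rng):
    """Uniform random direction on the hemisphere around `normal` (rejection sampling)."""
    while True:
        d = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        n2 = d[0] ** 2 + d[1] ** 2 + d[2] ** 2
        if 1e-6 < n2 <= 1.0:
            inv = 1.0 / math.sqrt(n2)
            d = (d[0] * inv, d[1] * inv, d[2] * inv)
            if d[0] * normal[0] + d[1] * normal[1] + d[2] * normal[2] < 0.0:
                d = (-d[0], -d[1], -d[2])
            return d

def gather_irradiance(point, normal, radiance_from, n_rays=4096, seed=0):
    """Monte Carlo estimate of irradiance E = integral of L(d) * cos(theta) dw.
    Uniform hemisphere sampling has pdf 1/(2*pi), hence the 2*pi factor."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rays):
        d = sample_hemisphere(normal, rng)
        cos_t = d[0] * normal[0] + d[1] * normal[1] + d[2] * normal[2]
        total += radiance_from(point, d) * cos_t
    return 2.0 * math.pi * total / n_rays
```

As a sanity check, a uniform "sky" of radiance 1 in every direction integrates to pi, the analytic irradiance for that case. Because the callback sees the actual scene, moving geometry and changing lights are picked up automatically, which is what lets RTGI adapt where baked approximations cannot.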
Thanks to the power of RTX technology we can simplify and improve upon techniques from previous generations. We can combine disparate rendering technologies into a single unified algorithm, which benefits from access to a wider range of scene information and greater physical accuracy. It enables the creation of more realistic and believable environments and will ultimately lead to shorter development times and grander, richer gaming experiences.