Notch Adds Path Tracing for Near Photo-Realistic Renders in Motion Graphics
The Notch engine for interactive motion graphics now has a new version of its Notch Builder authoring tool, which adds full path tracing functionality to the software. The update expands the software's real-time rendering and ray tracing abilities for motion graphics artists and designers.
Integrated Path Tracing
Path tracing, a technique used in many renderers, simulates the paths of real light, including the way light bounces, reflects and refracts through a 3D scene. It also takes into account the materials the light interacts with, making near photo-realistic results possible. Path tracing is a very effective approximation of global illumination, the real-world combination of all of these lighting effects that makes scenes and surfaces look real and distinctive to people.
Global illumination is almost impossible to solve exactly in software because the ways light can interact with a scene are infinitely varied. Path tracing approximates global illumination by breaking it down into many individual paths - tracing a single ray from the camera into the scene, letting it hit and bounce off surfaces multiple times until it eventually reaches a light source. Adding up the results of these paths produces an image. If the scene has been built correctly, this way of simulating light can produce photo-realistic renders.
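The idea of summing many individual paths can be sketched in a toy form. The scene below is entirely hypothetical - a fixed probability of a bounce reaching the light and a fixed surface albedo stand in for real geometry - and only illustrates how averaging many random paths converges on a stable pixel value:

```python
import random

# Hypothetical scene parameters (assumptions for illustration only):
LIGHT_PROB = 0.3    # chance a bounced ray reaches a light source
ALBEDO = 0.7        # fraction of light a surface reflects per bounce
MAX_BOUNCES = 8     # bounce budget before a path is terminated

def trace_one_path(rng):
    """Follow a single ray from the camera until it hits the light
    or runs out of bounces; return the radiance it carries back."""
    throughput = 1.0
    for _ in range(MAX_BOUNCES):
        if rng.random() < LIGHT_PROB:   # path reaches the light
            return throughput * 1.0     # light intensity of 1.0
        throughput *= ALBEDO            # energy lost at each bounce
    return 0.0                          # path terminated unlit

def render_pixel(samples, seed=0):
    """Average many independent paths: more samples, less noise."""
    rng = random.Random(seed)
    return sum(trace_one_path(rng) for _ in range(samples)) / samples
```

With only a handful of samples the result jumps around (noise); with thousands it settles near the true value, which is why un-denoised path tracing needs so many rays per pixel.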
The new GPU path tracer in Notch Builder is integrated directly into the existing Notch toolkit as a drop-in component - path tracing can be applied to scenes and toggled on and off when needed. It becomes part of the node-based, real-time environment including particles, procedural effects, positioned cloners, lights, post-processing and compositing.
The R&D team’s aim is that the increase in render quality from the new path tracer node, which combines their own full GPU path tracing renderer with the existing Notch tools, will make Notch viable for new use cases.
Denoising is an important component in path tracing workflows. Notch offers several options, including its in-house real-time denoiser, NVIDIA's RTX-powered OptiX GPU denoiser and Intel's OIDN CPU denoiser.
Denoising techniques address the fact that soft, fuzzy, blurry or smooth looks are difficult to produce with ray tracing without either casting many more rays in slightly different directions and averaging the results, which slows the process down significantly, or accepting a lot of noise. Denoising takes the results of a limited number of rays and applies filters or other methods to achieve a smoother result that would otherwise have taken many more rays. Notch has two denoising nodes - the RT Refinement node and the RT Real-Time Denoising node.
The Refinement node, useful for interactive editing, renders the frame repeatedly, using different ray directions each time, and averages the results. Because no filtering is used, the render is accurate. The RT Real-Time Denoising node does use filters, combining a spatial filter with a temporal filter and reusing information from previous frames to build up a smooth result over a few frames, even when only a few rays are cast per frame. This method may create challenges with glossy surfaces but is useful when interactive frame rates are required.
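The temporal side of such a denoiser can be sketched in a few lines. This is a conceptual illustration only, assuming a simple exponential moving average per pixel; the function names, blend factor and tiny one-dimensional "frames" are invented for the example and are not Notch's actual implementation:

```python
import random

def accumulate(history, noisy_frame, blend=0.2):
    """Blend the new noisy frame into the running history
    (per-pixel exponential moving average)."""
    if history is None:
        return list(noisy_frame)
    return [h * (1.0 - blend) + n * blend
            for h, n in zip(history, noisy_frame)]

rng = random.Random(1)
true_value = 0.5                 # the value the pixels should converge to
history = None
for _ in range(60):              # 60 frames, each individually very noisy
    noisy = [true_value + rng.uniform(-0.3, 0.3) for _ in range(4)]
    history = accumulate(history, noisy)
# history now sits close to true_value despite the noisy inputs
```

Reusing the history this way is why the result builds up "over a few frames", and also why fast-changing glossy reflections - where the history no longer matches the current frame - can be a challenge.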
The RT Refinement node comes with an AI-based option for denoising. Notch supports Intel’s CPU-based Open Image Denoiser (OIDN) and the GPU-based OptiX denoiser from NVIDIA.
Instead of a filter-based approach, these tools use machine learning: an AI engine is pre-trained on pairs of images, each showing a noisy version of a render after a small number of iterations alongside the fully refined version of the same render - that is, teaching the engine how to take a noisy, unrefined image and produce a refined version.
OIDN and OptiX are both very useful - they can produce good results from a single refinement pass, but they work best when combined with several refinement passes. Also remember that AI denoising is applied on a frame-by-frame basis: no temporal information is fed back into the process, and each frame is evaluated independently.
NVIDIA RTX GPU – Learning to Upscale
This Notch Builder release also includes an AI image upscaler for images and video, added in partnership with NVIDIA and powered by their new RTX GPUs. It too uses machine learning, in this case accelerated by the Tensor Cores in the RTX GPUs, to upscale an image by a factor of 2x, 4x or 8x.
Working in a similar way to AI denoising, the engine is pre-trained on a data set of images, each one rendered at different resolutions. From these pairs, the engine learns how to recreate the higher-resolution version from the lower-resolution image - filling in the gaps as necessary and keeping sharp lines and details sharp throughout.
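For contrast, a classic non-AI upscale only replicates or interpolates the pixels that are already there; it has no way to invent new detail, which is exactly the gap a learned upscaler fills. A minimal nearest-neighbour sketch (illustrative only, not Notch's upscaler):

```python
def upscale_nearest(image, factor=2):
    """Classic non-AI upscaling: replicate each pixel into a
    factor x factor block. A learned upscaler instead predicts
    plausible high-resolution detail rather than duplicating
    what is already there."""
    out = []
    for row in image:
        # Widen the row, then repeat it to grow vertically too.
        wide = [px for px in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out
```

For example, a 2x2 image becomes a blocky 4x4 image with no new information - sharp diagonal edges turn into staircases, which is the artefact AI upscaling is trained to avoid.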
Potential applications of a high-quality upscaler range from being able to display old stock or project content created at a lower resolution on a super high-resolution LED screen, to producing widescreen content that you do not have time to render at full resolution. Rendering at a lower resolution and then upscaling can save time and still preserve quality.
The AI upscaler has been integrated into Notch in two ways for video, and both require the most recent NVIDIA drivers and an NVIDIA RTX GPU. Applying machine learning to content creation is still at an early stage, and results depend on how the AI engine was trained - content that is more similar to the training data set is, of course, likely to show a better outcome. www.notch.one