Intel Releases Open Image Denoise
Denoising is essential for CG practitioners. It is a key part of post-processing that removes the visual noise caused by insufficient ray-tracing samples. It is also often the most time-consuming stage, and the quality of the denoising directly affects the quality of the output images and sequences.
In an ideal world, enough light paths would be traced to every pixel of the screen, but in practice computing power is not advanced enough to do this in a reasonable, let alone real-time, amount of time. The purpose of denoising is to correct and reconstruct such under-sampled images.
To address this problem, Intel recently introduced Open Image Denoise (abbreviated OIDN), an open-source, high-performance, high-quality denoising library for ray-traced images that lets the CPU take on part of the denoising work.
Open Image Denoise gives users a high-quality, efficient, and easy-to-use denoising method that can significantly reduce render times in ray-tracing renderers.
The library filters out the Monte Carlo noise inherent in path tracing and other stochastic ray-tracing methods, and can reduce the required number of samples per pixel by multiple orders of magnitude, depending on the target quality you want to achieve.
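Why cutting samples per pixel matters so much can be made concrete with a toy Monte Carlo estimator (a sketch for illustration only; the uniform "radiance" distribution, the `TRUE_RADIANCE` value, and the sample counts are invented, not part of OIDN): the noise of a Monte Carlo pixel estimate shrinks like 1/sqrt(N), so halving the noise by brute force costs 4x the samples, which is exactly the cost a denoiser lets you avoid.

```python
import random
import statistics

random.seed(42)

TRUE_RADIANCE = 0.5  # assumed ground-truth value of the toy pixel

def sample_radiance():
    """One noisy path-tracing sample (toy stand-in: uniform on [0, 1],
    whose mean equals TRUE_RADIANCE)."""
    return random.random()

def estimate_pixel(n_samples):
    """Average n_samples, as a path tracer does per pixel."""
    return statistics.fmean(sample_radiance() for _ in range(n_samples))

def rms_noise(n_samples, trials=2000):
    """Root-mean-square error of the estimate over many trials."""
    sq_errs = [(estimate_pixel(n_samples) - TRUE_RADIANCE) ** 2
               for _ in range(trials)]
    return statistics.fmean(sq_errs) ** 0.5

for n in (4, 16, 64, 256):
    print(f"{n:4d} spp -> RMS noise {rms_noise(n):.4f}")
```

Each 4x increase in samples per pixel only halves the noise, which is why a denoiser that produces a clean image from a few samples can save orders of magnitude of render time.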
Intel has not yet released detailed technical documentation, but Open Image Denoise is built on the Intel Math Kernel Library for Deep Neural Networks (MKL-DNN). It requires a CPU supporting the Intel 64 architecture and the SSE4.2 instruction set, and it uses modern instruction sets such as SSE4, AVX2, and AVX-512 to achieve higher denoising performance.
It runs on laptops, workstations, and the nodes of HPC systems. It works efficiently not only for offline rendering but also for interactive ray tracing, depending on the hardware used.
Below are a few renderings produced with Open Image Denoise, in addition to those shown above.
NVIDIA, by contrast, uses the GPU for deep learning, training a network to predict the final rendered image from a partially completed result. The resulting AI denoiser can, compared with existing methods, produce high-quality images that closely approach the reference in a fraction of the time.
The NVIDIA team has also used AI to tackle jagged edges in in-game rendering, training a neural network to recognize aliasing artefacts and replace them with smooth, anti-aliased pixels, resulting in a clearer image.
One approach is CPU-based and the other GPU-based, so which is better? At present there is no head-to-head technical comparison of the two, but judging from the results each has shown, both work well; perhaps the real difference comes down to speed.