Intel releases open source image denoiser
Intel has released a new open source image denoiser called, appropriately, Open Image Denoise under an Apache 2.0 license. It comprises a collection of “high-performance, high-quality denoising filters for images rendered with ray tracing.”
According to the new site, the algorithm “filters out the Monte Carlo noise inherent to stochastic ray tracing methods like path tracing, reducing the amount of necessary samples per pixel by even multiple orders of magnitude”.
The tool runs on Intel 64 architecture CPUs and compatible architectures. Intel claims it is efficient enough to be suitable not only for offline rendering but, depending on the hardware used, for interactive ray tracing as well. Find out more and download the source code and binaries on GitHub.
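For anyone curious what driving it looks like, here is a minimal C++ sketch based on the example in the project's documentation; the buffer pointers, dimensions and the optional albedo/normal inputs are placeholders you would wire up to your own renderer:

    #include <OpenImageDenoise/oidn.hpp>
    #include <iostream>

    // Minimal sketch: denoise one HDR color buffer with optional albedo and
    // normal auxiliary buffers. The float* buffers, width and height are
    // assumed to come from your own renderer.
    void denoiseFrame(float* color, float* albedo, float* normal, float* output,
                      int width, int height)
    {
        // Create and commit a device (CPU-based in this release).
        oidn::DeviceRef device = oidn::newDevice();
        device.commit();

        // "RT" is the generic ray tracing denoising filter.
        oidn::FilterRef filter = device.newFilter("RT");
        filter.setImage("color",  color,  oidn::Format::Float3, width, height);
        filter.setImage("albedo", albedo, oidn::Format::Float3, width, height); // optional
        filter.setImage("normal", normal, oidn::Format::Float3, width, height); // optional
        filter.setImage("output", output, oidn::Format::Float3, width, height);
        filter.set("hdr", true); // the color buffer is HDR
        filter.commit();

        filter.execute();

        // Report any error raised by the device.
        const char* errorMessage;
        if (device.getError(errorMessage) != oidn::Error::None)
            std::cout << "Error: " << errorMessage << std::endl;
    }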
This is pretty awesome. It seems to be far better than the OptiX denoiser, and the cool thing is that it's not a platform-limited denoiser… it works on any CPU, Intel or AMD!
AFAIK there’s no confirmation as to how well it runs on AMD processors though it’s known that it isn’t hardware-specific.
Well, I have confirmation, because I've seen it working on a Threadripper 1950X in real time inside Blender Cycles. A dev did an experimental implementation and it's astonishing: far better than the native Cycles denoiser and far better than OptiX 🙂
BTW this does not mean that the implementation will go public with the real-time version, because it has its caveats, but it's amazing 🙂
That’s cool because my rig is based on an AMD CPU (not a Threadripper, though). I hope Ton is already eyeing it for implementation in 2.8, or maybe in 2.81. Fingers crossed
I hope so too 🙂
Good, because from what I’ve seen the OptiX denoiser is not that good anyway.
Hard to tell from the images on the site, but the denoised images look a little blurred or soft. I’m not complaining, it's free, which is awesome, just a first-impressions observation. It would have been good to have comparisons between the denoised image and a render that has been left to run with more samples.
Nvidia OptiX is worse.
Not interested until it supports animations. For now all denoisers are amazing with stills, but as soon as you start using them for animations you get flickering worse than with GI 15 years ago 😉
Pixar and Disney successfully use in-house AI denoising. Sooner or later we’ll get that too.
We’ve been using VRay’s denoiser for animations for a while now. Just set up the vrimg manually to sample the frames before and after if you get any flickering. Heck, I’ve been using it without doing that for most simple animations with few issues. Most of our stuff is not in enclosed environments, mostly products on floors; I imagine rooms and other environments would have more issues.
Yeah, unfortunately 🙁 We’ve been using the NeatVideo denoiser – it’s really good, but you still need to start with a render of 10-20k samples to get a nice, clean denoised image. To be honest that has been our workflow since Iray, so basically the very beginning of GPU rendering, and nothing has changed since then.
I was so happy when I saw the first AI denoiser demos, and quite honestly, when working with still frames you can get away with 500-1000 samples. Hell, we send preview renders to clients with only 20 samples and they never complain. But with animation it’s a whole different story – I don’t think the AI denoisers have changed our workflow for animations at all :/
I was at the Iray point too, but when Corona released their denoiser it was pretty awesome and we were able to reduce the number of samples a lot. The Blender denoiser is a bit worse than Corona's, but it is a tremendous help, especially with pictures over 4K; we can render those images with a very low number of samples and the denoiser works flawlessly.
OptiX was a bit better than the Blender one, but it depends on the situation, and the Intel one seems to be far, far better than any other denoiser I've seen. But the option you are choosing, NeatVideo, is right now the worst option, because it does not take into account anything like the albedo, normals or other channels that help avoid turning your image into a bunch of clouds. You should try OptiX if you can.
And regarding cross-frame denoising: the Blender denoiser has cross-frame denoising, it is just not implemented in the UI right now. It can be used with Cycles standalone after the pictures have been rendered; it cannot be used while rendering, it has to be run after the full animation has been rendered (to have before and after information to avoid flicker), and I think this is also the idea with other denoisers. But if you reach a certain sampling level there is no flicker even without cross-frame denoising; it was that way with the Corona one too, and I suspect it is the same situation for the majority of render-engine denoisers.
Cheers!
I test OptiX with pretty much every Nvidia build, and it's still not usable for animation. Why do you say NeatVideo is the worst option? From what I gather it’s an industry standard right now, and it has been for a couple of years. No matter who you ask, whatever GPU renderer they use, odds are they use NeatVideo on their renders. Sure, it doesn’t use the other render passes, but why would it? It’s amazing for degraining real footage, where there are no additional passes. I know the GPU denoisers use them, because they can, but if I’m still getting better results with NV, they are either not using them right or the passes are actually useless for denoising. Just remember we’re talking about animations: the passes are helpful on single-frame renders, but with an animation most of the additional data can just be calculated from interframe deltas, and that’s what NV is doing. They just have different approaches, but I don’t really care, since the end result is all I’m after, and right now NV is the one to beat :/
You say that Corona has a great denoiser – yeah, assuming you’re looking at the beauty render. But try using any of the render passes in comp and it quickly falls apart; at some point, at some number of passes, you actually get a better result denoising the passes with NV than using the Corona denoiser.
The best part is, I see people lowering their standards right now: using the GPU denoisers, they produce footage that just flickers or is blobby. 10 years ago no one would even think of treating a render like that as the final render; you tweaked the GI, baked some stuff if you had to, until it didn’t flicker at all. Now fast forward 10 years, better software, faster machines, and the quality has dropped. I just can’t believe it :/
I agree with you that NeatVideo is the best denoiser for footage, but I disagree that it is the best for 3D.
What I mean is that you need a cross-frame denoiser for a 3D animation; a “still” or per-frame denoiser does not work. The thing is that cross-frame denoisers are not very well known. They should be used AFTER the render has been done, as a post process, and they use all the passes they need (albedo, normal and whatever else), but the denoiser needs to be prepared to take into account at least the previous and following frames.
The Cycles denoiser has a cross-frame feature, but it's not implemented in the UI, so it's hard to use and little known. Still, I assure you that any cross-frame denoiser with passes will beat NeatVideo, for the simple reason that it has much more information about what's going on than NeatVideo does. I think the RenderMan denoiser also has a cross-frame feature.
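Just to illustrate the information gap, here is a purely hypothetical sketch (no real denoiser exposes exactly this): a footage denoiser like NeatVideo only sees the beauty frames, while a cross-frame 3D denoiser also gets the neighbouring frames and the auxiliary passes.

    #include <vector>

    // Hypothetical illustration only, not a real API: the point is how much
    // more information a cross-frame 3D denoiser has than a footage denoiser,
    // which only sees the beauty frames themselves.
    struct FramePasses {
        int width = 0, height = 0;
        std::vector<float> beauty;  // noisy beauty pass (RGB)
        std::vector<float> albedo;  // albedo pass (RGB)
        std::vector<float> normal;  // normal pass (XYZ)
    };

    // Placeholder body: it just averages the three beauty passes to show the
    // data flow. A real cross-frame denoiser would use the albedo and normal
    // passes to preserve detail and the neighbouring frames to suppress flicker.
    std::vector<float> denoiseCrossFrame(const FramePasses& prev,
                                         const FramePasses& curr,
                                         const FramePasses& next)
    {
        std::vector<float> out(curr.beauty.size());
        for (size_t i = 0; i < out.size(); ++i)
            out[i] = (prev.beauty[i] + curr.beauty[i] + next.beauty[i]) / 3.0f;
        return out;
    }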
The Corona denoiser was not bad, especially if you lock the noise pattern. In Blender, if you do that you avoid flickering, but you will still suffer from “clouds”. There is a feature that is going to be implemented soon, Dithered Sobol, a different noise distribution pattern based on blue noise; it helps A LOT to avoid cloudiness in denoised pictures. So that, mixed with a pattern fixed over time, should reduce flickering a lot, but the proper way to avoid it is to have a cross-frame denoiser, and I think the Intel one was prepared for that, or was going to be; I'm not sure about this.
BTW, regarding quality, it has not dropped for everyone. You have to keep in mind that there are a lot more users than before, and some users don't care about quality, but for others quality has not dropped. There are productions out there that are noisy as hell and you would not notice at all because they are properly denoised, and I'm not referring just to Pixar-level productions. All of our reel that is not real time is denoised, and there may be some places where you can notice a tiny bit of flicker, but in general I'm pretty happy and the quality is far better than what we delivered some years ago.
Bear in mind that our render times dropped from 2 hours per frame to 10 minutes per frame… that is a huge improvement if we can at least maintain the same quality level 🙂
Cheers.
Because artists realize that a little bit of noise/grain is not bad at all…
I totally agree with you; that is why for animation we sometimes mix a bit of the noisy frame with the denoised one. But for that noise to be visually appealing it has to be animated noise (with a different seed for each frame) so it looks more like film grain, but also to avoid flickering with denoisers you need fixed noise (with the same seed for each frame), and those two factors are not compatible… it's a pity, but it is what it is.
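As a rough illustration of that mix (a toy sketch, not our actual pipeline; mixGrainBack and the grain amount are made up for the example):

    #include <vector>

    // Toy sketch: blend a fraction of the noisy render back over the denoised
    // frame so the result keeps some film-grain-like texture. grainAmount is
    // the fraction of the noisy frame to keep (e.g. 0.15 = 15% noisy).
    std::vector<float> mixGrainBack(const std::vector<float>& denoised,
                                    const std::vector<float>& noisy,
                                    float grainAmount)
    {
        std::vector<float> out(denoised.size());
        for (size_t i = 0; i < denoised.size(); ++i)
            out[i] = (1.0f - grainAmount) * denoised[i] + grainAmount * noisy[i];
        return out;
    }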
The problem comes when artists render at 10 samples and expect the denoiser to do all the hard work… this is not going to happen… right now.
Cheers!
“but also to avoid flickering with denoisers you need fixed noise (with the same seed for each frame)”
If you use a temporal denoiser, like NeatVideo, you won’t have this issue. And you get better results from it with random noise, because the temporal denoiser filters high-frequency pixel changes between frames. That’s why fixed noise doesn’t work well: from the algorithm’s point of view there is no temporal noise then, only spatial. Resolve has a nice denoiser as well, both temporal and spatial, which you can mix.
That is true, and that is what a cross-frame denoiser does. What I'm talking about is the current implementation of the OptiX denoiser and the current Cycles denoiser.
The problem with NeatVideo is that it does not take into account other passes like normal or albedo, so its performance is worse than a proper cross-frame denoiser's on 3D renders.
In the end I'm not saying that NeatVideo is bad, but it is worse than a denoiser with such features already implemented; maybe NeatVideo could update its algorithms to use additional passes for 3D renders.
Cheers!