They gotta work on a faster viewport, fix that nasty undo performance, and finally make Cycles faster! I love Cycles, but for some scenes, it takes forever.
So, in case it’s not clear, speed, speed, SPEED! 🙂
Undo performance is already great in the 2.92 alpha. Same for Cycles with denoising and RTX ray tracing in the alpha builds. But it's always better to have more speed.
Denoising is great, but it's nearly useless in animation.
Not if you use a temporal denoiser (which, afaik, is not available in Blender).
To be honest, Blender does have a temporal denoiser; it's just hidden because it's not easy to use and can't be used as a node or as a render-time denoiser. It also only uses the NLM denoising technique, so it's not very useful on images with super low sample counts.
https://docs.blender.org/api/master/bpy.ops.cycles.html
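For reference, a minimal sketch of calling that hidden operator from a script (assuming a 2.8x/2.9x-era build; the animation has to be rendered to multilayer OpenEXR with the denoising data passes stored first):

```python
import bpy

# Store the denoising data passes the standalone denoiser needs
# (this must be enabled before rendering the animation).
bpy.context.view_layer.cycles.denoising_store_passes = True

# After rendering the animation to multilayer OpenEXR via the scene's output
# settings, run the hidden temporal/NLM denoiser over the finished sequence.
# With empty filepaths it falls back to the scene's render path and frame range.
bpy.ops.cycles.denoise_animation(input_filepath="", output_filepath="")
```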
However, to use something like OIDN or OptiX (OIDN delivers much better results), you can try our build (BoneMaster) and configure Dithered Sobol as the sampler instead of Sobol. That provides a much more ordered sampling result (especially if you use it in conjunction with Scramble Distance) and makes the denoiser's job way easier. Finally, if you use some advanced denoising techniques, like denoising per pass instead of denoising the beauty, you will recover much more albedo detail. In the end you will avoid a lot of the splotches and "denoise noise" that are easily visible in denoised animations rendered with super low sample counts.
Regarding speed, you can also use BoneMaster and leverage the increased sampling speed there. If you use it together with Scramble Distance, which especially accelerates the GPU, you will get a much faster version of Cycles 🙂
Finally, there are other methods to accelerate Cycles, especially if you're willing to introduce more bias into the render instead of leaving it as unbiased as possible.
I guess you're referring to denoising the Noisy Image, Denoising Normal, and Denoising Albedo passes?
I’ve downloaded Bone Master. Where can I find those options?
Is there some tutorial that covers those topics?
Nope, I mean using the render passes: assemble the beauty from Direct Diffuse, Indirect Diffuse and Diffuse Color, etc., then denoise only the lighting passes (Direct and Indirect) and don't touch the color pass. You will get a way better result.
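To illustrate the idea (not Juang3d's exact setup, just a minimal sketch for the diffuse component using vanilla Blender's compositor Denoise node; glossy and transmission would be handled the same way and added on top):

```python
import bpy

scene = bpy.context.scene
view_layer = bpy.context.view_layer

# Enable the split diffuse passes so they appear on the Render Layers node.
view_layer.use_pass_diffuse_direct = True
view_layer.use_pass_diffuse_indirect = True
view_layer.use_pass_diffuse_color = True

scene.use_nodes = True
tree = scene.node_tree
nodes, links = tree.nodes, tree.links

rl = nodes.new("CompositorNodeRLayers")

# Lighting = DiffDir + DiffInd; only this gets denoised.
add = nodes.new("CompositorNodeMixRGB")
add.blend_type = 'ADD'
links.new(rl.outputs["DiffDir"], add.inputs[1])
links.new(rl.outputs["DiffInd"], add.inputs[2])

denoise = nodes.new("CompositorNodeDenoise")
links.new(add.outputs["Image"], denoise.inputs["Image"])

# Multiply the clean lighting by the untouched color pass to rebuild the diffuse beauty.
mul = nodes.new("CompositorNodeMixRGB")
mul.blend_type = 'MULTIPLY'
links.new(denoise.outputs["Image"], mul.inputs[1])
links.new(rl.outputs["DiffCol"], mul.inputs[2])

comp = nodes.new("CompositorNodeComposite")
links.new(mul.outputs["Image"], comp.inputs["Image"])
```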
Regarding the options, they are in Sampling – Advanced: set Pattern to Dithered Sobol, and Scrambling Distance is right below it.
And regarding tutorials, I have a video:
https://www.youtube.com/watch?v=oDplh49kD78
We had a bug back then, so the acceleration you see now should be way better than what's in the video 🙂
Thank you, Juang3d
That was interesting to watch. I knew most of those settings but the AO trick was new to me and that helped.
And you're right, the result is less prone to errors once AI-denoised with that approach.
Overall, I think some of the default settings in Blender are just off: samples could be lower by default. Not to mention how messy Mantaflow’s settings can look.
If those things were finessed I’m sure a lot of new users would feel less disoriented and would enjoy the experience more.
P.S. and thanks for that build, better than vanilla!
And I read the disclaimer on GraphicAll about the GPL license and providing the code. I have a feeling I know where that comes from 🙂
an “E-Cy..” argument something..
You are welcome, I’m happy that helped you 🙂
Well, I think the GPL is what keeps Blender safe from all the fears voiced here so many times, so I think it's important to properly understand it and comply, or just not use Blender XD
I haven't noticed any problems with AI denoising in animations if there are enough samples and motion blur is in play. But DaVinci Resolve Studio has temporal and spatial denoise that will help when needed. You can also try vector motion blur in Blender's compositor or in DaVinci Resolve Fusion (no Studio version needed); this can also be used against noise (Fusion's optical flow).
I always get splotchy animations. To be able to use AI denoising and get a clean, splotch-free animation I have to bump up the samples, often by a lot, which kind of defeats the purpose of using AI denoise.
I don't know how Blender is going to pull this off, but we really need a Nuke replacement. Join forces with Natron, maybe?
What’s wrong with Fusion?
(Not being a fanboy or anything. I'm genuinely interested in what professional Nuke users miss in Fusion, such that it wouldn't work as a valid replacement.)
The issue with Fusion for me is that nodes can be complex. Just dealing with pre- and post-multiplied inputs and outputs drove me mad.
In Nuke, each node does one thing, and that makes it easy to diagnose problems with a comp, especially if you pick up someone else's comp. There's also a lot of hidden power in Nuke, like writing expressions in a field, or dragging values from one field to another to link them. The whole workflow in Nuke is very streamlined and actually fun for me.
I jumped from Fusion to Nuke some 15 years ago and have been wanting to come back. I'm furious about the Nuke subscription terms. The things I miss when trying to work in Fusion are the little smart workflow touches that Nuke has: the keyboard shortcuts for laying down nodes, the way sequences import at the frames they were rendered at (Fusion brings a sequence that starts at frame 800 in at frame 0 on the timeline), being able to edit multiple nodes at once, having a simple sRGB LUT, and many others. You can work really fast in Nuke thanks to these things, but I hope Fusion adopts some of them at some point.
From what people say, it seems like most of the issues lie in the way Fusion tends to hide features away in the UI, since most of what you, slebed and equiso mentioned actually is in Fusion, just somewhat obscurely hidden. 🙂
We were looking at Fusion. We needed a compositor to manage passes and do minor tweaks for product viz. In the end we decided on Houdini, as it was good enough for the basic stuff; actually, the node system is much more comfortable to use, and the HDA system gives a lot of flexibility for a pipeline working with a big database of products.
What kept us away from Fusion was the lack of a robust headless command-line system. In Houdini we can send a JSON from the database that writes all the strings, variables, parameter paths, etc. and execute it, all with a few lines of Python. In Fusion we would have had to write a Lua script that opens Fusion, searches for some inputs, changes them, etc. It felt very complicated to automate a flexible, fast, robust comp integrated into our pipeline.
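That kind of headless job could look roughly like this (a minimal sketch, not their actual pipeline; the file names, node paths and JSON layout are hypothetical, and it would be run with hython):

```python
# job.py — run headlessly with: hython job.py job.json
import json
import sys

import hou

# Assumed JSON coming from the database, e.g.:
# {"hip": "/projects/product.hip",
#  "parms": {"/out/karma1/picture": "/renders/sku_1234.$F4.exr"},
#  "rop": "/out/karma1",
#  "frame_range": [1, 1]}
with open(sys.argv[1]) as f:
    job = json.load(f)

# Load the scene, push every parameter value from the database into it,
# then kick off the render on the requested output driver.
hou.hipFile.load(job["hip"])

for parm_path, value in job["parms"].items():
    hou.parm(parm_path).set(value)

rop = hou.node(job["rop"])
rop.render(frame_range=tuple(job["frame_range"]))
```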
P.S.: oops, I meant to reply to Ludwik :P