Unreal Engine 5 preview with almost unlimited geometry and real-time GI
Epic has released the first public demo of Unreal Engine 5, demonstrating some of the impressive technology we can expect from next year’s release. The two headline features shown in the demo, which is running on a PlayStation 5, are Nanite, which promises an end to polygon budgets, draw calls, and LODs; and Lumen, a new real-time fully dynamic global illumination solution.
According to Epic, Nanite creates “virtualized micropolygon geometry” to allow the engine to render potentially huge polygon counts. Epic claims that users will be able to import assets that are “film-quality […] each with hundreds of millions or billions of polygons. […] Nanite geometry is streamed and scaled in real-time so there are no more polygon count budgets, polygon memory budgets, or draw count budgets; there is no need to bake details to normal maps or manually author LODs, and there is no loss in quality.”
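The “no manual LODs” claim rests on the engine choosing detail automatically from projected screen-space error: pick the coarsest representation whose error, projected onto the screen, is below one pixel. A minimal sketch of that selection idea (the function name, the per-level error values, and the maths are illustrative assumptions, not Epic’s actual algorithm):

```python
import math

def choose_lod(distance, fov_y, screen_height, level_errors):
    """Pick the coarsest detail level whose projected geometric
    error stays under one pixel. level_errors holds the world-space
    error (in metres) of each level, coarsest first."""
    # World-space size covered by one pixel at this distance,
    # for a pinhole camera with vertical field of view fov_y.
    pixel_world_size = 2.0 * distance * math.tan(fov_y / 2.0) / screen_height
    for lod, error in enumerate(level_errors):
        if error <= pixel_world_size:
            return lod  # coarsest level that is visually lossless
    return len(level_errors) - 1  # fall back to the finest level

# A mesh with four detail levels, errors in metres (coarsest first).
errors = [0.5, 0.1, 0.02, 0.004]
print(choose_lod(100.0, math.radians(60.0), 1080, errors))  # far away
print(choose_lod(2.0, math.radians(60.0), 1080, errors))    # close up
```

Distant objects resolve to coarse levels and nearby ones to fine levels, which is why only the data actually needed on screen has to be streamed in.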
Lumen promises fully dynamic global illumination that renders “diffuse interreflection with infinite bounces and indirect specular reflections in huge, detailed environments, at scales ranging from kilometres to millimetres. […] Lumen erases the need to wait for lightmap bakes to finish and to author lightmap UVs—a huge time saving when an artist can move a light inside the Unreal Editor and lighting looks the same as when the game is run on console.”
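The “infinite bounces” in the quote can be pictured as an iterative gather that converges geometrically: each pass adds one more indirect bounce, and the radiance quickly stops changing. A toy radiosity sketch under that framing (the function and the two-patch scene are invented for illustration; this is not Lumen’s actual method):

```python
def gather_bounces(emission, albedo, form_factors, n_bounces):
    """Toy radiosity gather over a list of diffuse patches.
    Each pass adds one more indirect bounce; iterating to
    convergence approximates 'infinite bounces'."""
    radiance = list(emission)
    for _ in range(n_bounces):
        radiance = [
            emission[i] + albedo[i] * sum(
                form_factors[i][j] * radiance[j]
                for j in range(len(radiance)))
            for i in range(len(emission))
        ]
    return radiance

# Two facing grey patches; patch 0 also emits light.
E = [1.0, 0.0]          # emission
rho = [0.5, 0.5]        # diffuse albedo
F = [[0.0, 0.2],        # fraction of each patch's outgoing
     [0.2, 0.0]]        # light that reaches the other patch
for n in (1, 2, 50):
    print(n, gather_bounces(E, rho, F, n))
```

After a handful of passes the values settle on the infinite-bounce solution, which is why truncating or caching the gather (as real-time systems must) costs so little visible accuracy.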
The demo also shows several features already available in Unreal Engine 4.25, including Niagara VFX improvements, Chaos physics and destruction, animation system enhancements, and audio features.
A beta of Unreal Engine 5 is expected in early 2021, with a full release later that year. Find out more on Epic’s blog.
This is very cool. My only question is… how will they manage asset/texture sizes on the new consoles with their ~825 GB storage limit?
Add 1,000 assets like these and you go over 1 TB very easily.
Downloading the game per level/mission might become a reality… or compression/decompression per level. That will probably add to loading times, so I guess expect long loading times?
The rumors are that the next Nvidia GPUs will use tensor cores for faster compression/decompression, but still. The next-gen consoles have actually decreased in disk space (PS4 Pro: 1 TB; PS5: 825 GB). And we’re talking about 10 to 20x more polys and 8K textures… this needs a lot of space anyhow.
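The storage worry can be sanity-checked with back-of-envelope arithmetic. A rough sketch, assuming an uncompressed indexed mesh at ~32 bytes per vertex and 32-bit indices (the function and constants are illustrative; shipping engines compress and stream far more aggressively than this):

```python
def raw_mesh_size_gb(triangles, bytes_per_vertex=32, verts_per_tri=0.5):
    """Back-of-envelope size of an uncompressed indexed mesh.
    Assumes ~0.5 unique vertices per triangle (typical for closed
    meshes) and 32 bytes per vertex (position, normal, UV)."""
    vertex_bytes = triangles * verts_per_tri * bytes_per_vertex
    index_bytes = triangles * 3 * 4          # three 32-bit indices per tri
    return (vertex_bytes + index_bytes) / 1e9

# One 'film-quality' asset with a billion triangles:
print(round(raw_mesh_size_gb(1_000_000_000), 1), "GB raw")
```

At tens of gigabytes per raw billion-triangle asset, a library of a thousand such assets really would dwarf an 825 GB drive, which is why aggressive on-disk compression and streaming only the visible detail are essential.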
The console isn’t going to manage the raw assets; it never had to. It only runs the compiled and optimised data. For streaming geometry, that can happen in the cloud. Check out MS Flight Simulator 2020.
Then you move the bottleneck to your internet speed. Even the new Microsoft Flight Simulator uses highly optimised 3D meshes to be able to run on slower internet connections, and a few things make it possible, like your flight speed not being that high, so it’s “easier” to download assets over time. I totally see this being used instead of offline rendering in some scenarios. But for video games? I doubt we will see it used extensively in this next generation.
With respect, Elio, as good as you are at gathering news from the internet for your very interesting YouTube channel, you are not understanding this from a software dev point of view. We software devs in games are very much further ahead than the typical 3ds Max technician. I have faith this is a game changer and will be very welcome by the community of Unreal and game devs. Thank you, sir.
Well, look at the game Rage: it had the problem of being too much data for a Blu-ray and had to be heavily compressed. Then PC users could download a huge texture pack to make things somewhat better.
The details can also be filled in procedurally or using ML.
I haven’t watched the video yet, but I’m guessing a combination of level streaming and occlusion? It depends on the game design, I suppose. I doubt the old techniques will go away any time soon. Still, it’s nice to finally have decent real-time lighting and unlimited polys. Edit: strange, all the previous replies were only visible after I posted… so yeah, what everyone else said!
Machine learning filling in the gaps for missing polys in sparse meshes: that’s the direction I would go to make up for this. Other than that, it’s another leap in storage capacity.
I would say they use a single format that stores all values in the polygon. The overhead of UV and texture processing could then be reduced.
If you watch the video carefully, Epic talks a bit about how Nanite works: yes, you can start with a very heavy model, but it’s processed at import and polygons are reduced to some level while maintaining model fidelity. We’re going to learn more in the months to come, but I’m pretty sure Epic didn’t design UE5 so you can have just 1–2 games installed at a time on the console.
ok, now it’s getting very serious.
Is it really possible to do real-time GI? I mean, would this tech be able to render an archviz scene out of the box in real time, or has some sort of pre-calculation been done here?
No doubt it looks 100% amazing, but is the calculation process really comparable to offline GI renderers like Corona, V-Ray or Octane? Maybe I’m just getting old 🙂
If you believe what they said in interviews, it’ll be real time without any light bakes, which I don’t think is outside the realm of possibility. There will most likely be approximations or shortcuts being taken.
And we have yet to see whether there is any “post effects” trickery by the engine, or whether things will look believable straight out of the box.
But nonetheless, if what we saw is “easily” achievable with the right assets, without any cumbersome asset prep work, I’d say that level of lighting capability will be enough for a lot of people, especially if it is real time.
Chaos Group is already pretty much doing this with Project Lavina: https://www.youtube.com/watch?v=eW33uFWoI-M
That’s also dual RTX 8000s versus a PS5’s AMD GPU. That’s a big gap in cores, performance, and hardware price. Granted, Lavina is doing more, but what Epic has shown here is more impressive for running on a next-gen console.
Well, then they’ll realize they have already lost the battle.
If you look very closely, the demo does not have very detailed GI. There are several places you would expect bounces to hit, but there aren’t any. So if you want the most accurate GI, stick to an offline renderer 🙂
There are definitely a bunch of interesting things being mentioned besides the obvious eye candy on screen.
I wonder how widely adopted the practices mentioned here will become, and how feasible they are.
Not needing to spend as much time on optimisation, plus a generally faster lighting solution, should shave some time off the more tedious tasks, if everything works as advertised.
Interesting background on the PS5 technical capabilities that help make this possible: https://blog.us.playstation.com/2020/03/18/unveiling-new-details-of-playstation-5-hardware-technical-specs/
There was a noticeable time lag in the secondary bounces when the direct light was moving around the first cave. This is a feature of all temporally based approaches. It could ruin a high-paced shooter (diffuse shadows lagging behind the character), but an RTX 3080 might decrease the lag to barely noticeable. Anyway, it looks amazeballs.
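The lag described above is inherent to temporal accumulation: each frame, a small fraction of the new lighting sample is blended into a history buffer, which smooths noise but trails behind sudden changes. A minimal sketch of that behaviour (the function and blend weight are illustrative assumptions, not Lumen’s implementation):

```python
def temporal_accumulate(frames, blend=0.1):
    """Blend each new lighting sample into a running history
    buffer, as temporal accumulation does. Low blend weights
    smooth noise but lag behind sudden lighting changes."""
    history = frames[0]
    trace = []
    for sample in frames:
        history = history * (1.0 - blend) + sample * blend
        trace.append(history)
    return trace

# A light that snaps from 0 to full intensity: the accumulated
# value takes many frames to converge on the new lighting.
trace = temporal_accumulate([0.0] + [1.0] * 30, blend=0.1)
print(round(trace[1], 3), round(trace[10], 3), round(trace[30], 3))
```

With a 0.1 blend weight the response is only ~65% converged after ten frames, which is exactly the trailing-shadow effect visible when the light sweeps the cave; raising the blend weight reduces lag at the cost of more visible noise.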
I am sure this will work on the artist side.
My only concern is the client side. The people we present our work to usually don’t have RTX cards at this time, and we jump through hoops to get our work displayed. Because of these constraints, we always create half the quality of what we can really create in Unreal.
Dude, you can render out the real-time rendering for offline presentation. And in case you need it real time, just wait 2–3 years: this tech is in its infancy, but hell yes, this is the future, no doubt about it.
Beautiful! Glad to hear the emphasis on usability, as this will be huge for cinematics adoption. Looking forward to some dual 3080 Ti action on this!