FStormRender officially released
Dec 07, 2016 by CGP Staff
Image by Piotr Lusnia
The GPU unbiased renderer developed by Andrey Kozlov is now officially out. FStormRender's features include improved tone mapping, light sampling, a native BRDF model, glare effects, an improved raytracer, an optimized QMC sampler and a fast displacement implementation that's light on memory and can be applied to unlimited surfaces. The 3ds Max plugin is integrated with the Max environment and supports all necessary features. A built-in scene converter helps transfer scenes from the Corona, V-Ray and Octane renderers.
The software is available through rental licensing at €20.00 / month. Support for other 3D applications is already planned. Check out the image gallery and find out more on FStormRender’s website.
CGI Honey on Strawberry by Chakib Rabia, rendered with FStormRender
What is the difference between this and any other render engine on the market?
I mean, with Corona the thing was clear: price and the quality/speed ratio on the CPU.
But in this case?
A rental-only license and a feature-limited GPU render engine – what is the difference between this and iRay, Arion and/or Octane?
Cheers.
http://www.ronenbekerman.com/unbiased-gpu-rendering-octanerender-vs-fstormrender/
That's a very biased and unprofessional article. I don't know anything about the case and am not a user of those renderers, but reading that blog post I felt they had decided to side with FStorm and are promoting it over Octane. They don't have the Octane side of the story, and it's probably not good to make judgements on an ongoing case. Anyway, it made me want to avoid both of these renderers; too much going on behind the scenes. Risky IMHO.
Funny. In the first sentence you say it's very biased and unprofessional; in the next sentence you say that you don't know anything about the case.
First of all, yes, it's biased. And that's also one of the first things I wrote. Surprised? Are people not allowed to write biased articles? I tried to be VERY clear that it is biased, and people who don't want to read it don't have to.
Secondly, my point was not to promote FStorm over Octane. I am still partially an Octane user, and if you had read part 3 you'd know that I dedicated it entirely to “getting started with Octane”.
But let’s get back to the case. First of all, you say you don’t know anything. In contrast with that, I’ve been talking closely to Andrey, AND Jules, CEO of OTOY, over the phone. Yet he’s really nice to talk to and had some great points, he didn’t say anything that would in any way imply that my conclusions are false. So I guess you don’t know anything about the case, but obviously I do.
I certainly do not know everything, and that's also why the article starts with “there could eventually be revealed new facts that change things.” But you clearly missed that part.
You say that the article does not give Otoy's side of the story. No, it doesn't, because there simply is none. They haven't said anything; they have neither acknowledged nor denied the story about them. That leaves us free to draw whatever conclusions we want from the information that does exist. Until they give me any reason to believe otherwise, I will continue to stand strong by my article.
And yes, using either Octane or FStorm IS risky, I wouldn't say anything else. I also would NOT recommend that a big company adapt its whole workflow to either of these renderers. Let me give you a few examples:
1. I made a model bank entirely for Octane during version 1.5. It took me countless hours. When changing to Octane v2.0, they made some big changes to the core, which meant that opening ANY asset made in 1.5 came up totally without materials and render settings. Basically, I, among many others (countless angry threads on their forum), could not use the model banks we had spent so much time creating. Eventually an Octane user got tired of it and made a converter which “kind of” worked, but Otoy themselves never came out with an official converter. They totally screwed us over and never gave us a way out of it.
2. Otoy has been promising Octane Render Cloud, an online rendering service like Rebus. It was meant to be launched with V3. And yes, V3 customers got a bunch of test credits for ORC, but today, a long time later, there's still no way to buy credits and use it commercially.
3. FStorm is now at v1.0 but is still missing a lot of stuff (just like Octane v1.0 did), which makes it unsuitable for anyone other than freelancers. Also, since FStorm has a lawsuit hanging over it, we don't even know if it will exist in two years.
SO, to wrap things up: no, I do NOT recommend that any company migrate fully to any of those renderers, and I've never said I would.
BUT, that’s not what this article is about. This article is only, and I mean only, to give you the information that is publicly available, and based on the information that both parties among with other sources have given us this far, I don’t see why the article would be unprofessional.
Well, my point was that, as someone who doesn't know anything about this case, I was reading it with an open mind hoping to get an insight into what was going on, and I did not feel I got that from the article. If you are going to put an article up on a well-read blog that makes some serious claims, then you need to back it up with something more than “I have talked to these guys, I know them well, and look at their Facebook page, it's got loads of likes.”
You read the article because you wanted to know what's going on, and that's exactly what you got. What's written in the article IS what's going on, and if you've missed it, it IS backed up with sources of publicly known information. All I've done is write a summary.
Since you obviously think that the article is missing something, feel free to fill in the blanks…
Let me also tell you: the FStorm Facebook group has grown to 7,000 users in a year, while Octane's group has grown to 10,000 over I don't know how many years.
Clearly, people who have tried both generally tend to move over to FStorm, or at least take a great interest in it, which kind of strengthens my conclusions about the pros vs. cons.
@Juang3d
A friend of mine who makes mostly architecture renderings is very impressed by this. He says there isn't much setup work compared to V-Ray, and it is fast. Plus, the images show high quality. I was impressed by that myself; it is difficult to explain, but the whole picture makes more sense. I mean, the illumination, the lights and the material illumination seem to be more in sync than in other render engines.
If you come from a V-Ray or Mental Ray background I can understand the hype and the coolness anyone can see in this, but if you come from an unbiased background, already knowing Corona, Maxwell and/or other path tracers, I don't understand it.
In any case, the question remains unanswered: what does this have that makes it different from what already exists on the market?
I don't see anything special in it. I'm not saying it doesn't have something special; I'm saying that I don't see it. And in any case, €20/month for a license with no permanent license option is a no-go for me.
Cheers!
http://blog.boxx.com/2014/10/02/gpu-rendering-vs-cpu-rendering-a-method-to-compare-render-times-with-empirical-benchmarks/
Speed. There are a lot of GPU rendering engines on the market these days. I would venture to say V-Ray RT, Octane and Redshift are at the top of the pack, but FStorm shows some promise, and there will probably be something new by next year.
To each their own. For single users looking to utilize path tracing, GPU rendering is one of the better options. If your workflow incorporates a farm, the speed difference between CPU and GPU will most likely be negligible. However, larger companies are starting to incorporate GPU rendering into their workflow: https://www.redshift3d.com/blog/play-of-the-game-redshift-powers-blizzards-stunning-overwatch-animations
That test is not very convincing… check the comments. It may be empirical, but that does not make it unbiased and statistically valid: he or she is taking one scene and rendering it in one situation, with one set of settings, on one kind of computer. What Xeon was that? What GPU? What V-Ray settings? Etc… not a very good test IMHO.
Of course, GPU rendering is becoming more attractive as the GPU power/hour price decreases, and it will become mainstream in a few years, but for now it is still very expensive and limited: 12GB of VRAM does not fit a scene that needs 64GB of RAM with 4K textures, etc. You can use render engines like Redshift that are out-of-core, that is for sure, but for now I'm on the path tracers' side (personal opinion), and on that side I still prefer well-optimized CPU render engines. Again, personal opinion.
Cheers!
Juang – try Redshift. It completely changed my opinion on GPU renderers. I have massive hundred-million-poly scenes that require 64GB of RAM and take 2-3 hours in V-Ray, but render in less than 10 minutes in Redshift. Most companies in my town are switching now, throwing away their huge CPU render farms and replacing them with a couple of machines with multiple GPUs. The cost savings are enormous. The shift is not in the future, it is happening now.
I understand that, superrune, but call me a bit “special”: I don't like Redshift. I mean, for animation it is fast, but its quality is not what I'm after. I'm not saying it's a bad render engine, it's not, but it is a strongly biased one, in the same manner as V-Ray and Mental Ray. It can be good, but I prefer the more solid solutions for animation that you can only achieve with an unbiased (or nearly unbiased) path tracer, and out-of-core rendering technology in a path tracer is something much more complex, as far as I know.
But I respect what you say; it all depends on your needs. In any case, I'll re-test Redshift soon. From time to time I try to test render engines to keep myself up to date and change things if it's better for what we do.
Cheers!
Probably that, Juang3d – I came from both.
superrune: Do you know why Redshift renders a 64GB poly scene fast, and why it is even possible to render it on the GPU at all?
A GPU cannot have that much memory – the biggest GPU cards I know of have 24GB of VRAM – so it is not possible to render bigger scenes without stripping data (GPU cards cannot share their memory space, so multiple cards don't make it possible to render bigger scenes; GPU renderers can only address each card's own VRAM).
So the problem has been “solved” by these GPU render engines with this “out-of-core” tech, where the CPU strips the “useless” data before sending the scene to the GPU card's memory. The CPU strips data by filtering, cropping and resizing textures, and the same is done for polygons: polygons that are out of the camera view, or that are not important, get stripped. There is no point rendering billions of polygons when the render output is just 3840 × 2160 px, which means you can see at most about 8 million pixels, and therefore roughly 8 million polygons. The CPU reduces the polygon amount until it is small enough for GPU memory, trading against quality. Biased engines also use lots of interpolation, so polygon and texture stripping is not the biggest issue 🙂 – there is already “stripped”/fake information in the final render.
This is my understanding of the GPU “out-of-core” tech from reading articles and from my own poor tests. Sure, it doesn't matter if you cannot see the difference in the final animation. The same could also be done with CPU rendering, which would make it faster and able to render almost whatever your computer's storage can eat, but it would be a rather pointless process and would lower the rendering quality (kind of).
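To put rough numbers on that resolution argument – just a back-of-the-envelope sketch in Python, not how Redshift or any other engine actually works, and the mip_level helper is purely my own illustration:

    import math

    def mip_level(texture_px, screen_px):
        # Each mip level halves the texture resolution; a texture only needs
        # enough texels to cover the pixels it actually occupies on screen.
        if screen_px >= texture_px:
            return 0  # full resolution needed
        return int(math.log2(texture_px / screen_px))

    # An 8K texture on an object covering ~512 px of a 3840 x 2160 frame:
    level = mip_level(8192, 512)
    print(level, 8192 >> level)   # 4 512 -> a 512 px mip is plenty

    # And the polygon bound: the frame has ~8.3 million pixels, so at most
    # that many polygons can be individually distinguishable.
    print(3840 * 2160)            # 8294400

So in that example the engine could keep a 512 px version – 1/256th of the original texture data – without you ever seeing the difference at that size. That is the kind of saving I mean.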
Jama: I am pretty sure that is not how Redshift works. There is absolutely no loss of detail or fidelity in my scenes. As far as I understand, Redshift works by swapping data in and out of the GPU RAM, dropping data onto disk when it's not needed.
Here is an excerpt from their FAQ:
“What is “out of core” rendering?
Redshift has the capability of “out of core” rendering which means that if a GPU runs out of memory (because of too many polygons or textures in the scene), it will use the system’s memory instead. In some situations this can come at a performance cost so we typically recommend using GPUs with as much VRAM as you can afford in order to minimize the performance impact. Certain types of data (like textures) actually work very well with out-of-core rendering. This means that even if your scene uses tens of 4K or 8K textures (i.e. several GB worth of data), you can still expect great rendering performance!”
Hmm.. interesting… GPU rendering doesn't let you use all the memory available across your GPU cards (when you have multiple); it can only use one unified VRAM pool. Even GPU cards with two separate GPU chips and VRAM pools will be used by a GPU renderer as just one of them, not both, because of the architecture: each GPU unit is essentially its own independent system. If you want to render an image, both GPU units have to know the same information, and if both units have their own VRAM, that information must be duplicated for both. Same thing with system memory – you cannot share system memory with an independent system (to my knowledge).
So if this “out-of-core” technology is using the system's memory (RAM), then it will take a performance hit because it becomes CPU rendering (not because of bandwidth, because GPU rendering cannot outsource its memory – at least from what I have read; feel free to google for more information about this). So basically Redshift is just like the Thea renderer – nothing special, it just switches from GPU to CPU on the fly. This is my hypothesis. It is then a biased engine which already “cheats”/fakes rendering quality with interpolation and other techniques, and because a biased renderer is already “cheating”, it could easily use algorithms which strip/crop/filter/recycle the data based on resolution (just pre-calculate on the CPU the maximum pixel amount, then crop and reduce polygon and texture sizes to fit your view into the GPU's VRAM – each frame, as a dynamic process, of course… And I know, it is easier said than done. I only mention it because it would be no more cheating than what biased renderers already do in the worst cases – why fake only some of the information when you can do a lot more without seeing the difference?).
Btw, have you tested Thea Render? It is also very fast, and it is most likely similar to Redshift3D – an “out-of-core” aka hybrid render engine. This is my best guess; sure, I could be wrong, maybe time will tell. But VRAM is not an issue with the newest AMD Pro card (1TB, yup, the GPU card with SSD memory), and VRAM is getting bigger every year, so who knows… maybe we will soon have GPU cards with more memory than our systems have (like AMD's card with its SSD solution 🙂 – no more worries about polygon amounts or texture sizes, and no need for fake calculations or recycled data).
@Jama, I really like the discussion you got going here 🙂
Pretty much every renderer is biased, because there is no such thing as “unlimited” bounces in a render engine. Even Corona, which is awesome, yeah, is biased to an extent.
What I personally like about these so-called “unbiased” render engines is that they hide a lot of the bias by hiding settings and optimizing the ones that are important. The render times are comparable (at least to me) to some of the more biased solutions, and what is really great for me is that I can get the desired result faster.
V-Ray, being the usual suspect among biased renderers, can run in a pretty much unbiased mode, and it can be surprisingly fast too. It even has a progressive option. It has its weaknesses too, like every other renderer out there.
Ultimately, the final render times are not that important to me. With a dual Xeon or good GPUs you are pretty much set in that department, in my book.
Also, out-of-core rendering is not CPU rendering. Yes, the render engine offloads the textures to RAM and that has a performance hit (some “reports” suggest up to 20%), but the reason it works is that the bandwidth (from RAM through the PCIe lanes to the GPU) is apparently sufficient. You are not clogging the lanes with 40GB of data every second anyway…
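To put a rough number on that bandwidth point (a quick back-of-the-envelope sketch; the ~16 GB/s figure is the theoretical PCIe 3.0 x16 rate, and the texture set size is made up):

    # Feasibility check for streaming out-of-core textures over PCIe.
    pcie_bw_gb_s = 16.0   # theoretical PCIe 3.0 x16 bandwidth, ~16 GB/s
    textures_gb = 6.0     # hypothetical texture set that doesn't fit in VRAM

    # Worst case: re-upload the entire set once during a render pass.
    upload_s = textures_gb / pcie_bw_gb_s
    print(f"{upload_s:.2f} s to stream {textures_gb} GB")   # ~0.38 s

And only a fraction of that data is actually touched per bucket anyway, which would explain why the reported hit is on the order of 20% rather than 10x.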
That's my take at least; always take it with a grain of salt, even if I say I researched it 😉 There are waaay smarter people out there who can probably explain in detail what's what.
Awesome discussion people!
Yup, there is no truly unbiased engine out there, but there are very highly biased engines like Redshift3D, Mental Ray etc. Nothing bad about that: if you can't notice the difference (and speed is the thing you need), then it doesn't matter whether you use a highly biased engine or not. But there are also cases where you are going to need unbiased (an engine which tries to avoid cheating as much as possible).
Unlimited bounces – that is an extreme example from real life, and it stops mattering long before a billion bounces, because I have a feeling (yup, just a feeling) that even your eye cannot see the difference in the real world when you try to find a light path that has only 0.0000000001% left of its original value (the original value being relative, of course). For example, if your monitor cannot even display that value, then it is useless. The same goes for rendering colour depth the eye cannot see (the human eye is also a limiting factor; it is useless to render more colours than the human eye can see for the final product – say, 10x more colours than the eye can see, just for the silver screen – that is madness 🙂 . Sure, 10x more data is a good thing for post-production, but it is useless data for the final movie).
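That falloff is easy to put numbers on, by the way (a quick sketch; the 0.5 and 0.8 albedos are just example values I picked):

    import math

    def bounces_until(fraction, albedo):
        # After n diffuse bounces off surfaces with the given albedo,
        # roughly albedo**n of the original energy remains.
        return math.log(fraction) / math.log(albedo)

    # Bounces until only 1e-12 (0.0000000001%) of the energy is left:
    print(bounces_until(1e-12, 0.5))   # ~40 bounces on darker surfaces
    print(bounces_until(1e-12, 0.8))   # ~124 bounces on bright surfaces

So even in a very bright scene, everything past a couple of hundred bounces contributes less than your monitor can display.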
The best engine out there is absolutely Maxwell Render – the most realistic render engine. Damn, it is slow, but only the end result matters if you are a true artist 🙂
Now, if I could just be a good artist I would be happy but no… I suck 🙁
Jama – that is not correct, LuxRender is unbiased and takes no shortcuts.
“LuxRender is a physically correct, unbiased rendering engine. This means that LuxRender does not use tricks to imitate real world behaviour: all calculations are done according to mathematical models based on physical phenomena. In LuxRender we will always make the ‘unbiased’ design choices.”
http://www.luxrender.net/