Wouldn’t it be cool if this video was rendered in real-time by the very same GPU that’s in it?
For years gaming drove the development of video cards. Is Bitcoin the driving force now? Man, would I love to build a 3D PC around three Titan Vs.
Yes, that would be cool!
But you forgot that Nvidia is as greedy a basta** as Autodesk, thus they have disabled SLI on this model!
It’s a professional card – do any pro apps use SLI?
All the ones I’ve seen that use multiple cards – like Photoscan, Redshift, etc. – don’t use SLI anyway.
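For what it’s worth, here’s a minimal sketch (standard CUDA runtime API, standalone example rather than code from any particular renderer) of why SLI is irrelevant for compute – every installed card shows up as its own device, and the app dispatches work to each one directly:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Enumerate every CUDA-capable card in the machine. A GPU renderer
// does the same thing and drives each device independently, so no
// driver-level SLI pairing is needed (or wanted).
int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: %s (%d SMs, %.1f GB)\n", dev, prop.name,
               prop.multiProcessorCount,
               prop.totalGlobalMem / 1073741824.0);
        // A renderer would cudaSetDevice(dev) here and launch its own
        // kernels/streams on each card in parallel.
    }
    return 0;
}
```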
For the computational part that is based on OpenCL, no SLI is needed.
But I didn’t read anything that confirms you can put two of those in one PC – maybe that’s possible.
Or maybe, if you want the true power of multiple GV100s, you have to buy their shiny ready-to-go server with eight of them, for about $150K.
I don’t see any reason why you shouldn’t be able to put multiple cards in.
“Steve Green
I don’t see any reason why you shouldn’t be able to put multiple cards in.”
That was reassuring. Deep down inside I knew that selling my left kidney was for nothing XD
(Anyhow, thanks for the info, Steve.)
It’s not that it’s a professional card – it’s a niche-market card, for AI research.
The 110 TFLOP/s figure only kicks in if you’re using its new ‘tensor cores’ for multiplication of large matrices, which is currently only used in artificial intelligence research (see the sketch at the end of this comment).
For general compute, like rendering & simulation, it’s 10-12 TFLOP/s. That’s about the same as a GTX 1080 or an RX Vega, which you can already buy anywhere for $700 or less.
So if you want value for money, just get a high-end gaming GPU. If you want the ultimate in performance, no matter the cost, get *several* high-end gaming GPUs. Only consider the Titan V if you’re already doing AI stuff.
Of course, developers will eventually find a way to make use of tensor cores for stuff like fluid dynamics and *maybe* for rendering, but it’s pointless buying hardware for software that hasn’t been written yet.
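For the curious, the tensor cores are only reachable through dedicated warp-level matrix intrinsics. Here’s a rough sketch using CUDA 9’s wmma API for a single 16×16×16 half-precision tile – a hand-wavy illustration of the programming model, not production code – which shows why a general rendering kernel can’t simply ‘use’ them:

```cuda
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes D = A*B + C on a fixed 16x16x16 half-precision
// tile. This matrix multiply-accumulate is the only operation the
// tensor cores run (compile for Volta with nvcc -arch=sm_70).
__global__ void tensor_tile(const half *A, const half *B, float *C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::fill_fragment(acc, 0.0f);        // start the C tile at zero
    wmma::load_matrix_sync(a, A, 16);      // load one 16x16 tile of A
    wmma::load_matrix_sync(b, B, 16);      // load one 16x16 tile of B
    wmma::mma_sync(acc, a, b, acc);        // executes on the tensor cores
    wmma::store_matrix_sync(C, acc, 16, wmma::mem_row_major);
}
```

Unless your workload can be reshaped into those fixed-size matrix multiply-accumulates (as deep learning can), the tensor cores just sit idle.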
From the Redshift forum.
“Hey guys,
The other day we got Redshift running on it! It runs faster than the GP100! It still needs lots of optimization but we’re seeing some healthy percentage increases versus the TitanX (between 35% and 65%). Who knows, these might go even higher moving forward.
Thanks
-Panos”
Tensor cores are matrix multipliers, and extremely ineffective at general compute tasks – it would be like running rendering tasks through a calculator. If they really were more effective than the CUDA cores – like you say, about 1,000 percent more effective – why include CUDA cores at all?
Steve,
A rendering speed improvement of 35-65% over the GP100 or Titan X is in line with what you’d get from a current $700 gaming GPU.
If you need more GPU power for Redshift, wait for Nvidia to release a gaming GPU with HBM2 (they’ll have to, if they want to keep up with AMD) and get two.
Hi Alex,
I’m merely repeating what has been said on the forums, in case people want to ask about it.
I’ll be quite happy with my dual 1070s for a good while. I haven’t bought a pro card since a Quadro back in the day, when I worked out it wasn’t worth the extra money.
I can’t ever see myself buying another one. I’m more interested in what (if anything) trickles down to the gaming cards eventually.
Let’s not forget that it doesn’t come with NVLink. I’ve heard a few people talk about how cool this GPU setup is because of NVLink, but… there is no NVLink 🙂
The only problem with Nvidia is that they make the same chip, with modifications, 10 times pricier when you don’t use it for gaming.
Search ‘GeForce vs Quadro’ on YouTube.
I believe it’s like that because Nvidia is kind of a monopoly.
I’d say a Quadro only deserves double the price of a GeForce, at most.
Thank you, that’s what I’m talking about.
I remember a few years back when the drivers from Nvidia could easily be manipulated so that your GTX gaming graphics card got all the Quadro features.
They found a way to block this, but it was totally possible, and we did it on our workstations to easily save money.
I think they did not block anything; they simply recognized that they were the same chip. What you get today in a Quadro is exactly the same as a GTX – the main difference is the amount of RAM. You may also get differences that are actual physical differences in the chip; that’s why a Tesla is not a Quadro.
I totally agree that, as of today, a Quadro is a waste of money. You will end up wanting to upgrade your GPU in a year, but a Quadro costs a lot more than a GeForce, so it’s better to save up for a new GeForce in a year than to spend it all on a Quadro today.
My personal opinion of course.
Cheers.
SLI can impede CUDA performance. These are compute cards that are capable of drawing a high-quality 3D viewport – perfect for a DCC app and a CUDA render plugin such as Iray or Redshift.
A much better investment than a dual-Xeon rig, in my opinion.
Ironically, the Quadro drivers are seldom updated.
Ironically, Quadro cards are a scam.
Also, this is not a Quadro card. Titans are in between.
AI is pretty cool, but what about ray tracing? When will GPUs have dedicated ray-tracing structures? Everything in the real-time graphics world is converging on hybrid rasterization/path-tracing techniques.