Message boards : Number crunching : ATI vs NVIDIA GPUs
I'll try to answer some of the doubts I see in the forum.
ID: 7550 | Rating: 0
> Nvidia seems so far better in GPGPU programming with CUDA and GPGPU hardware support. For a complex code, this is the best solution until OpenCL comes out and ATI adapts their hardware.

That is right. CUDA is currently a much more mature SDK than Stream, so implementing a complex algorithm with CUDA should be much more straightforward than on ATI hardware at the moment, which makes life easier for the developer. Furthermore, some of the hardware functionality needed to speed up certain algorithms is only implemented in ATI's HD4000 series (with support in high-level languages like Brook only slowly arriving), whereas on the nvidia side such features were already available in earlier cards. We will have to wait another 3 months or so before we can start to compare the OpenCL implementations of both manufacturers.

Concerning the single/double precision performance of ATI and nvidia, you are also right. In single precision nvidia and ATI are roughly on par (ATI claims only slightly higher theoretical values). But when going to doubles, nvidia's throughput drops to about 8% of its single precision rate, while ATI is able to sustain about 20% of its advertised single precision peak. That means an HD3850 has about the same theoretical double precision performance as a GTX285, and an HD4800 series card is significantly faster for such workloads.
ID: 7554 | Rating: 0
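To put those percentages into numbers, here is a minimal sketch of the peak-rate arithmetic, assuming roughly the published theoretical single precision peaks for a GTX285, an HD4870 and an HD3850 (the exact figures vary a little by source and clock speed); nothing here depends on CUDA or Stream, it is only the ratio arithmetic behind the 8% vs 20% claim:

```python
# Rough theoretical peak throughput in GFLOPS (single precision).
# Approximate vendor numbers, not measurements.
single_precision_peak = {
    "GTX285 (nvidia)": 1063,   # ~1.06 TFLOPS SP
    "HD4870 (ATI)":    1200,   # ~1.2 TFLOPS SP
    "HD3850 (ATI)":     429,   # ~0.43 TFLOPS SP
}

# Double precision throughput as a fraction of single precision,
# per the ratios discussed above (~8% for nvidia, ~20% for ATI).
dp_fraction = {
    "GTX285 (nvidia)": 0.08,
    "HD4870 (ATI)":    0.20,
    "HD3850 (ATI)":    0.20,
}

for card, sp in single_precision_peak.items():
    dp = sp * dp_fraction[card]
    print(f"{card}: {sp:6.0f} GFLOPS SP -> ~{dp:5.0f} GFLOPS DP")

# Roughly: the HD3850 lands near the GTX285 in double precision,
# and the HD4870 is well ahead of both, matching the post above.
```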
Hi,
ID: 7555 | Rating: 0
Hi, I know. I only wanted to quantify that "much less than single-precision performance". By the way, could you have a look in the credit calculation thread?
ID: 7557 | Rating: 0
> By the way, could you have a look in the credit calculation thread?
ID: 7560 | Rating: 0
Theory is beautiful but life is brutal:
ID: 11050 | Rating: 0
Hi,
ID: 11051 | Rating: 0
Another horrifying statistic:
ID: 11052 | Rating: 0
I have a favour to ask of the administrators: standardize the scoring so that there are no such irrational differences....
ID: 11055 | Rating: 0
Milkyway released their source code and people wrote optimized apps. Some of the improvements made their way into the standard client, but the last batch of optimizations (about half a year old) didn't. The optimized clients for CPUs are about a factor of 4 faster than the stock client. For the stock client the credits are OK, but the optimized ones get proportionally more.
ID: 11084 | Rating: 0
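To make "proportionally more" concrete: if credit is granted per work unit and an optimized app simply finishes work units faster, daily credit scales with the speed-up. A small sketch with made-up numbers, purely to illustrate the mechanism:

```python
# Hypothetical numbers, only to show how credit scales with app speed.
credit_per_workunit = 50.0     # credit granted per completed WU (fixed by the project)
stock_hours_per_wu = 8.0       # stock CPU client
speedup = 4.0                  # optimized client is ~4x faster

for label, hours in [("stock", stock_hours_per_wu),
                     ("optimized", stock_hours_per_wu / speedup)]:
    wu_per_day = 24.0 / hours
    credits_per_day = wu_per_day * credit_per_workunit
    print(f"{label:9s}: {wu_per_day:4.1f} WU/day -> {credits_per_day:6.1f} credits/day")
```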
When can we expect such optimized apps for GPUGRID?
ID: 11101 | Rating: 0
The problem is not the optimization but the multiplication: an ATI card in double precision cannot perform better than an Nvidia card in single precision.
ID: 11102 | Rating: 0
OK, so is it possible to make such optimized apps for GeForce cards using CUDA?
ID: 11103 | Rating: 0
And 2 images:
ID: 11104 | Rating: 0
Thomasz,
ID: 11206 | Rating: 0
In the end, the credits are meaningless. The only way to have any concrete gain from distributed computing is to run Folding@home for the EVGA team, where you can earn EVGA Bucks to put toward the purchase of an EVGA card.
ID: 11278 | Rating: 0
> At GPU-Grid you can already get more credits per time from your nVidia card than at seti, because GPU-Grid is more optimized (and may have more GPU-friendly code)

The CUDA 2.3 drivers and DLLs make it possible to obtain more credits per time than at GPUGRID, or very close to the same. I'm getting a crazy ~10 credits per minute on my single GTX 260 Core 216, which works out to about 14,000 RAC.

Bob
ID: 11280 | Rating: 0
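For what it's worth, that RAC figure is consistent with the per-minute rate: BOINC's recent average credit converges toward the credit earned per day, so ~10 credits per minute comes out to roughly 14,400 credits per day. A quick check (the 10 credits per minute is the figure from the post above; the rest is plain arithmetic):

```python
credits_per_minute = 10.0                      # figure quoted in the post above
credits_per_day = credits_per_minute * 60 * 24
print(f"{credits_per_day:.0f} credits/day")    # -> 14400, i.e. a RAC of about 14,000
```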
> In the end, the credits are meaningless. The only way to have any concrete gain from distributed computing is to run Folding@home for the EVGA team, where you can earn EVGA Bucks to put toward the purchase of an EVGA card.

You imply that "not being meaningless" equals "concrete gain", something which you seem to define as "getting something of material value" (or something like that). For me, I'd say I get lots of concrete gain from DC. It's a hobby, and I can enjoy spending time on it whenever I want to (and other duties permit it). That's nothing you can touch, but it's a very real "gain" for me.

> ... but in the end the point totals only exist to encourage people to participate. 10 points a day and 100,000 points per day are really the same thing.

The credits also reflect the amount of contribution, so they also exist to show people how good their systems are at what they're doing. You're right that by itself 10 or 100,000 credits doesn't mean anything, as it could be any arbitrary number or scaling factor. However, there's not just one of these numbers. The different amounts of credit enable comparisons between systems and projects (how valid those comparisons are is another question), and 10 credits no longer equal 100,000 once your co-cruncher gets 1,000.

MrS
____________
Scanning for our furry friends since Jan 2002
ID: 11635 | Rating: 0
Explain this paradox to me...
ID: 11668 | Rating: 0
Which has what to do with "ATI vs nVidia GPUs"?
ID: 11676 | Rating: 0
> In the end, the credits are meaningless. The only way to have any concrete gain from distributed computing is to run Folding@home for the EVGA team, where you can earn EVGA Bucks to put toward the purchase of an EVGA card.

> Which has what to do with "ATI vs nVidia GPUs"?

Good question. If we want to be so pure, should we start another thread?
____________
POLISH NATIONAL TEAM - Join! Crunch! Win!
ID: 11679 | Rating: 0
Are there any new developments on the ATI GPU front for GPUGRID?
ID: 12983 | Rating: 0
> Are there any new developments on the ATI GPU front for GPUGRID?

GDF said there would be ... I don't know which thread, oops yes I do ... it's around here someplace ... Einstein hinted that they have OpenCL in the works too, but it is lagging behind the CUDA version they are working on ... I don't know that there will be a mad rush ... or that this will lead to Mac OpenCL apps at the same time ... but I can hope, because I have a Mac that I could run GPUs in too ...
ID: 12985 | Rating: 0
You nailed it, GDF, on what I was going to post.
ID: 17268 | Rating: 0
And SETI Beta is also testing a MultiBeam app, rev177 & rev.234(?), and a
ID: 21234 | Rating: 0