
Message boards : Wish list : GPU Comparation

Profile Damaraland
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Level
Pro
Message 13775 - Posted: 3 Dec 2009 | 19:17:38 UTC
Last modified: 3 Dec 2009 | 19:18:10 UTC

I wish there were a statistics page with a GPU comparison. It wouldn't be too hard to do:
- Make a "standard" 1-to-5-minute work unit
- Send it to every volunteer's computer
- Compare times against CPU, memory, GPU, ...
- Run SQL over the results to get the mean, standard deviation, ...
- A few graphs...

With little effort it would help people like me, who are going crazy trying to decide which GPU to buy...
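
A minimal sketch of the statistics step proposed above, assuming the timings from the "standard" unit land as (gpu_model, runtime) rows in a SQLite table; the table name and the sample timings are placeholders for illustration, not real measurements:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (gpu_model TEXT, runtime_s REAL)")
# Placeholder timings for illustration only.
conn.executemany(
    "INSERT INTO results VALUES (?, ?)",
    [("GeForce 9800 GT", 312.0), ("GeForce GTX 275", 148.0),
     ("GeForce GTX 275", 151.5)],
)

# Mean and standard deviation per GPU model (SQLite has no STDDEV
# built-in, so the variance is computed as E[x^2] - E[x]^2).
query = """
    SELECT gpu_model,
           COUNT(*),
           AVG(runtime_s),
           AVG(runtime_s * runtime_s) - AVG(runtime_s) * AVG(runtime_s)
    FROM results
    GROUP BY gpu_model
"""
for model, n, mean_s, var in conn.execute(query):
    print(f"{model}: n={n}, mean={mean_s:.1f}s, stddev={max(var, 0.0) ** 0.5:.1f}s")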

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Message 13781 - Posted: 4 Dec 2009 | 18:43:15 UTC - in response to Message 13775.

You don't need all that.
For Nvidia GPUs, the peak FLOPS is a very good estimate of the speed.
Just look up the FLOPS of the different Nvidia cards on Wikipedia.
(The best single-GPU card is the GTX275, from my point of view.)
gdf
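
As a worked example of this rule of thumb: for GT200-class cards, the theoretical peak is shaders x shader clock x 3 FLOPs per cycle (dual-issue MAD plus MUL). A quick sketch using the commonly published GTX275 specs:

# Theoretical peak for a GT200-class card: each shader can retire a
# multiply-add plus a multiply per cycle, i.e. 3 single-precision FLOPs.
def peak_gflops(shaders, shader_clock_ghz, flops_per_cycle=3):
    return shaders * shader_clock_ghz * flops_per_cycle

# GTX275: 240 shaders at 1.404 GHz -> 1010.88 GFLOPS,
# the same figure quoted in the table later in this thread.
print(peak_gflops(240, 1.404))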

Profile Damaraland
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Level
Pro
Message 13842 - Posted: 8 Dec 2009 | 20:09:20 UTC - in response to Message 13781.
Last modified: 8 Dec 2009 | 20:28:04 UTC

Yes, but I wonder about the effect of main memory and the CPU. Right now I'm building a computer with 3 GPUs.
For the price of one GTX275 you can buy two 9800 GTs. I guess that even though the GPU does most of the computing, the mainboard and the memory size and type have an important effect.

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 13859 - Posted: 9 Dec 2009 | 23:02:32 UTC - in response to Message 13842.

Stay away from the 9800 GTs!

Do some reading:
http://www.gpugrid.net/forum_thread.php?id=1506

Get a GT200b or better card, or don't get one at all!

Profile Damaraland
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Level
Pro
Message 13883 - Posted: 11 Dec 2009 | 23:35:02 UTC - in response to Message 13859.
Last modified: 12 Dec 2009 | 0:13:45 UTC

I'm not really sure where the problem is. At SETI CUDA I have seen many posts from people with errors. CUDA 3.0 is beta. I also guess this "home" hardware is not 100% prepared to run 24/7. But if you buy a server GPU, the price rises to a minimum of 300 € or more.
In the end I bought 3 different GPUs. I will run them on the same motherboard, and then I can tell you whether there's a significant difference.

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 13912 - Posted: 13 Dec 2009 | 20:07:40 UTC - in response to Message 13883.

When you ask for advice, perhaps you should listen to it!

Profile Damaraland
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Level
Pro
Message 13926 - Posted: 14 Dec 2009 | 19:00:58 UTC - in response to Message 13912.
Last modified: 14 Dec 2009 | 19:10:14 UTC

When you ask for advice, perhaps you should listen to it!

Right, but before making a statement you should have all the information.
First, I had already bought the cards.
Second, I have a meeting next month with an expert from a company about using this technology in 30 computers for business. 30 x 300 is a lot more than 30 x 100. After talking with him I think I'll have a professional point of view.
Third, there are a lot of posts here and at SETI that say different things. I'm getting confused by all the opinions. I prefer to test the cheap option before the expensive one.

Just playing with science.

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 13928 - Posted: 14 Dec 2009 | 19:42:53 UTC - in response to Message 13926.

WRT buying cards that can participate at GPUGrid, I would much rather you chose one GTX275 than three 9800 GT cards!
First reason: the GTX275 works 100% here.
Second reason: the 9800 GT will not work well with all tasks and you may get more failures than successes.
Third reason: even if the 9800 GTs managed to get through all the tasks, and they won't, three cards would do less work than one GTX275. In fact it would take four 9800 GT cards to come close to the performance of one GTX275.
Fourth reason: if you have 3 cards, there is more chance that one will fail.
Fifth reason: three 9800 GT cards use more electricity.
Sixth, and most important reason: the experts here are saying what will work here.

What is the point in retrospectively asking the IT experts here what you should do, if you have already spent the money?

Profile Damaraland
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Level
Pro
Message 13942 - Posted: 15 Dec 2009 | 9:08:21 UTC - in response to Message 13928.
Last modified: 15 Dec 2009 | 9:09:02 UTC

The 9800 GT will not work well with all tasks and you may get more failures than successes.

I didn't know this.

What is the point in retrospectively asking the IT experts here what you should do, if you have already spent the money?

The money I've spent so far is just for testing. Actually, he is not an IT expert; he is a CUDA expert. I am an IT expert.

What I want from him is to understand why there are so many failures with these cards. Is it a software or a hardware failure? Does it really depend on the model?

Third reason: even if the 9800 GTs managed to get through all the tasks, and they won't, three cards would do less work than one GTX275. In fact it would take four 9800 GT cards to come close to the performance of one GTX275.

According to this and to $/GFLOPS (I don't care about credit), that is not exactly true (see the table below):
GTX275 = 0.21 $/GFLOPS
GeForce 9800 GT = 0.12 $/GFLOPS

Fourth reason: if you have 3 cards, there is more chance that one will fail.

I doubt this statement. If you have 3 cards, it is much less probable that all 3 of them break (this is a statistical fact). I do wonder whether a task is shared between the 3 cards, but as far as I have seen it is not.

Sixth, and most important reason: the experts here are saying what will work here.

For sure, if you want a high-end card, that one is the best. But the post I followed was this one, which analyses the $/GFLOP. I made a few calculations (sources: Wikipedia for the FLOPS, plus prices):
Model ------------------> $ -----> GFLOPS ---> $/GFLOPS
GeForce 9800 GT --------> 60 ----> 504 ------> 0.12
GeForce GTS 250 --------> 140 ---> 705 ------> 0.20
GeForce GTX 260 --------> 150 ---> 715 ------> 0.21
GeForce GTX 275 --------> 210 ---> 1010.88 --> 0.21
GeForce GTX 295 --------> 470 ---> 1788 -----> 0.26
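
The $/GFLOPS column can be recomputed directly from the price and GFLOPS columns; a quick sketch using only the table's own figures (decimal commas normalized to points):

# Price and peak GFLOPS as posted in the table above.
cards = {
    "GeForce 9800 GT": (60, 504),
    "GeForce GTS 250": (140, 705),
    "GeForce GTX 260": (150, 715),
    "GeForce GTX 275": (210, 1010.88),
    "GeForce GTX 295": (470, 1788),
}
for model, (price_usd, gflops) in cards.items():
    print(f"{model}: {price_usd / gflops:.2f} $/GFLOPS")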


I still think all the information around the forums is too messy and too complicated to read through. I think it would be much better for a few people to build a GPUGrid wiki and organize all the information there... as it is, too much time is wasted (for me this matters much more than the cost of electricity and the cards).

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Message 13950 - Posted: 15 Dec 2009 | 17:12:45 UTC - in response to Message 13942.

In practice, there is a bug in the CUDA FFT which Nvidia does not have time to fix for older cards like the 9800. The cheapest card that is error-free so far is the GTX275.

GDF

Profile Damaraland
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Level
Pro
Message 13952 - Posted: 15 Dec 2009 | 17:54:53 UTC - in response to Message 13950.
Last modified: 15 Dec 2009 | 17:58:58 UTC

In practice, there is a bug in the CUDA FFT which Nvidia does not have time to fix for older cards like the 9800. The cheapest card that is error-free so far is the GTX275.

GDF


If I had known this before, I would have saved money; I spent many hours reading the posts trying to find this info :o( :o( :o(

Thanks GDF... This confirms one of my principles: "always ask the person who really knows the answer".

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 13959 - Posted: 15 Dec 2009 | 21:44:46 UTC - in response to Message 13952.

A GTX260 216sp with a 55nm GT200 revision B core will work perfectly too.
It is not quite as powerful as a GTX275, but it is less expensive. Just watch out for all the variants of this card: some have a GT200 core and some are 65nm. Don't get one of those.

You would be better off getting a GT 240 than a 9800 GT!
Once you add in the correction factor, they work out to have the same performance. The GT 240, however, should be more reliable in terms of completing tasks, is slightly more future-proofed in terms of technology, and will use less electricity!

As both the GT 240 and the 9800 GT cost around $100, over time your total expenditure would be less and your total contribution more if you choose wisely.

Profile Damaraland
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Level
Pro
Message 13961 - Posted: 15 Dec 2009 | 23:05:24 UTC - in response to Message 13959.
Last modified: 15 Dec 2009 | 23:18:21 UTC

Thanks very much; I've finally seen the light.

Some have a GT200 core and some are 65nm. Don't get one of those.

I can't even find that info on NVIDIA's web pages, nor on the box, nor on the card I have in my hands (I'm not going to open it up and void the warranty).

I've had enough of this. I'll get one GTX275 and forget about it. I recommend that people who get into the same situation make that decision first (just watch your power supply; the card draws over 200 W).

I also recommend the admins completely rewrite or delete the article on the main page. It just gave me a terrible headache!!

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 13965 - Posted: 16 Dec 2009 | 14:26:23 UTC - in response to Message 13961.

I agree that GPU information is hard to come by. It can be found, but it takes a lot of individual effort and learning. Not good for newcomers to the project.
Having struggled through this myself I put together a list of suitable cards (and identified common unsuitable cards) with their performance details and started a thread - my attempt at tidying things up so that others would not have to find out the hard way.
Fortunately I was able to get enough information, with plenty of help from others, to make it worthwhile at the time. But unfortunately, recent developments have undermined these efforts and I have been trying to understand what has been going on.
So here is my take on things:

GPUGrid first stopped supporting G90 cards, which is understandable given their design inefficiencies relative to the newer cards (with G92 and GT200 cores) and the continuing development of CUDA.
It now looks like the project is pulling away from G92 cards. However, this seems less by intent than the product of several uncontrollable factors; CUDA is written by NVidia, not by GPUGrid. As CUDA develops, the dependent science projects are pulled along. While most of the developments are good, some result in a loss of support for older cards. Obviously, if NVidia no longer makes the GPU chips, it will not be writing code to support them! So the GPUGrid project is faced with a fundamental control problem. At the minute, the information on which cards work is not only scattered and difficult to find, understand and confirm, but much of what has been said is now also out of date.
G92-based cards should still work, but there are now large numbers of errors being reported, making them inefficient. There are several differences between GPUs that determine whether or not they are compatible here. GPU architecture can change dramatically with only slight changes to the card's name. This is down to NVidia, and it makes choosing a card for this project difficult.
For example:
The GTS240 uses a 55nm G92b core.
The GTS250 uses either a 65nm G92a or a 55nm G92b core.
The GTX260 and GTX280 use either a GT200 or a GT200b core,
while the GTX275 uses a GT200b core.

The mobile cores are even more confusing, and it is generally not recommended to use them.

So this is my general opinion.
Don't get a G92 GPU. They appear to have poor task completion rates, and I can't see this improving in the long run. In the short term, perhaps shorter tasks will improve things.
Don't get a GT200 - mainly because you can get a GT200b, which has better task completion rates.
If you must buy a new low-end to mid-range card, make sure it has a new core (GT215, GT216 or GT218) and not a G92 or G96 (no idea how the latter performs).

I can try to put together an update to my recommended list, but it will take a while, and it would need to be checked over and corrected by others before being put on an opening-page link.
It would also have to be updated again soon. CUDA 3 has been released, so no doubt tasks will be written for it soon. GT300 cards are due to turn up within the next 14 weeks, and top-end ATI cards might be usable soon-ish too.

When I put something together I will post it here,
http://www.gpugrid.net/forum_thread.php?id=1150

Profile Damaraland
Joined: 7 Nov 09
Posts: 152
Credit: 16,181,924
RAC: 0
Level
Pro
Message 13966 - Posted: 16 Dec 2009 | 15:06:13 UTC - in response to Message 13965.

I agree with you, but I think the post should be easy enough that anyone can understand it without too much effort.

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 13968 - Posted: 16 Dec 2009 | 21:11:19 UTC - in response to Message 13966.

Well, it's there, and it's simple. Hopefully it is detailed enough to act as a guide.
It is up to the buyer to actually find out all the details from the seller. I can't do that for people, just tell them what they should get. http://www.gpugrid.net/forum_thread.php?id=1150

Basically, I did not recommend any G92 cards or G200 (first edition) cards. This should avoid most problems in one sweep. I left out mobile and OEM cards, as you can't shop for these, and you should not really be using a laptop video card anyway.
This does not mean those cards will not get results; many will. But I want to keep it simple and recommend cards that are most likely to get good results and be future-proofed WRT this project. I can't see G92 cards being used here for the whole of 2010, and I expect the GT200 cores from the first fab run had problems that were subsequently overcome with the GT200b.

Profile robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 755,434,080
RAC: 186,180
Level
Glu
Message 14146 - Posted: 6 Jan 2010 | 22:17:50 UTC - in response to Message 13950.
Last modified: 6 Jan 2010 | 22:21:37 UTC

In practice, there is a bug in the CUDA FFT which Nvidia does not have time to fix for older cards like the 9800. The cheapest card that is error-free so far is the GTX275.

GDF


Does Nvidia make the source code for the FFT routines available? I'm thinking of learning enough CUDA to have a try at fixing it. I could then test it on my 9800 GT.

Also, how practical would it be to make one CUDA software build for the recent Nvidia cards with the CUDA 2.3 SDK, and a separate build for the G90 cards with an older SDK that still supports them?

I am NOT able to do hardware work on my computers any more; the company I have found that will do it for me (HP) does not offer computers with anything higher than the GTX260.

In the meantime, would it be practical to offer two separate lists of tasks: one that will run on a G90, and one for which a G90 is not reliable enough?

Profile robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 755,434,080
RAC: 186,180
Level
Glu
Message 15452 - Posted: 26 Feb 2010 | 1:30:52 UTC - in response to Message 13942.
Last modified: 26 Feb 2010 | 2:08:31 UTC

The 9800 GT will not work well with all tasks and you may get more failures than successes.

I didn't know this.

I'm not sure that's true for all 9800s. My 9800 GT appears to succeed on most of the workunits it gets. However, I don't know whether the server is sending it only the type of workunits it can handle. I believe it uses a G92b rather than a G92.

However, I've seen some articles saying that the GT240 is now more cost-effective for GPUGRID than the GTX275 or the 9800 GT, so I'm thinking of ordering some. For example, two GT240s can run on about as much power as just one 9800 GT, so if there are enough card slots I can probably double my GPU computing power without replacing the power supply.

If you have fewer empty slots but plenty of power supply and cooling capacity, though, the GTX275 is probably still the better choice. It's also a better choice if GPUGRID is planning to start requiring cards with compute capability 1.3, but I don't remember seeing that mentioned.

Using the cards for something other than GPUGRID may require choosing ones with more memory, though, even though those are less cost-effective for GPUGRID.

Another thing: the GT240 is more recent than the 9800 GT, so Nvidia is likely to continue offering good support for it for longer.

Profile skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Message 15465 - Posted: 27 Feb 2010 | 1:09:09 UTC - in response to Message 15452.

Generally I would say anything below a GT200b could have reliability issues, and I would recommend getting a newer card over an older card every time. However, it is very difficult to say "don't use this card or that card", because there are so many versions, and some seem to work while others just don't. I expect there is a big difference between a G92 and a G92b; the G92b is a revision, and no doubt it overcame issues with the previous version.

The GT240 is an easy card to use: no special power requirements and not bulky, so it's compatible with most systems. It is also a flexible card with modern technologies; most come with HDMI, a 15-pin VGA connector and a digital connector. The better ones obviously have GDDR5, 1GB of RAM and a slight factory overclock, but I found that even the 512MB DDR3 cards still perform quite well (there is about a 15% performance difference across the range). One of the cards I have (a Gigabyte GV-N240D3-1GI, which has a bigger fan than most) uses 1GB of DDR3 with a slightly factory-overclocked GPU, so it actually matches up well to a natively clocked GDDR5 card. It is worth noting that if you get a factory-overclocked card, you can usually overclock it slightly further than a card with default factory settings, but overall there is little difference. So it is certainly not worth paying 30% more for a card that is slightly overclocked.

The GT240 can't be used on Milkyway; GT240s are CC1.2, and Milkyway requires CC1.3 cards (GTX260 and up). But that's their loss, and a naive move in my opinion; GT240s are much closer to CC1.3 cards than to CC1.1 cards (they use a GT215 core and benefit from similar improvement factors). Given that only between 5,000 and 8,000 Fermi cards will be released, and perhaps only a few will end up in the hands of GPUGrid crunchers, I don't think GPUGrid can afford to move away from CC1.2 cards any time soon, and if they do, they are likely to move away from CC1.1 first. Although there might be new mid-range NVidia cards in the summer or autumn, they may be anything from CC1.2 to some new, as yet unknown, Compute Capability rating.
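
A minimal sketch of this compute-capability gating, using only the figures given in this thread (GT240 = CC 1.2; GTX260 and up = CC 1.3); the card list and thresholds are illustrative assumptions:

# Compute capability per card, as stated in this thread.
COMPUTE_CAPABILITY = {
    "GT 240": (1, 2),
    "GTX 260": (1, 3),
    "GTX 275": (1, 3),
}

def eligible(card, min_cc):
    # Tuple comparison handles (major, minor) ordering correctly.
    return COMPUTE_CAPABILITY[card] >= min_cc

# Milkyway needs double precision, i.e. CC >= 1.3; GPUGrid here accepts CC >= 1.2.
print([c for c in COMPUTE_CAPABILITY if eligible(c, (1, 3))])  # Milkyway-capable
print([c for c in COMPUTE_CAPABILITY if eligible(c, (1, 2))])  # GPUGrid-capable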

I don’t think that there are any NVidia based GPU projects that require more than 512MB RAM. GPUGrid uses GPU RAM proportionally to the number of GPUs and Shaders, and is task dependent. On my GT240 cards, tasks tend to use between about 250MB and 350MB. Overclocking does not change this. The rise in memory controller usage from 28% to 30% (about 7% increase) makes sense given that the memory is being called faster.

Of course, if your system is in a large enough case and has a PCIe2 slot and a good power supply, a GTX275, if you can find and afford one, would do almost 3 times the work of a GT240. Mind you, it will also use slightly over 3 times the electricity.
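
A rough check of that last claim, assuming the commonly published peak figures and TDPs (the wattages are assumptions, not figures from this thread):

# Peak GFLOPS (shaders x shader clock x 3, as earlier in the thread)
# and assumed TDP in watts for the two cards compared above.
cards = {
    "GT 240": (385.9, 69.0),
    "GTX 275": (1010.88, 219.0),
}
work_ratio = cards["GTX 275"][0] / cards["GT 240"][0]
power_ratio = cards["GTX 275"][1] / cards["GT 240"][1]
print(f"work ratio: {work_ratio:.1f}x")    # ~2.6x, "almost 3 times the work"
print(f"power ratio: {power_ratio:.1f}x")  # ~3.2x, "slightly over 3 times"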

Profile robertmiles
Joined: 16 Apr 09
Posts: 503
Credit: 755,434,080
RAC: 186,180
Level
Glu
Message 15479 - Posted: 27 Feb 2010 | 16:00:03 UTC - in response to Message 15465.

The GT240 can't be used on Milkyway; GT240s are CC1.2, and Milkyway requires CC1.3 cards (GTX260 and up). But that's their loss, and a naive move in my opinion; GT240s are much closer to CC1.3 cards than to CC1.1 cards (they use a GT215 core and benefit from similar improvement factors). Given that only between 5,000 and 8,000 Fermi cards will be released, and perhaps only a few will end up in the hands of GPUGrid crunchers, I don't think GPUGrid can afford to move away from CC1.2 cards any time soon, and if they do, they are likely to move away from CC1.1 first. Although there might be new mid-range NVidia cards in the summer or autumn, they may be anything from CC1.2 to some new, as yet unknown, Compute Capability rating.


I've read enough on the Milkyway site to find that their science cannot get useful results without doing large parts of the calculations in double precision, so they have no real choice but to exclude cards that can't handle double precision.

As far as I can tell, it would be easier for me to afford a GTX275 than to get it installed. I'm also looking into cards with more memory, mainly to have a better chance of participating in future GPU BOINC projects closer to the types of medical research I'm most interested in.
