Message boards : Number crunching : Credits
Just curious as to how the credits are applied to WUs.
ID: 34940
Yes, there appears to be a credit leak!
ID: 34942
If the factors are as mentioned, yes, there is a credit leak.
ID: 34951
Credit per WU is calculated based on the runtime of a WU on one of our machines. We run a WU on one of our GPUs, check the time required to run, and multiply by a credit/time rate. There is also the bonus for long tasks, and we usually round the credits up. Additionally, we sometimes submit WUs with the wrong credits because of a small mistake in an input file (but we usually catch those).
ID: 34964
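A minimal sketch of that calibration, in Python (the credit/time rate and the rounding granularity here are assumptions for illustration, not GPUGrid's actual numbers):

```python
import math

# Minimal sketch of the calibration described above. The rate and the
# rounding step are illustrative assumptions, not the project's values.
CREDIT_PER_SECOND = 1.5  # assumed credit/time rate fixed by the project

def base_credit(reference_runtime_s: float) -> float:
    """Credit for a WU, set from its runtime on a project reference GPU."""
    credit = reference_runtime_s * CREDIT_PER_SECOND
    return math.ceil(credit / 50.0) * 50.0  # "we usually round the credits up"

# A WU that takes 60,000 s on the reference GPU:
print(base_credit(60_000))  # 90000.0 under these assumed numbers
```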
Thanks for the reply/info.
ID: 34987
Hi! | |
ID: 35513
Yes: 150% for WUs returned inside 24 h, 125% for WUs returned between 24 h and 48 h, and 100% thereafter (unless you get a resend and there is an issue: you get half credit if the original WU is returned before you can complete the resend).
ID: 35514
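Expressed as code, those tiers look like this (the thresholds come straight from the post above; the function name is just illustrative):

```python
def bonus_multiplier(turnaround_hours: float) -> float:
    """Return-speed bonus, per the tiers quoted above."""
    if turnaround_hours <= 24:
        return 1.50   # returned inside 24 h
    if turnaround_hours <= 48:
        return 1.25   # returned between 24 h and 48 h
    return 1.00       # thereafter

# A long WU worth 100,000 base credits, returned in 20 h:
print(100_000 * bonus_multiplier(20))  # 150000.0
```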
Yes indeed. Here is the FAQ: http://www.gpugrid.net/forum_thread.php?id=2572 | |
ID: 35515
OK, thank you for the link. | |
ID: 35524
> But it's a pity, I need 25 h to give back a Noelia with my NVIDIA GeForce GTX 650 Ti...

That looks a little slow. I have two different GTX 460s with 1 GB memory, running under XP with the 334.89 drivers, that take only around 16-17 hours on the last Noelia WUs they ran.
Card 1: http://www.gpugrid.net/result.php?resultid=7837842 and http://www.gpugrid.net/result.php?resultid=7812020
Card 2: http://www.gpugrid.net/result.php?resultid=7813942 and http://www.gpugrid.net/result.php?resultid=7811015
If set up dedicated 24/7, crunching even when the computer is in use, I would think the 650 Ti could do a <24 h turnaround.
ID: 35530
[AF>Libristes] cottesloe, your last Noelia WU took 81,042.66 s to run. That is less than 24 h (86,400 s). If you reduce your cache (runtime buffer) to 0.01, that should allow you to complete Noelia WUs inside 24 h. The other WUs take longer, around 90K s. If you reduce the number of CPU tasks you run, it might decrease GPUGrid WU runtime, but that's just a guess. Other than the above, or upgrading the card, all I can suggest is to increase the fan speed, which would only prevent the card from downclocking due to heat (probably not an issue), or try to overclock the card (you're on your own there).
ID: 35538
> If you reduce your cache (runtime buffer) to 0.01, that should allow you to complete Noelia WUs inside 24 h.

Remember that you also have to allow extra time for downloading and uploading, over and above the recorded elapsed (computing) time, if the entire "issue and return" cycle is to fit within the 24 hours.
ID: 35544
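For the GTX 650 Ti above, the 24 h window arithmetic works out as follows (the runtime and the 0.01-day buffer are from the posts; the transfer allowance is an assumed figure):

```python
# Will the GTX 650 Ti fit the 24 h bonus window? Runtime and buffer are
# from the posts above; the transfer time is an assumed placeholder.
compute_s  = 81_042.66        # last Noelia runtime on the GTX 650 Ti
buffer_s   = 0.01 * 86_400    # suggested cache setting, in seconds
transfer_s = 15 * 60          # assumed download + upload allowance

total_s = compute_s + buffer_s + transfer_s
print(f"{total_s:.0f} s, fits inside 24 h: {total_s < 86_400}")
# -> 82807 s, fits inside 24 h: True
```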
> If you reduce the number of CPU tasks you run, it might decrease GPUGrid WU runtime, but that's just a guess.

It should be a good guess. I'll try this next week... Bye
ID: 35558
I left one CPU free and I can see the difference...
ID: 35610
It was 120,000 credits for long runs about a week ago, and now I rarely see 120,000 credits. Every time I get my rig going well, things change for the worse. 50,000 by 50,000, it will take a while to climb the ladder.
ID: 35981
> It was 120,000 credits for long runs about a week ago, and now I rarely see 120,000 credits.

You should take into account the running time of the given workunit. Shorter workunits generate less credit, but they do it more often. However, there is a small (~10%) variation in the credit/time ratio of the workunits.
ID: 35999
I would prefer a metric in which the credits per unit of computing were comparable to other BOINC projects that don't use GPU computing. | |
ID: 36091
> I would prefer a metric in which the credits per unit of computing were comparable to other BOINC projects that don't use GPU computing.

We all would. But it's much harder to measure (or calculate) the actual flops done by a GPU task than by a CPU task, because the GPU does many calculations simultaneously. As there is no standard for calculating the actual flops done by a GPU task (and probably there couldn't be one at all), different projects give different amounts of credit per unit of GPU time. That makes it hard to compare different GPU projects (especially with CPU projects), so a cruncher should not judge the amount of scientific contribution only by the credits given for GPU work. It's like comparing apples to bananas. We are crunching to aid research, not for the credits, right? After all, you can't do anything with the credits earned at any project; they only serve to compare contributions between the users of a given project, not between different projects. However, the much larger amount of credit earned by a GPU task correctly reflects the computing-capability ratio between GPUs (parallel computing) and CPUs.
ID: 36093
Measuring CPU performance is fundamentally flawed. The actual usefulness/performance is related to both the CPU and the task. Nowadays some instruction sets offer significant improvements for some work types over older instruction sets, but others don't. Then there is the supporting hardware (RAM/disk/bus/chipset): faster RAM significantly improves performance for some WUs... The GPU is different again, and trying to compare NVidia and AMD is a waste of time; they are designed to do different things.
ID: 36112
> I would prefer a metric in which the credits per unit of computing were comparable to other BOINC projects that don't use GPU computing.

Be careful what you wish for: 'CreditNew' could come to the projects using GPUs too, if Dr. A has his way. That would most likely mean a LARGE drop in credits for most workunits at most projects. 'CreditNew' is Dr. A's idea for synchronizing credits across all projects, so that equivalent work gives equivalent credit at EVERY project. That would mean no more 133,950.00-credit units!!
ID: 36140
It's not pretty, and it still needs some work, but the present opt-out system facilitates a massive gulf in credits between some projects. This fundamentally undermines the concept of a credit system. I think it's been discussed elsewhere, and over a long time, but for GPU projects it would need to be based on app complexity, utilization, power usage and GFLOPS, and it would have to take into account the relative performance of a similar app on a CPU. Fortunately, all of these things can be measured, with the exception of complexity, which could be estimated and agreed upon. There are lots of reasons to crunch, and credits generally aren't too high on the list, but the worst reason now is probably project-wide BOINC credits, because they aren't uniform and therefore misrepresent contribution.
ID: 36164
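Purely as a sketch of how the factors listed above could combine (the weighting, scale and function names are assumptions, not any agreed BOINC standard):

```python
def normalized_credit(gflop_done: float, utilization: float,
                      complexity: float = 1.0) -> float:
    """Illustrative only: credit proportional to useful work done,
    scaled by an agreed complexity factor. Not an adopted BOINC
    formula; the credit-per-GFLOP scale is made up."""
    return gflop_done * utilization * complexity / 1_000.0

def gflop_per_watt_hour(gflop_done: float, power_watts: float,
                        hours: float) -> float:
    """One way to fold power usage into cross-project comparisons."""
    return gflop_done / (power_watts * hours)

# A GPU task doing 50,000 GFLOP of work at 85% utilization:
print(normalized_credit(50_000, 0.85))                      # 42.5
print(gflop_per_watt_hour(50_000, power_watts=150, hours=20))
```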
> ...for GPU projects it would need to be based on app complexity, utilization, power usage and GFLOPS, and it would have to take into account the relative performance of a similar app on a CPU.

AGREED. The current credit system only works within each project, not when comparing credits between projects.
ID: 36180
It's tricky enough keeping credits consistent inside this project. I don't even want to imagine the mess of comparing to others.
ID: 36183
> It's tricky enough keeping credits consistent inside this project. I don't even want to imagine the mess of comparing to others.

Don't even concern yourself with such issues; they are out of your hands. Focus on your research ;)
ID: 36190
> Credit per WU is calculated based on the runtime of a WU on one of our machines. We run a WU on one of our GPUs, check the time required to run, and multiply by a credit/time rate.

Is this same system used for working out how much credit is given to CPU work units?
ID: 36864