Message boards : Number crunching : How do I tell GPUgrid to keep more than two work units in reserve?
At the moment I'm running GPUgrid on the GPU and World Community Grid on the CPU. My internet connection isn't very reliable, so I have the BOINC preferences set to "connect to the internet every 2 days" and the work buffer to 2 days as well. That works well with WCG, because I have about 30 "ready to start" tasks in the queue.
ID: 26326
The max number of WUs GPUGrid allows per GPU is 2 at a time.
ID: 26328
You mean regardless of my settings and the speed of my graphics card I will always have only one task on standby besides the one active task? Isn't there anything that can be done about this?
ID: 26330
Probably best to just select the long runs.
ID: 26332
> You mean regardless of my settings and the speed of my graphics card I will always have only one task on standby besides the one active task? Isn't there anything that can be done about this?

The project generates WUs based on the results of previous WUs. If the 2-WU limit were raised, the amount of work in the pipeline, the time needed to get usable results, the attendant server overhead, and the administration tasks would all become unduly burdensome to the project staff.
____________
Thanks - Steve
ID: 26334
> At the moment I'm running GPUgrid on the GPU and World Community Grid on the CPU. My internet connection isn't very reliable, so I have the BOINC preferences set to "connect to the internet every 2 days" and work buffer 2 days also. And it works well with WCG, because I have about 30 "ready to start" tasks in the queue.

Same here! My two cards got starved again over the weekend. I think the maximum of 2 (long) WUs policy should be revised for the CUDA 4.2 apps. They are much faster, as has been mentioned several times on the forums before; PAOLA tasks take about 5.5 hours to finish on my machines. I think the max WUs per GPU should be raised to 4 per card, which would provide enough work for about 24 hours. This should not cause a major disturbance to project management, as the old, similar CUDA 3.1 WUs took about 11 hours on my Fermi card, roughly double the new ones. I mentioned this before, from a different perspective, in another thread, "Upload suspended - no new tasks": http://www.gpugrid.net/forum_thread.php?id=2090

"[...] As the new CUDA 4.2 WUs are much faster than the old CUDA 3.1 WUs (PAOLA takes 5.5 hours on my computer), the second WU completes before the first finished WU has fully uploaded, so my machines try to upload the two WUs in parallel. At that moment my two GPUs sit idle with no new work, bad for the project and for my credits ;-) as it seems that a maximum of two GPUGRID WUs are sent per computer at any given moment. [...]"
ID: 26337
> Probably best to just select the long runs.

I was wondering that too, until I saw the size of the file being uploaded. 67,644 KiB?!
ID: 26341
> Probably best to just select the long runs.

Long runs take about 5-6 hours, so that's only 10-12 hours of work. My internet connection has a mysterious habit of intermittently cutting out. We've replaced the modem and its power supply, and the ISP says the phone lines are OK, but still to no avail. Perhaps the best thing to do is to switch ISPs when our contract runs out.
ID: 26342
Might be modem/router specific, so changing ISP could help, if they supplied the modem/router. Perhaps they are assigning an IP address for a very short time period or some TTL value is low. Have you tried keeping a web page open? One with meta refresh (msn, yahoo or even a streaming news station).
ID: 26343
> I was wondering that too, until I saw the size of the file being uploaded.

Probably about time to implement 7-zip compression of the results before upload.
ID: 26344
The more you compress a file, the longer it takes to uncompress it. Could the server handle that workload?
ID: 26349
> The more you compress a file, the longer it takes to uncompress it. Could the server handle that workload?

That's not the problem; compression at high ratios takes a lot more time than decompression. The problem here is ADSL uplinks: the downlink is fast, but the uplink is painfully slow. In my case, 16 Mbit down vs. 1 Mbit up. You might want to try 7-zip and compress a result file locally; it should perform a lot better than BOINC's built-in zip compression (http://boinc.berkeley.edu/trac/wiki/FileCompression) during upload. Of course, this would put extra load on the server. By the way, 7-zip is open source.
ID: 26352
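To illustrate the trade-off being discussed, here is a small Python sketch (my own illustration, not project or BOINC code) comparing zlib, the DEFLATE family that BOINC's built-in file compression uses, against LZMA, the algorithm behind 7-zip, on the same input:

```python
import lzma
import zlib

def compression_ratios(data: bytes) -> dict:
    """Compress the same bytes with zlib (DEFLATE, as in .zip files)
    and with LZMA (the 7-zip algorithm), both at maximum effort,
    and report the resulting sizes."""
    zlib_size = len(zlib.compress(data, level=9))
    lzma_size = len(lzma.compress(data, preset=9))
    return {
        "original": len(data),
        "zlib": zlib_size,
        "lzma": lzma_size,
    }

# A highly redundant stand-in for a simulation result file;
# real GPUGRID output would compress far less dramatically.
sample = b"coordinate frame 0.1234 0.5678\n" * 10000
sizes = compression_ratios(sample)
print(sizes)
```

On redundant input LZMA typically produces the smaller file, at the cost of noticeably more CPU time during compression; decompression on the server side is comparatively cheap for both.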
If it's better than what is presently used by BOINC, then it should replace what BOINC is using; otherwise GPUGrid (and any other project) would have to bypass the BOINC compression system, and presumably the BOINC server as well. The next server upgrade would pose a problem for any such bespoke compression implementation, and it might cause cross-BOINC compatibility issues.
ID: 26353
> If it's better than what is presently used by Boinc then it should replace what is being used by Boinc. Otherwise GPUGrid (and any other project) would have to bypass the Boinc compression system. Presumably the Boinc server would also have to be bypassed.

Well, this is a really old topic: http://boinc.bakerlab.org/rosetta/forum_thread.php?id=2992&sort=7 But as far as I know, several projects have decided to go their own way.
ID: 26355
> I was wondering that too, until I saw the size of the file being uploaded.

That's convenient. The one I posted about came from 1E2I_39_2-PAOLA_1E21_APS-1-100-RND0032_1, but I read this just in time to catch 2HDQ_36_1-PAOLA_2HDQbis-2-100-RND8482_1. The big file is _4, and the sizes are:

Upload format: 39,929,396 bytes
.zip (WinXP compression): 36,973,370 bytes
.7z (Windows 7-zip v4.57): 33,139,008 bytes

17% compression? I don't think it's worth it.
ID: 26356
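For reference, the percentages implied by those posted figures can be checked with a few lines of Python (the sizes below are copied from the post above):

```python
# Sizes in bytes, as reported in the post above.
upload = 39_929_396   # raw upload format
zipped = 36_973_370   # .zip (WinXP compression)
sevenz = 33_139_008   # .7z (7-zip v4.57)

def savings(original: int, compressed: int) -> float:
    """Percentage reduction relative to the original size."""
    return 100.0 * (original - compressed) / original

print(f".zip saves {savings(upload, zipped):.1f}%")   # roughly 7%
print(f".7z  saves {savings(upload, sevenz):.1f}%")   # roughly 17%
```

So the 17% figure is the 7-zip result measured against the raw upload format; plain .zip only recovers about 7%.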
You can try increasing BOINC Manager's "Minimum work buffer" (2 days) and "Max additional work buffer" (4 days).
ID: 26357
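For what it's worth, those same buffer settings can also be set per host in BOINC's local preferences override file. A sketch, assuming a standard BOINC data directory; the element names are the usual BOINC preference fields:

```xml
<!-- global_prefs_override.xml, placed in the BOINC data directory.
     Overrides the web-based preferences on this host only. -->
<global_preferences>
    <work_buf_min_days>2.0</work_buf_min_days>
    <work_buf_additional_days>4.0</work_buf_additional_days>
</global_preferences>
```

After saving the file, telling the client to re-read local preferences (in most BOINC Manager versions, Options → Read local prefs file) applies it without a restart. Note that, per the discussion below, this still won't get you past GPUGRID's per-GPU task limit.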
> Might be modem/router specific, so changing ISP could help, if they supplied the modem/router. Perhaps they are assigning an IP address for a very short time period or some TTL value is low. Have you tried keeping a web page open? One with meta refresh (msn, yahoo or even a streaming news station).

Yeah, they supplied both routers. Keeping Gmail open in the background doesn't seem to help, but I'll try it with msn and see how it goes. Thanks for the suggestion!
ID: 26358
> You can try increasing BOINC Manager's "Minimum work buffer" (2 days) and "Max additional work buffer" (4 days).

Tried that; it doesn't seem to work. The other problem is that I also have World Community Grid running on the same client, so I'd get some 60 WCG tasks if I set those values and left them :)
ID: 26359
Increasing the min/max work buffer does not work; the project limits the number of WUs sent to two per GPU at any given moment. As I wrote before, it seems to me the best solution would be to raise the WU limit to 4, as the CUDA 4.2 app is 30-40% (or even more) faster than the CUDA 3.1 app on GPUGRID.
ID: 26360
The best solution might be trickle uploads, or some sort of local task-on-task generation, with an upload after each.
ID: 26363