PAOLA_3EKO_8LIGANDS very low GPU load

flashawk
Message 26648 - Posted: 22 Aug 2012 | 1:09:43 UTC

PAOLA_3EKO_8LIGANDS

On both my GTX 670s I'm averaging 30% GPU load, and the core has downclocked to 1005MHz. At that rate it looks like these WUs are going to take 12 to 14 hours to complete (this is the first time I've had these work units). Anyone else having these problems?

Bedrich Hajek
Message 26649 - Posted: 22 Aug 2012 | 2:00:22 UTC

I have 3 of these units running now, and yes, they are very, very slow: around 30% utilization on Windows 7, and 37% on Windows XP.

valterc
Message 26650 - Posted: 22 Aug 2012 | 9:35:34 UTC

Same behavior with a GTX 570: 38% done after 5 hours, another 6 estimated, GPU usage at ~48%.

dskagcommunity
Message 26651 - Posted: 22 Aug 2012 | 10:55:11 UTC

Same problem. CUDA 3.1 app on a GTX 285: 26 hours to complete, at 46% GPU load and 33% CPU load.

nate
Message 26657 - Posted: 22 Aug 2012 | 15:34:30 UTC

For this group, here is a list of the GPU and runtime for the 22 most recently returned WUs:

GeForce GTX 680 - 8.6 hours
GeForce GTX 680 - 16.9 hours
GeForce GTX 680 - 15.8 hours
GeForce GTX 680 - 15.1 hours
GeForce GTX 680 - 14.8 hours
GeForce GTX 670 - 16.5 hours
GeForce GTX 670 - 14.0 hours
GeForce GTX 670 - 13.3 hours
GeForce GTX 590 - 15.1 hours
GeForce GTX 590 - 14.8 hours
GeForce GTX 590 - 12.8 hours
GeForce GTX 580 - 9.6 hours
GeForce GTX 580 - 15.3 hours
GeForce GTX 580 - 13.6 hours
GeForce GTX 570 - 22.3 hours
GeForce GTX 570 - 18.0 hours
GeForce GTX 570 - 15.4 hours
GeForce GTX 570 - 14.7 hours
GeForce GTX 560 - 15.8 hours
GeForce GTX 560 - 16.4 hours
GeForce GTX 470 - 17.5 hours
GeForce GTX 470 - 17.3 hours

Lots of long runtimes, and a lot of variability. Two 470s running almost as fast as a 680? The ones in bold are what we were expecting based on benchmarks here, and show "ideal" conditions. Any ideas as to why so many are running slow? It's obviously not good that so many fast cards/machines are running slow for some reason. Hmmm...

TheFiend
Message 26658 - Posted: 22 Aug 2012 | 15:42:08 UTC

GTX460 768MB, 62% GPU usage, completed in 19.14.

klepel
Message 26660 - Posted: 22 Aug 2012 | 16:27:03 UTC

GTX670: 95.460% done at 16:32:28 hours runtime.
GPU load between 27% and 31%, CPU load 9% on an AMD octocore. One core is reserved for GPUGRID and work; the other 7 are doing Climateprediction.net.

UPDATE: the WU completed after 17:22:56 hours.

Hope this helps.

Snow Crash
Message 26661 - Posted: 22 Aug 2012 | 16:48:12 UTC

Observationally, on a GTX670 under Win7 x64, the CPU usage is at a lower ratio on these PAOLA_8LIGANDS (1 GPU sec to 0.7 CPU sec) than on the NATEs (1:1).
I'm guessing that the GPU is starving while waiting for the CPU. I don't know how you have it coded (whether the CPU is polling or the GPU is placing a call), but if this is configurable, any chance you could switch it so we can test this for you?

So maybe we can look at general system states to make sure that the extra processing runtimes are not caused by unexpected requirements of these computationally difficult WUs? Anyone at their system right now: can you take a look at how much GPU and system RAM is being used, and how much is reported as free for each? How about CPU page faults?

As always, please include OS and any other pertinent system details.

I'd be happy to test anything new, or even crunch them as they are for the project, but please throw us crunchers a bone and up the points per WU :-)

flashawk
Message 26663 - Posted: 22 Aug 2012 | 17:33:52 UTC

Nate got me thinking: it's taking the same amount of time on my GTX 560s and GTX 550s as it does on my GTX 670s. 1344 CUDA cores vs. 192, and it takes almost an identical amount of time; something's wrong.

TheFiend
Message 26664 - Posted: 22 Aug 2012 | 20:31:41 UTC

I run Docking@home in addition to GPUGRID on my Phenom II X6. I suspended Docking to observe how much CPU was being consumed by my GTX460/GTX550Ti whilst these units are running, and it turned out to be 33%.

Reduced Docking down to 4 cores to see what effect it has.

460 running at 60% GPU usage and 550Ti running at 99%

Win XP

flashawk
Message 26665 - Posted: 22 Aug 2012 | 21:58:52 UTC - in response to Message 26664.

Reduced Docking down to 4 cores to see what effect it has.

460 running at 60% GPU usage and 550Ti running at 99%


Wow, you just now figured that out? Maybe I'm not understanding what you wrote; I thought everybody knew that you need to leave 1 CPU core free for every GPU you're running. The CPU core feeds the data to the GPU. It's going to be like Christmas for you now; your crunching times should drop sharply.




TheFiend
Message 26666 - Posted: 22 Aug 2012 | 22:43:14 UTC - in response to Message 26665.

Reduced Docking down to 4 cores to see what effect it has.

460 running at 60% GPU usage and 550Ti running at 99%


Wow, you just now figured that out? Maybe I'm not understanding what you wrote; I thought everybody knew that you need to leave 1 CPU core free for every GPU you're running. The CPU core feeds the data to the GPU. It's going to be like Christmas for you now; your crunching times should drop sharply.


You don't need to leave 1 CPU core free per GPU, but it helps. I have chosen to run 1 free core for 2 GPUs. I find it is the most efficient way to use my resources.

Docking is my main project....... 20th overall. GPUGRID is only a side project for me that complements D@H very well; they are both in the same area of research.

I used to run 6 cores of Docking at the same time as GPUGRID, and wasn't bothered about how much it affected GPUGRID crunch times, but when I started running 2 GFX cards on GPUGRID it started hitting D@H crunch times, so I reduced to 5 cores, which didn't really affect my D@H RAC.

Whilst these CPU intensive PAOLA units are around I'll run D@H on 4 cores, then I'll probably return to 5.

Prior to these PAOLA units I was getting ~96% CPU usage overall with 5 D@H units and 2 GPUGRID WUs running. Currently I'm seeing 97% with 4 D@H + 2 PAOLA WUs.



flashawk
Message 26667 - Posted: 22 Aug 2012 | 23:03:07 UTC - in response to Message 26666.

I figured you must have known. I get 0.645 CPUs of usage per task on my GTX 670s; with 2 cards in the same machine (do the math), that's more than one CPU core. I'm leaving 2 of my 8 cores free and the GPU usage shot up to 98% on both cards.

Before that it jumped all over the place between 35 and 65%, and the WU runs were hours longer (I was a n00b, I didn't know). I run CPDN on all other cores; when their servers go down (which is often) I run Docking too. I like those little 3 hour jobs. For some reason it won't link to my BOINCstats; I must have done something wrong there too.

Edit: My GTX670 took 18:27:15 and produced 82.54MB of data

GTX560Ti took 19:24:51 and produced 82.65MB of data

TheFiend
Message 26668 - Posted: 22 Aug 2012 | 23:58:29 UTC - in response to Message 26667.

I figured you must have known. I get 0.645 CPUs of usage per task on my GTX 670s; with 2 cards in the same machine (do the math), that's more than one CPU core. I'm leaving 2 of my 8 cores free and the GPU usage shot up to 98% on both cards.

Before that it jumped all over the place between 35 and 65%, and the WU runs were hours longer (I was a n00b, I didn't know). I run CPDN on all other cores; when their servers go down (which is often) I run Docking too. I like those little 3 hour jobs. For some reason it won't link to my BOINCstats; I must have done something wrong there too.

Edit: My GTX670 took 18:27:15 and produced 82.54MB of data

GTX560Ti took 19:24:51 and produced 82.65MB of data


I see you run Bulldozers. On Docking, the 6-core Phenom IIs outperform the 8-core Bulldozers; one of the reasons I have stuck with my old skool Phenoms.

My main cruncher, a Phenom II X6 1055T, I run overclocked at 3.5GHz; at that speed, running 5 cores on Docking is just as effective as running 6 cores at 2.8GHz.

Retvari Zoltan
Message 26670 - Posted: 23 Aug 2012 | 6:56:35 UTC

These workunits do not use a full CPU core with Kepler GPUs, unlike any previous workunits. It's as if the old swan_sync parameter wasn't set to 0. These workunits run twice as fast on my GTX 480s as on my GTX 680s.

WinXP x64, Core i7 980X and 970 (@ 4.16GHz), one free CPU thread per GPU; GTX 680s, GTX 690 and GTX 480.

Luke Formosa
Message 26671 - Posted: 23 Aug 2012 | 13:59:09 UTC

I'm running Windows 7x64 with a single GTX 670 and an i7-3770K overclocked to 4.2GHz with hyperthreading enabled (i.e. 8 logical cores). My card is the factory-overclocked triple-fan Gigabyte 670. All factory default settings. I'm running BOINC 7.0.28. No cc_config, no swan_sync.

I've run one Paola 3EKO task so far and the result was validated yesterday. Here are the specs:
Run Time (secs): 48,205.04
CPU Time (secs): 30,510.69
Credit: 88,075.00

I'm currently running another Paola 3EKO, and the GPU load right now is fluctuating between 44-46%, with the core clock at 979.8 MHz. This is while also running 8 tasks of World Community Grid on this computer at the same time, so I haven't dedicated a core to GPUGrid (it's sharing cores with WCG).

If I suspend WCG completely in BOINC (thus leaving the *entire* CPU for GPUGrid), the GPU load rises to 53%.

If I set BOINC to use 87.5% of processors (that's equal to 7 out of 8 cores), it shuts down 1 WCG task (so only 7 remain running), but GPU load remains 45%.

If I set BOINC to use 75.0% of processors (that's equal to 6 out of 8 cores), it shuts down two WCG tasks (so only 6 remain running), but GPU load still remains 45%.

If I set BOINC to use 50% of processors... ditto.

With WCG shut down, the GPUGrid task in Task Manager (Ctrl+Alt+Delete) reports a CPU usage of 6-7%, so in my case that's only half a core. The priority is set to "BelowNormal" by default, whilst each running WCG task has a priority of "Low". I don't understand it. It seems that the Paola tasks are affected only by the fact that another project is running, not by how much CPU that project takes. So you have to suspend ALL other tasks *completely* to see a slight performance increase.

Nathan tasks have 90% GPU utilisation regardless of what else is running. These Paola tasks are using only half my GPU and returning about half the points per unit time. According to my results, I get about 4 points per second for Nathan, one point per second for Noelia, and 2 points per second for the new Paola ones. I'm not particularly happy to be running at half throttle when I know I could get more points per unit time doing other work. In fact I've started considering other projects like primegrid or POEM@home to maybe increase my points per hour. I guess I'll just see how things unfold and take it from there.

dskagcommunity
Message 26672 - Posted: 23 Aug 2012 | 14:33:21 UTC

Does anyone else have the default max 20% CPU time for GPU work set in their website profile, like I did until today? Perhaps this value is too low for these new units? I've set it to 100% and am now waiting until I get a new one of these WUs and finish it.

nate
Message 26673 - Posted: 23 Aug 2012 | 17:04:40 UTC - in response to Message 26672.

Does anyone else have the default max 20% CPU time for GPU work set in their website profile, like I did until today? Perhaps this value is too low for these new units? I've set it to 100% and am now waiting until I get a new one of these WUs and finish it.


I feel it must be something like this, because there are some users who can compute much faster than the rest (and at the speeds we were expecting). Keep us updated, dskagcommunity.

If anyone else wants to play with the setting, click on your username up above, then "GPUGRID preferences". "Edit Preferences", and change "Maximum CPU % for graphics..." to 100% (or whatever you prefer).

Still, this might not be it. It wouldn't explain the following, though, unless the cards are on different machines with different settings...

These workunits do not use a full CPU core with Kepler GPUs, unlike any previous workunits. It's as if the old swan_sync parameter wasn't set to 0. These workunits run twice as fast on my GTX 480s as on my GTX 680s.


Let's see...

flashawk
Message 26674 - Posted: 23 Aug 2012 | 18:06:50 UTC - in response to Message 26671.

If I set BOINC to use 87.5% of processors (that's equal to 7 out of 8 cores), it shuts down 1 WCG task (so only 7 remain running), but GPU load remains 45%.

If I set BOINC to use 75.0% of processors (that's equal to 6 out of 8 cores), it shuts down two WCG tasks (so only 6 remain running), but GPU load still remains 45%.

If I set BOINC to use 50% of processors... ditto.


I've seen people advise this action here and at other BOINC forums, and it seems to me that this would never work, because telling BOINC to use 6 of 8 cores or 7 of 8 cores takes them away from all projects. I would think you would want to set CPU usage in your GPUGRID account; by taking away cores in BOINC, only the operating system or programs not connected to BOINC can utilize those cores.

I don't think the preferences in our GPUGRID account allow for enough manipulation of the CPU to make it do what you want.

flashawk
Message 26675 - Posted: 23 Aug 2012 | 19:48:26 UTC

Here's the one thing I've been able to find in common with all my video cards: the GPU memory controller stays right around 10%; it will drop to 9% and go up to 11%, but never higher or lower, on the 3EKO WUs. Also, they are all using the same amount of memory (+ or - 1%), which is 628MB. These are my cards:

2 EVGA GTX550Ti 1024MB GDDR5
3 EVGA GTX560 1 being a Ti all have 1024MB GDDR5
2 EVGA GTX670FTW 2048MB GDDR5

I know the CPU feeds data to the VRAM, right? If there is a bottleneck in the way the data is using the VRAM, that will slow down both the CPU and the GPU. Everybody's cards have different GPU clock speeds, so wouldn't they downclock if there was a data bottleneck?

Could it have something to do with the way these WUs are utilizing the VRAM? I think the problem is in that area somewhere.

klepel
Message 26676 - Posted: 23 Aug 2012 | 20:57:23 UTC - in response to Message 26674.

I've seen people advise this action here and at other BOINC forums, and it seems to me that this would never work, because telling BOINC to use 6 of 8 cores or 7 of 8 cores takes them away from all projects. I would think you would want to set CPU usage in your GPUGRID account; by taking away cores in BOINC, only the operating system or programs not connected to BOINC can utilize those cores.

I don't think the preferences in our GPUGRID account allow for enough manipulation of the CPU to make it do what you want.


It does work for me! I set max CPU utilization to 99% on my AMD FX8150; 7 of the 8 cores crunch climateprediction.net WUs, and one core makes my GTX670 happy. Nice side effect: my system is more stable and it doesn't hamper my workflow as much.

Under the Windows Task Manager Processes tab (Ctrl+Alt+Delete) I can see that all the cores are used to their maximum by BOINC (13%, with about 70,000 to 130,000 KB memory utilization, for CP, and 9% and 192,000 KB for the EKO-PAOLA WUs we are talking about; for NATHAN WUs this is normally 13% as well, and about 230,000 KB).

flashawk
Message 26677 - Posted: 23 Aug 2012 | 21:38:12 UTC - in response to Message 26676.

That just doesn't make sense at all. There is no way in BOINC to allocate cores to particular work units; that setting is for freeing up CPU power for the OS. If your WUs use less than 0.5 CPU, you won't see issues; anything over that and you have to suspend WUs. I don't know why I'm responding to this, I feel like I'm walking into another one.

Snow Crash
Message 26678 - Posted: 23 Aug 2012 | 21:46:26 UTC

I've set my GPUGrid preferences to use 100% CPU for graphics, but I *think* this refers to how much CPU to use for displaying a project's screensaver ... I'm going to do a quick task switch to see ...

GPU = GTX 480, Win7 x64, shaders @1512, mem @1848
CPU = Core i7-980 clocked at 4.050, HT = ON
RAM = 6GB triple channel @ 1500 DDR3
BOINC = 7.0.28, set to use 100% of processors
(because I am running both a GTX670 and a GTX 480, BOINC forces a full thread to be dedicated to the GPUs)
OTHER PROJECTS = 11 threads to WCG


NATE -- GPU @95-96% utilization, MCU @24-25%
PAOLA - GPU @59-61% utilization, MCU @6%
- occasionally GPU jumps to 70+%

I'll let this run for a while to get a good estimate of runtime to compare to the previous run; I'll double check, but I think it took 14 hrs.

Side note: a PAOLA took 25 hours on the 660Ti.




skgiven
Message 26679 - Posted: 23 Aug 2012 | 22:29:27 UTC - in response to Message 26678.

You are correct: these CPU preferences are just for the screensaver and have nothing to do with how much CPU is used to support a GPU.

When people set Boinc to use 6/8 threads, this means Boinc will use 6 CPU cores/threads to crunch CPU projects, and the remaining 2 can be used for GPU projects (which are not counted as CPU projects in this respect).
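
For reference, that Manager setting just ends up in global_prefs_override.xml in the BOINC data directory. A minimal sketch of the file, assuming you only want to override this one value (the Manager writes many more elements; missing ones fall back to your web preferences):

    <global_prefs_override>
        <max_ncpus_pct>75.0</max_ncpus_pct>
    </global_prefs_override>

The client picks it up after "Read local prefs file" or a restart.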

We need to better isolate this problem.
Is the problem that some tasks just run slow on all cards randomly (a task issue), or just on GF600 cards?
Does the number of available CPU cores influence the issue on all cards, or just some?
Is the CPU type of importance?
Do these tasks require increased disk read/write (SSD vs standard SATA HDD), or high memory I/O?
Does changing the Boinc write-to-disk setting make any difference? (Tools, Computer Preferences, Disk and memory usage, tasks checkpoint every..., ->900sec; see the snippet below)
Are CPU tasks causing issues?
It might be worth looking at the bus utilization on different boards (PCIE3/2/1.1).
Boinc versions might be worth noting, and the operating system: XP/Vista/W7/Linux.
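
That write-to-disk (checkpoint) setting also lives in global_prefs_override.xml; a sketch with the suggested 900-second value, if you prefer editing the file to using the Manager dialog:

    <global_prefs_override>
        <disk_interval>900.0</disk_interval>
    </global_prefs_override>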

So post up a few more details and we might be able to narrow it down. Also say if Aero is on, or not installed.

My impression is that it's more of an issue with the GF600 series. It's as if tasks are running on the 3.1 app (which is not compatible with GF600), so perhaps there is some legacy code being used.

Snow Crash
Message 26680 - Posted: 23 Aug 2012 | 23:01:13 UTC - in response to Message 26678.

I've set my GPUGrid preferences to use 100% CPU for graphics, but I *think* this refers to how much CPU to use for displaying a project's screensaver ... I'm going to do a quick task switch to see ...

GPU = GTX 480, Win7 x64, shaders @1512, mem @1848
CPU = Core i7-980 clocked at 4.050, HT = ON
RAM = 6GB triple channel @ 1500 DDR3
BOINC = 7.0.28, set to use 100% of processors
(because I am running both a GTX670 and a GTX 480, BOINC forces a full thread to be dedicated to the GPUs)
OTHER PROJECTS = 11 threads to WCG


NATE -- GPU @95-96% utilization, MCU @24-25%
PAOLA - GPU @59-61% utilization, MCU @6%
- occasionally GPU jumps to 70+%

I'll let this run for a while to get a good estimate of runtime to compare to the previous run; I'll double check, but I think it took 14 hrs.

Side note: a PAOLA took 25 hours on the 660Ti.



1 hour of processing and it is 4.2% complete
GPU utilization hanging around 70%, GPU mem still at 6%
CUDA = 4.2 app
DRIVER = 301.42
MOBO PCIE2 @8X
All memory stats show only nominal differences vs. NATE
--- (usage, private, pool, paged, non-paged)
Page faults are high (compared to NATE) @ 173k after 1 hour
No hard faults

Let me know if there are any other details I can help with :-)

flashawk
Message 26681 - Posted: 23 Aug 2012 | 23:20:24 UTC

I have 3 machines that are otherwise identical:

AMD 8150 3.6GHz 8 core
16GB Kingston DDR3 PC1600
Asus M5A97 AM3+
Seagate Constellation Enterprise SATA6 7200RPM 1TB
OCZ 750 watt PSU
Everything water cooled (CPU, Video)

1 rig has 2xGTX670FTW 2GB 1533 shaders GPU usage at 30%
1 rig has 2xGTX560 1GB 336 cuda cores GPU usage at around 40%
1 rig has 1xGTX560Ti 1GB 384 cores GPU usage at 35% and 1xGTX550Ti 192 shaders GPU usage at 55%

All the memory controllers are between 9 and 11%

Windows XP Pro x64 SP2

BTW, all my cards run much cooler on these WUs; if there's anything I forgot, just ask.


Rayzor
Message 26682 - Posted: 23 Aug 2012 | 23:40:53 UTC

From some quick testing on my part:

GTX 480, i3 530, 4GB RAM, Windows 7 Ultimate, using the old swan_sync parameter, and with one core free, the GPU usage had fallen to around 40% to 50%. BOINC CPU usage was set at 75%.

The other 3 cores were busy with Seti, 1 core was dedicated to the GPU, and Task Manager showed the CPU at 100% usage.

After freeing another one of the cores on the i3 530 (2 cores plus 2 hyperthreaded cores, a total of 4), and setting the acemd.win.23 priority to High, the GPU has remained rock steady at 93% to 94% usage.

Task Manager CPU usage is now showing approx. 81%. This is with 2 cores on Seti, 1 core for the GPU, and now 1 core free for the PC.

For me, BOINC CPU usage has to be set at 50% or less in order to get high GPU usage.

Luke Formosa
Message 26683 - Posted: 24 Aug 2012 | 7:32:40 UTC

Regarding the setting I mentioned ("on multiprocessors use at most xxx% of processors", which I set to 87.5 to use 7 out of 8 cores): that only applies to tasks that use the CPU exclusively (like WCG). Tasks that use the GPU ignore that setting; they simply use as much GPU as possible, plus the associated amount of CPU needed. For GPUGrid that's 1 task per GPU (since each task uses 1 NVIDIA GPU), and hence 0.585 CPU cores per GPU task. For something like Einstein@Home, which uses 0.5 GPUs per task, it runs 2 GPU tasks simultaneously and consumes however much total CPU two Einstein GPU tasks need.

For POEM@Home, each task uses 1 CPU + 1 GPU, so it's 1 CPU core per GPU task (ouch). In each case, GPU projects get to use as much CPU as they need, and it's only the other (non-GPU) projects that BOINC limits to the percentage specified.

Now, as for the Paola_3EKO, my guess is that they're not coded to make full use of the GPU. I would suggest trying to run multiple Paolas on each GPU by making an app_info.xml file (see the sketch below). The problem is that, since this is a limited run, I don't have any more Paolas to test with. And BOINC discards all tasks in the queue when you introduce an app_info.xml file, so if you're reading this post and have Paola tasks in your queue, you still can't create a custom app_info and try it, because you'll lose the tasks currently in the queue...
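
For anyone who wants to try this on a later batch, here is a minimal sketch of such a file. The app name, executable name and version number below are illustrative only; copy the exact values from the <app_version> entries in your own client_state.xml, otherwise the client will reject the file:

    <app_info>
        <app>
            <name>acemdlong</name>
        </app>
        <file_info>
            <name>acemd.win.23</name>
            <executable/>
        </file_info>
        <app_version>
            <app_name>acemdlong</app_name>
            <version_num>616</version_num>
            <avg_ncpus>0.585</avg_ncpus>
            <max_ncpus>1.0</max_ncpus>
            <coproc>
                <type>CUDA</type>
                <!-- 0.5 lets the client schedule two tasks per GPU -->
                <count>0.5</count>
            </coproc>
            <file_ref>
                <file_name>acemd.win.23</file_name>
                <main_program/>
            </file_ref>
        </app_version>
    </app_info>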

So it looks like this problem will solve itself (because the limited number of Paola 3EKO tasks dispatched will all be completed) before we get the chance to try fixing it.

skgiven
Message 26684 - Posted: 24 Aug 2012 | 8:23:13 UTC - in response to Message 26680.

Page faults are high (compared to NATE) @ 173k after 1 hour


This suggests that something is not being kept in memory that should be, and is repeatedly being read from disk (which would obviously be a lot slower). Maybe this is a CPU process for the GPU? Having an SSD would mask this to some extent; you would experience the same issue, but not as severely.

Having more RAM available, or faster RAM, might also reduce this somewhat, but it sounds like a systemic issue. It sort of explains why Luke only noticed an increase from 45% to 52% when he stopped running all CPU tasks.

The more GPUs a system has, the more of a problem this is, and the more CPU projects are running, the worse it (generally) gets. On a 12-thread system I wouldn't use more than 10 if supporting 2 GPUs. It very much depends on the CPU project too: some CPU projects eat memory (and 6GB isn't enough) while others use 10 to 100MB. Some also have extremely high I/O requirements. Both RAM shortages and high I/O are known to negatively impact GPU projects.

If I had 4 GPUs in a system, I probably wouldn't run any CPU projects.

dskagcommunity
Message 26685 - Posted: 24 Aug 2012 | 8:27:06 UTC - in response to Message 26673.

Does anyone else have the default max 20% CPU time for GPU work set in their website profile, like I did until today? Perhaps this value is too low for these new units? I've set it to 100% and am now waiting until I get a new one of these WUs and finish it.


I feel it must be something like this, because there are some users who can compute much faster than the rest (and at the speeds we were expecting). Keep us updated, dskagcommunity.

If anyone else wants to play with the setting, click on your username up above, then "GPUGRID preferences". "Edit Preferences", and change "Maximum CPU % for graphics..." to 100% (or whatever you prefer).

Still, this might not be it. It wouldn't explain the following, though, unless the cards are on different machines with different settings...

These workunits do not use a full CPU core with Kepler GPUs, unlike any previous workunits. It's as if the old swan_sync parameter wasn't set to 0. These workunits run twice as fast on my GTX 480s as on my GTX 680s.


Let's see...


So it didn't have any effect. :/


Luke Formosa
Message 26686 - Posted: 24 Aug 2012 | 9:42:01 UTC - in response to Message 26684.

Having more RAM available, or faster RAM, might also reduce this somewhat, but it sounds like a systemic issue. It sort of explains why Luke only noticed an increase from 45% to 52% when he stopped running all CPU tasks.


If it helps, I have 8GB of RAM (of which BOINC is allowed to use 90%, or 7GB, when the computer is in use) running at 1600 MHz. I have all of Windows, and the BOINC executable, on a 128GB SSD, but my BOINC data folder is on a 2TB hard drive (both drives use the motherboard's two SATA 6Gb/s ports that come from the Z77 chipset). The HDD does 150 MB/s in HDTune and the SSD does 400 MB/s.

I'd tell you my page and hard faults, but I'm not being given Paola tasks - they seem to have run out and we're back to Nathan tasks. Right now, with 8 WCG tasks and one GPUGrid Nathan running, I'm seeing 0 hard faults per second and 56% physical memory utilisation. GPUGrid made 156,000 page faults in 35 minutes. Just to put that in context, Flash Player made 25 million page faults in 7 minutes of CPU time (about 3 hours of YouTube videos...).

Snow Crash
Message 26688 - Posted: 24 Aug 2012 | 21:53:24 UTC

I don't have time to do all the stats, but on Win7 x64, Core i7-920 HT ON, 6 GB RAM, Boinc 7.25, with ALL CPU tasks suspended my GTX 670 runs at about 45%.
When I run with only 1 core free it drops to about 32%.

vitalidze
Message 26694 - Posted: 25 Aug 2012 | 11:12:14 UTC

i5 2500K / 8GB 1333MHz / Asus P67 / GTX 680, Win7 Ultimate, driver 304.48.
All at default settings: 38-42% GPU utilization.

Luke Formosa
Message 26706 - Posted: 25 Aug 2012 | 23:20:07 UTC

Possible solution - see post #2 in this thread:

http://www.gpugrid.net/forum_thread.php?id=3118

nate
Message 26716 - Posted: 27 Aug 2012 | 17:52:28 UTC

Well, I think I've found the general cause, although I can't say I have a solution yet. When I run the workunits on our machines, NOT via BOINC, the simulations use 100% of the CPU. When I run the PAOLA WUs via BOINC, the max I get is ~50% CPU usage. No doubt that's where the 2x slowdown comes from. Now, why that happens, I have to find out. I'll ask the more technical people here and hopefully have answers soon, but if anyone knows why CPU usage is limited to 50% via BOINC, feel free to explain. Is this something common for other GPU based tasks, or specific to us?...

dskagcommunity
Message 26717 - Posted: 27 Aug 2012 | 18:09:43 UTC

Perhaps it would be enough if the project used 1 CPU instead of 0.65?

Luke Formosa
Message 26718 - Posted: 27 Aug 2012 | 19:04:51 UTC - in response to Message 26716.

The CPU utilisation is about 7-10% on my 8-core CPU (so that's 50-75% of one of the cores), but I think you guys made it that way by design, because it uses 0.585 or 0.65 CPUs (I can't remember which, because I'm not running any right now).

DistRTGen has about as much CPU utilisation as GPUGrid. POEM@Home uses an entire CPU core for every running GPU task.

GPUGrid's Nathan tasks still use 7-10% CPU but achieve a GPU load of 90-95% on my GTX 670.

Snow Crash
Message 26750 - Posted: 30 Aug 2012 | 22:04:24 UTC - in response to Message 26716.

Well, I think I've found the general cause, although I can't say I have a solution yet. When I run the workunits on our machines, NOT via BOINC, the simulations use 100% of the CPU. When I run the PAOLA WUs via BOINC, the max I get is ~50% CPU usage. No doubt that's where the 2x slowdown comes from. Now, why that happens, I have to find out. I'll ask the more technical people here and hopefully have answers soon, but if anyone knows why CPU usage is limited to 50% via BOINC, feel free to explain. Is this something common for other GPU based tasks, or specific to us?...

Not sure how relevant this is, but back quite a while ago we were using the SWAN_SYNC environment variable to tell ACEMD to fire off a process that used a full CPU to poll the GPU, rather than waiting for the GPU to make a call, with the inherent latencies that involved ... then we were told that we no longer needed SWAN_SYNC, as that was now baked directly into ACEMD. Perhaps that was done through a configuration mechanism in the WU generation process that got missed this time around?
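
For reference, the old mechanism was nothing more than an environment variable set before the BOINC client started; a sketch, with no guarantee it still has any effect on the current app:

    export SWAN_SYNC=0    # Linux; on Windows it was set as a system environment variable

If it really is baked into the WU/app configuration now, no client-side setting would bring it back, which would fit what we're seeing.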

Niels Kornoe
Message 26756 - Posted: 2 Sep 2012 | 7:42:01 UTC

Am I the only one who aborts every "PAOLA_3EKO" task I get?

My GPU goes below 400MHz and the task time goes up to ~18h.
That is more than double the time it should take, and I could have done more than 2 "NATHAN_RPS1120801" tasks in that amount of time.

Luke Formosa
Message 26760 - Posted: 2 Sep 2012 | 9:18:35 UTC - in response to Message 26756.

Am I the only one who aborts every "PAOLA_3EKO" task I get?

My GPU goes below 400MHz and the task time goes up to ~18h.
That is more than double the time it should take, and I could have done more than 2 "NATHAN_RPS1120801" tasks in that amount of time.


You really shouldn't do that. Remember that they rely on you and other volunteers to do the crunching for their research. If everyone aborted certain kinds of tasks, they'd never get any research done. If you're concerned about low utilisation, I suggest using a custom app_info.xml - I posted about it a few posts ago in this thread.

Snow Crash
Message 26761 - Posted: 2 Sep 2012 | 9:24:14 UTC

Clearly there is not an easy fix, or it would have been done by now. Apparently even changing the points award is not something to be taken lightly once a batch is released, so let's see what we can do to get the complete stream finished. Who knows, maybe this is some truly awesome data that will help Paola advance her research by leaps and bounds.

Can anyone who is running these and getting good utilization please post up their rig specs so we can see what DOES work well?

If it would be helpful overall, I would like to volunteer as an alpha tester for any new WU streams (660Ti Win7x64 + 480 in the same rig, 670 Win7x64).

Retvari Zoltan
Message 26764 - Posted: 2 Sep 2012 | 9:40:28 UTC - in response to Message 26761.

Can anyone who is running these and getting good utilization please post up their rig specs so we can see what DOES work well?

I'm curious about it as well. However, I don't expect any answer to this question, because these tasks' GPU utilization is low even on PCIe3.0 systems.

Snow Crash
Message 26765 - Posted: 2 Sep 2012 | 9:50:45 UTC - in response to Message 26764.

Can anyone who is running these and getting good utilization please post up their rig specs so we can see what DOES work well?

I'm curious about it as well. However, I don't expect any answer to this question, because these tasks' GPU utilization is low even on PCIe3.0 systems.

http://www.gpugrid.net/forum_thread.php?id=3116&nowrap=true#26657
Nate's earlier post shows a couple of good runtimes; maybe he can dig out the rig specs for us ... Nate?

Retvari Zoltan
Message 26768 - Posted: 2 Sep 2012 | 11:07:20 UTC - in response to Message 26765.

Nate's earlier post shows a couple of good runtimes; maybe he can dig out the rig specs for us ... Nate?

As Nate said, these workunits have a very high variation in their runtimes.
My first one completed in 22h 19m.
2nd: 22h 2m.
3rd: 18h 31m.
4th: 14h 14m.
5th: 11h 39m.
6th: 12h 2m.
7th: 12h 25m.
8th: 12h 4m.

I've found a very good rig in the toplist:
The shortest runtime for a PAOLA_3EKO_8LIGANDS on this host is 7h 34m.
This is a Linux system with a Core i7-2600K overclocked to 4.6GHz (according to my estimation) with two GTX 680s (I bet these are overclocked too).
But even on this system the PAOLA_3EKO_8LIGANDS use less CPU time than GPU time (unlike all other workunits), so I guess even this system could have shorter runtimes if the "SWAN_SYNC=0" setting had been applied to these workunits.

The King's Own
Message 26769 - Posted: 2 Sep 2012 | 12:03:00 UTC

Runtime on a new 660Ti, power target set to 105% (one sample):

1. 20 hrs 17 min

Same rig (Core i5-750, 8 GB RAM) when run on a GTX 580, no overclock (three samples):

i. 22 hrs 30 min
ii. 27 hrs 39 min
iii. 28 hrs 03 min

Luke Formosa
Message 26770 - Posted: 2 Sep 2012 | 13:04:12 UTC

The problem with looking solely at runtimes is that the number you're seeing is only the time taken for that task to complete - it doesn't say how many tasks were running simultaneously. So anyone using a custom app_info.xml and running 2 or more tasks at once might be doubling his points per second and the runtimes would look the same.

Has anyone tried running multiple tasks at once? Can you please post your runtimes for running two of them and your runtimes when running just one at a time?

Luke Formosa
Message 26771 - Posted: 2 Sep 2012 | 13:07:01 UTC - in response to Message 26770.

Oh, and my runtime for Paola tasks is on average 13.33 hours on a stock Gigabyte GTX 670. The variation isn't that great in mine - about half an hour either way. But I haven't run GPUgrid for a few days so I wouldn't know if the newer tasks have different runtimes.

flashawk
Message 26773 - Posted: 2 Sep 2012 | 21:34:32 UTC

I noticed almost 2 weeks ago, when this all started, that others were aborting these tasks or getting a lot of computation errors; that means the rest of us have to pick up the slack for those who refuse to do the work.

I'm looking at this problem much more simply than everyone else. I use a program called gpushark to monitor my cards (it's free), and it shows the VRAM memory controller is getting very low utilization (9% - 11%, as compared to 31% - 39%). I think that's the choke point; it will slow down the CPU.


dskagcommunity
Message 26775 - Posted: 3 Sep 2012 | 5:40:47 UTC

Do you get 9x% GPU load alongside that memory controller load? If not, it's normal that the memory controller has less to do when the GPU load is lower too. Just a suggestion ^^

Luke Formosa
Message 26776 - Posted: 3 Sep 2012 | 12:17:04 UTC - in response to Message 26773.

I noticed almost 2 weeks ago, when this all started, that others were aborting these tasks or getting a lot of computation errors; that means the rest of us have to pick up the slack for those who refuse to do the work.

I'm looking at this problem much more simply than everyone else. I use a program called gpushark to monitor my cards (it's free), and it shows the VRAM memory controller is getting very low utilization (9% - 11%, as compared to 31% - 39%). I think that's the choke point; it will slow down the CPU.


GPU-Z is better in my opinion (it's also free), because it displays those readings and plots graphs of them in real time as well. And you can save the data to a log file.

flashawk
Message 26777 - Posted: 3 Sep 2012 | 19:57:20 UTC - in response to Message 26776.

GPU-Z is better in my opinion (it's also free), because it displays those readings and plots graphs of them in real time as well. And you can save the data to a log file.


It is a very good program and I've been familiar with it for years, but gpushark has a much smaller footprint and uses fewer resources. I leave it running 24/7 on all 4 of my computers, and it gives real-time info on up to 4 video cards at the same time when in advanced mode. I wasn't implying that everyone should use it (sorry for the misunderstanding); I was just letting folks know how I monitor my video cards.

robertmiles
Message 26783 - Posted: 4 Sep 2012 | 12:51:17 UTC

3EKO_19_4-PAOLA_3EKO_8LIGANDS-3-100-RND3778

has run 123 hours so far, 57 to go, 67.623% progress

already past deadline

http://www.gpugrid.net/result.php?resultid=5800806

Is this a reasonable run time on a GTX 560? Or is it so much
past what is expected that I should abort the workunit?

Retvari Zoltan
Message 26784 - Posted: 4 Sep 2012 | 15:25:01 UTC - in response to Message 26783.

3EKO_19_4-PAOLA_3EKO_8LIGANDS-3-100-RND3778

has run 123 hours so far, 57 to go, 67.623% progress

already past deadline

http://www.gpugrid.net/result.php?resultid=5800806

Is this a reasonable run time on a GTX 560? Or is it so much
past what is expected that I should abort the workunit?

You should abort it immediately.
It has been resent to another host already, which has a GTX 680, and will probably return the result much sooner than 57 hours.
This is not a reasonable run time at all.

flashawk
Message 26788 - Posted: 5 Sep 2012 | 7:42:51 UTC

I just got a new task I've never seen before: it's PAOLA_2UY5, and it's doing the exact same thing as the other PAOLA tasks, 30% to 50% GPU usage. Is this the way of all new WUs to come? There are going to be lots of grumpy folks who have older cards. It's looking like close to 30 hours on my GTX560Ti; who the heck is writing these things?

dskagcommunity
Message 26794 - Posted: 6 Sep 2012 | 11:15:16 UTC

There are more ligands in the queue, it seems; I'm getting almost nothing but these WUs :/ I don't want 36h of computing on one WU... thanks to CUDA 3.1 I've never gotten an error on these *crosses fingers*, and they still pay only 50% more credit than the short queue!!! WTF.

5pot
Message 26797 - Posted: 6 Sep 2012 | 13:58:30 UTC

Hello everyone, sorry I haven't posted in a while.

But I'm beginning to get pissed off about these tasks.

Whenever I have 1 running it will run fine, although incredibly slowly (36%).

BUT when I have 3 running on my 3x 680s, acemd crashes or my computer locks up and must be restarted. It should not be my responsibility to constantly abort tasks and keep an eye on this rig at all times.

These tasks just caused me to lose another 20hrs of combined crunching. Further, when running 2 at one time, the computer crashed, causing a Nathan task which had 20min left to fail.

This is unacceptable. I have never, ever aborted WUs before. Not even when the points disparity was so different that many others complained. I let them run. But I've reached the tipping point, and I don't know what to do anymore. Please, after this batch is finished, should this ever happen again, pull the tasks from the hopper and figure out what is wrong with them.

Luke Formosa
Message 26799 - Posted: 6 Sep 2012 | 16:00:43 UTC

For the last couple of posters - have you tried using my modified app_info.xml? It won't make the tasks any faster (in fact it might slow things down slightly), but at least you'll be doing two at once, so you'll be getting almost twice the points per unit time and the performance hit isn't as bad.

Snow Crash
Message 26801 - Posted: 6 Sep 2012 | 16:47:14 UTC - in response to Message 26799.

have you tried using my modified app_info.xml

That's a tough prospect, as we can't count on getting only PAOLA tasks, and I believe it will be counterproductive if a NATE WU gets doubled up.

That being said, it looks like I may have an opportunity when I get home today, but it depends on how ambitious I am, because the 2 PAOLAs I have are on a 2-card rig, so I'm going to pull a card to make this work as cleanly as possible. If I'm going that far, I am also going to swap which slot the remaining card is in. If I can get this done and working correctly I will think about aborting the NATEs and running PAOLA exclusively.

Side note: the card I'm pulling is a GTX480, and I'm thinking about decommissioning it; anyone interested can send me a PM.

Luke Formosa
Message 26806 - Posted: 6 Sep 2012 | 18:27:25 UTC - in response to Message 26801.

That's a tough prospect, as we can't count on getting only PAOLA tasks, and I believe it will be counterproductive if a NATE WU gets doubled up.


It would be counterproductive to run two Nathans, but you can work around that if you're willing to babysit your computer a bit (I know some people aren't).

If you have two Paola tasks, leave the coproc count = 0.5.
If you get a Nathan task, exit BOINC, modify the coproc count to 1, then start up BOINC again (see the fragment below).

If you have a Paola and a Nathan... er, you're out of luck on one GPU. But as you have two GPUs, and hence a four-task limit on that host, you might be able to work something out.
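
The value being toggled is the <count> in the <coproc> block of the app_info.xml sketched earlier in the thread:

    <coproc>
        <type>CUDA</type>
        <count>0.5</count>    <!-- two tasks per GPU; set to 1 before letting a Nathan run -->
    </coproc>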

5pot
Message 26808 - Posted: 6 Sep 2012 | 19:59:20 UTC

That's the issue, though. I tend to get more Nathans than Paolas. Further complicating things: with 3 cards in one rig, I have 6 tasks in total to watch.

This is not something we should be doing. I understood that when they switched to the new CUDA app there were some server issues with correctly sending tasks to compatible rigs, and a workaround was used.

This is a different case. Please, GPUgrid, I love this project, and I am not leaving. Ever. But do not let this happen again. I understand you tested them on your own software before they were sent out. So, in the future, could you please run some under BOINC in-house first to see if any problems arise?

I hope this batch is nearing completion.

Cheers.

Snow Crash
Message 26819 - Posted: 8 Sep 2012 | 3:01:58 UTC - in response to Message 26801.

... it looks like I may have an opportunity when I get home today but it depends on how ambitious I am

The deed is done ... 2 at a time is taking 30 hours on a 660Ti.
Currently my 670 is going to take 16 hours to do 1.
Overall this is going to kill my RAC, but I'm going to try to stick with it for a while; I may even do it on my 670 just to clear the queue.

Anyone from the project have an estimate of how many more we will need to finish out this run?

voss749
Message 26829 - Posted: 8 Sep 2012 | 12:52:55 UTC - in response to Message 26760.

Am I the only one who aborts every "PAOLA_3EKO" task I get?

My GPU goes below 400MHz and the task time goes up to ~18h.
That is more than double the time it should take, and I could have done more than 2 "NATHAN_RPS1120801" tasks in that amount of time.


You really shouldn't do that. Remember that they rely on you and other volunteers to do the crunching for their research. If everyone aborted certain kinds of tasks, they'd never get any research done. If you're concerned about low utilisation, I suggest using a custom app_info.xml - I posted about it a few posts ago in this thread.


Well then, maybe they shouldn't send out these workunits. Perhaps if everyone aborted these tasks they would get the message and fix the problems. We shouldn't have to hack our way around badly behaving workunits. We are donating resources to their project; we have a right to expect our donated resources to be used as efficiently and effectively as possible.

flashawk
Message 26830 - Posted: 8 Sep 2012 | 13:28:46 UTC

voss has a good point there, though I don't advocate open rebellion. Why hasn't the project scientist responded to any of these threads? Ya know, something like "We're working on rectifying the situation", or letting us know why they haven't pulled them from the hopper? I'm starting to wonder if this might not be deliberate, because they're getting overwhelmed by the new video cards.

5pot
Message 26831 - Posted: 8 Sep 2012 | 14:41:39 UTC

ATTENTION GPUGRID STAFF:

My 3-way 680 setup caught another 3 of these tasks in a row last night and crashed yet again. This is now the 4th time that this has happened.

I will be switching over to the Short Run tasks, which I do not want to do. But your current Long Runs give me NO CHOICE.

PLEASE LET US KNOW when these bad tasks are out of the hopper. This is unacceptable.

I enjoy crunching here, and as I've said before, I love this project. But, I expect better from you guys. And girls :).

PLEASE do not let this happen again.

Cheers

skgiven
Message 26832 - Posted: 8 Sep 2012 | 15:33:42 UTC - in response to Message 26831.

I'm running a long PAOLA_3EKO_8LIGANDS task on a now fairly old GTX470
(2003x64, i7-2600, 8GB, 2nd hdd). Normally I get 99% GPU utilization (or very close to it). For the 3EKO_8LIGANDS task I'm seeing 45% GPU utilization with 2 CPU threads free to support the GPU. When I suspend all CPU tasks the GPU utilization rises to 56%. HT is on, and I can see that the 7% CPU usage is almost half of one CPU thread.

Do you think Boinc could be forcing the task to only use the one thread?
Affinity is for all threads according to Task Manager. With Priority set to high the GPU utilization looks to be about 1% more (so not really significant).

Following a restart I configured the system not to use HT. I also ran the task with Boinc Manager closed. Even at High Priority the task still only ran at 58% GPU utilization. The memory controller load was only at 15%. GPU temp is still low (55°C). CPU usage is around 12% (half of one core). Starting Boinc Manager didn't appear to make any difference.
As a side note, the CPU continually jumps back and forth from ~1600MHz to 3700MHz. Running WUProp and FreeHal did not force the CPU to remain at 3700MHz, but running one Docking task forced the CPU to vary from 3600 to 3500MHz. GPU utilization and memory usage didn't change, however.
With 2 of the 4 CPU cores used for CPU tasks the GPU utilization is ~56%.
With 3 CPU cores used for CPU tasks GPU utilization remained at 56%, suggesting that CPU projects are competing with these GPU tasks in some way (as 6 threads resulted in 45% GPU utilization). This is somewhat similar to what you see at POEM and Donate: running only a few CPU tasks makes little or no impact on the GPU project, but 5/8 threads (or more) reduces GPU performance.
Could this issue be related to the memory controller?

- The task finished in around the same time as previous tasks but there is time variation between these tasks, so nothing further can be concluded.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile The King's Own
Avatar
Send message
Joined: 25 Apr 12
Posts: 32
Credit: 945,543,997
RAC: 0
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwat
Message 26867 - Posted: 11 Sep 2012 | 0:22:50 UTC

I feel guilty when I abort these work units; however, I can run 2 of these per day or 6 to 8 others. If I do the latter my RAC doesn't plummet and my goodwill is not lessened.

I refer you to my "Not Happy" post. My girlfriend doesn't fully comprehend why I spend $100 a month on electricity. She doesn't know what 2 GTS450s, a GTX580 and a 660Ti cost, and I'm not telling her. Nevertheless, I bought 2 of those GPUs solely for this project. I live in the US and would at least get a tax deduction if GPUGrid were based here.

Respectfully,

The King's Own
____________

Luke Formosa
Send message
Joined: 11 Jul 12
Posts: 32
Credit: 33,298,777
RAC: 0
Level
Val
Scientific publications
watwatwatwatwatwatwatwat
Message 26871 - Posted: 11 Sep 2012 | 1:01:45 UTC - in response to Message 26867.
Last modified: 11 Sep 2012 | 1:02:29 UTC

I feel guilty when I abort these work units; however, I can run 2 of these per day or 6 to 8 others. If I do the latter my RAC doesn't plummet and my goodwill is not lessened.

I refer you to my "Not Happy" post. My girlfriend doesn't fully comprehend why I spend $100 a month on electricity. She doesn't know what 2 GTS450s, a GTX580 and a 660Ti cost, and I'm not telling her. Nevertheless, I bought 2 of those GPUs solely for this project. I live in the US and would at least get a tax deduction if GPUGrid were based here.

Respectfully,

The King's Own


Something's not right here. You shouldn't be spending $100 a month for a RAC of just 300,000. I live in Malta, where electricity is at least twice as expensive, and I still manage a global RAC of 800,000 on €30 a month (the running cost of the computer alone) with a single GTX 670 and an i7-3770K (Ivy Bridge), if I leave it on 24/7. I think the problem is that you're using older generation cards. The Nvidia 6-series cards give about twice the performance per watt of the 5- or 4-series ones, so you should switch over completely to 6-series cards. Consider it an investment - within 6 months they'd have paid for themselves in electricity costs. It's the same argument as switching incandescent light bulbs for energy-saving ones :)
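
If you want to sanity-check that payback claim for your own setup, the arithmetic is simple. All the figures below are hypothetical placeholders - plug in your own card price, wattages and tariff:

# All figures are hypothetical, purely to illustrate the arithmetic.
card_cost = 400.0                     # price of the new card, in your currency
old_watts, new_watts = 400.0, 170.0   # total draw replaced vs. the replacement
tariff = 0.20                         # electricity cost per kWh
hours = 24 * 30                       # crunching 24/7 for a month

monthly_saving = (old_watts - new_watts) / 1000.0 * hours * tariff
print(round(monthly_saving, 2), "saved per month")
print(round(card_cost / monthly_saving, 1), "months for the card to pay for itself")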

Ken Florian
Send message
Joined: 4 May 12
Posts: 56
Credit: 1,832,989,878
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26873 - Posted: 11 Sep 2012 | 1:52:57 UTC - in response to Message 26867.

This IS painful.

I spent about $3,600 to build a machine exclusively for GPUGrid crunching. It does nothing else. Ever. The thing runs so hot that I can't run it in my home; it is in my son's basement, a very long way from here. This means I never get to run, say, Flight Simulator or Civ5 on two video cards that a few years ago I could not have dreamed of being able to afford.

I hope they fix it soon.

Ken Florian

werdwerdus
Send message
Joined: 15 Apr 10
Posts: 123
Credit: 1,004,473,861
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26874 - Posted: 11 Sep 2012 | 6:05:02 UTC - in response to Message 26867.
Last modified: 11 Sep 2012 | 6:08:58 UTC

I think you should not feel guilty. It is YOUR hardware and you should choose how it is used. Aborting some work units is really no worse than if somebody decided to crunch on a different project for a while, or there was a power outage, or the server ran out of disk space, or the DNS servers got hacked and the internet didn't work, or.... (hopefully you see my point)

We are VOLUNTEERS. We are giving up our own time, money, and resources for the scientists. If we decide that a certain project, task or workunit is unfit for our individual tastes, time commitment, or energy cost, we have the right to abort it and try something different. That could mean choosing a different project altogether, or just another workunit.

Paden Cast
Send message
Joined: 1 Jun 10
Posts: 1
Credit: 83,369,250
RAC: 0
Level
Thr
Scientific publications
watwatwatwat
Message 26914 - Posted: 16 Sep 2012 | 4:47:55 UTC
Last modified: 16 Sep 2012 | 5:27:22 UTC

I felt I should chime in, since I am seeing nowhere near the times for this project that others are seeing. Granted, the WUs are all jacked up. I'm seeing 55% GPU load, 15% memory load, and 45% power consumption.

My rig:

i5-3750 OC'd to 4.2GHz
16GB 1600MHz RAM OC'd to 2000MHz
2x GTX 670 FTW LE (both in x16 slots)
SSD on USB 3.0

I'm running my first unit of this now. I'm going to guess around 12 hours.

I'm going to guess that we are being limited by the CPU core utilization. My GPU load matches my CPU load almost to a T.

Unless we get a change that lets us choose the CPU core utilization (as in 1:1), I think we are stuck with high run times.

Let me know if I can do anything on my end to test; it would be helpful to include how to do it. I just got Win7 and am having a hell of a time finding things.

M_M
Send message
Joined: 11 Nov 10
Posts: 9
Credit: 53,476,066
RAC: 0
Level
Thr
Scientific publications
watwatwatwat
Message 26956 - Posted: 22 Sep 2012 | 5:40:24 UTC

Still getting these 3-PAOLA_3EKO_8LIGANDS loooooong runs... :(

GPU usage is only between 35 and 39%, and they take over 13h to complete. Other long runs show 95-99% GPU usage and take 6-9 hours. Cuda42, 306.23 drivers, Win7 x64, i7-2600K @ 4.5GHz.



Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26958 - Posted: 22 Sep 2012 | 8:26:57 UTC - in response to Message 26956.
Last modified: 22 Sep 2012 | 8:48:55 UTC

All you can do is increase GPU utilization by about 10% by running fewer CPU tasks (say 4 of 8 threads - see below). That would improve your task performance by around 28%. In terms of overall Boinc credits it's worth it, but it depends on where your priorities are. Other GPU tasks don't require this, so I would only do it if I were getting lots of these tasks, or if I spotted one, and then change the settings back later.
I guess you could write a script to poll for and identify the task being run and change Boinc settings accordingly - something like the sketch below - but that's a pile of work, and we might not see many of these.
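
For what it's worth, a minimal sketch of that idea (untested; boinccmd's path and the BOINC data directory vary by install, so treat those names as assumptions). It polls the client's task list for the PAOLA tag, caps CPU usage via global_prefs_override.xml while one is present, and restores it afterwards:

import subprocess, time

BOINCCMD = "boinccmd"                    # adjust to the full path on your install
OVERRIDE = "global_prefs_override.xml"   # must live in the BOINC data directory

def paola_present():
    # Crude: any PAOLA task in the queue (not just the one running) triggers the cap.
    out = subprocess.run([BOINCCMD, "--get_tasks"],
                         capture_output=True, text=True).stdout
    return "PAOLA_3EKO_8LIGANDS" in out

def set_cpu_pct(pct):
    # Write a minimal prefs override and tell the client to re-read it.
    with open(OVERRIDE, "w") as f:
        f.write("<global_preferences>\n"
                "  <max_ncpus_pct>%d</max_ncpus_pct>\n"
                "</global_preferences>\n" % pct)
    subprocess.run([BOINCCMD, "--read_global_prefs_override"])

current = None
while True:
    wanted = 50 if paola_present() else 100   # 50% = 4 of 8 threads
    if wanted != current:
        set_cpu_pct(wanted)
        current = wanted
    time.sleep(300)                           # poll every five minutes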

What's the performance like on Linux?
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

M_M
Send message
Joined: 11 Nov 10
Posts: 9
Credit: 53,476,066
RAC: 0
Level
Thr
Scientific publications
watwatwatwat
Message 26961 - Posted: 22 Sep 2012 | 12:45:10 UTC - in response to Message 26958.

All you can do is increase GPU utilization by about 10% by running fewer CPU tasks (say 4 of 8 threads - see below). That would improve your task performance by around 28%.


Seems to be right. By running fewer other CPU tasks, GPU utilization for these long-run WUs increases to around 48-50%. Does this mean there is a CPU bottleneck in supplying actual work to the GPU for these particular WUs? I see the CPU side of these tasks is single-threaded. Just wondering: if my CPU were, for example, twice as fast on a single thread, would GPU utilization improve?

werdwerdus
Send message
Joined: 15 Apr 10
Posts: 123
Credit: 1,004,473,861
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26965 - Posted: 22 Sep 2012 | 23:22:59 UTC - in response to Message 26961.
Last modified: 22 Sep 2012 | 23:23:11 UTC

Yes, that seems to be true. I tried underclocking one of my rigs from 2.67GHz to 1.6GHz and the GPU usage dropped from ~38% to ~25%, IIRC.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26973 - Posted: 23 Sep 2012 | 11:41:06 UTC - in response to Message 26965.

Clearly it's partially CPU dependent, but another bottleneck factor is at play too; otherwise, if we stopped running CPU tasks altogether, the GPU utilization would rise to 99% on XP systems.
The candidates are CPU cache, system bus, RAM freq./timings, HDD I/O, the app and Boinc.
If it's CPU cache then the high-end 2nd Gen Intels would allow you to go higher than ~50% GPU utilization.
3rd Generation Intel systems should allow higher GPU utilization if it's bus related; DDR2 vs DDR3 would make a big impact if RAM is a factor (as would higher freq. RAM).
HDD I/O would improve with a good SSD (disk write caching might also make some difference).
The app and Boinc might behave differently on Linux.

Anyway, it's down to the researchers to improve things, if they think it's worthwhile for the project. All we can do is optimize our systems for the apps/tasks that are there to run, if we want to.
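
If anyone wants to test these candidates methodically, a simple logger helps: run it while adding/removing CPU tasks or changing RAM settings and see which change moves the needle. A minimal sketch, assuming nvidia-smi is on the PATH and the driver is new enough to support the CSV query interface:

import subprocess, time

# Logs GPU core and memory-controller utilization every 5 seconds so you
# can correlate changes (CPU task count, RAM speed, ...) with GPU load.
while True:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu,utilization.memory",
         "--format=csv,noheader"],
        capture_output=True, text=True).stdout.strip()
    print(time.strftime("%H:%M:%S"), out)   # one line per GPU, e.g. "45 %, 15 %"
    time.sleep(5)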
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

werdwerdus
Send message
Joined: 15 Apr 10
Posts: 123
Credit: 1,004,473,861
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 26979 - Posted: 23 Sep 2012 | 19:50:28 UTC - in response to Message 26973.
Last modified: 23 Sep 2012 | 20:12:47 UTC

Anyway, it's down to the researchers to improve things, if they think it's worthwhile for the project. All we can do is optimize our systems for the apps/tasks that are there to run, if we want to.


Agree to disagree - I think it is up to the researchers to optimize the tasks for the hardware that is available to them (the volunteers' systems), while there are, and should be, small things we can do to squeeze out that last 5-10%.

Since the majority of current users are having the same issue with only these tasks, there must be some major difference either in the actual work being done (which could explain why it is much more CPU-dependent) or in the coding - something that was either overlooked (accidental) or could not be worked around (if the work being done does not benefit from parallelization, for instance).
____________
XtremeSystems.org - #1 Team in GPUGrid

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 27032 - Posted: 26 Sep 2012 | 22:12:45 UTC - in response to Message 26979.

I'm not sure we even disagree!

While most of us would prefer all tasks to run at 99%, the research doesn't always fall into this apparently perfect model. Unfortunately that concept might even be false; just because the GPU is being used at 99% doesn't mean the WU is best optimized. It might be the case that the code could be changed so that the task utilizes 99% of the GPU, but is slower overall than alternative code that only uses 66% (some things are just done faster on the CPU). Then there is the power consumption consideration, and the inevitable argument of value and chance: which piece of research is the most important? We won't know until a cure for Parkinson's, Alzheimer's or cancer is actually derived. Even then, most research is built on other research, including research that went nowhere...
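
To make that concrete, a toy comparison with made-up numbers:

# Made-up figures, purely to illustrate: the "busier" code path can still
# deliver less science per day than the one with lower GPU utilization.
hours_per_wu_at_99pct = 10.0   # hypothetical: everything done on the GPU
hours_per_wu_at_66pct = 8.0    # hypothetical: serial steps moved to the CPU

print(24 / hours_per_wu_at_99pct, "WUs/day at 99% GPU load")  # 2.4
print(24 / hours_per_wu_at_66pct, "WUs/day at 66% GPU load")  # 3.0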

Different research necessitates different code.

GPUGrid is involved in many research lines, which is fantastic for GPUGrid, GPU research and science as a whole - the GPU is an established and developing tool for crunching, and it facilitates many techniques.
GPU crunching is key to the future of scientific research, especially in such financially austere times.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 27040 - Posted: 30 Sep 2012 | 23:07:55 UTC - in response to Message 26973.
Last modified: 1 Oct 2012 | 17:37:10 UTC

Clearly it's partially CPU dependent, but another bottleneck factor is at play too; otherwise, if we stopped running CPU tasks altogether, the GPU utilization would rise to 99% on XP systems.
The candidates are CPU cache, system bus, RAM freq./timings, HDD I/O, the app and Boinc.
If it's CPU cache then the high-end 2nd Gen Intels would allow you to go higher than ~50% GPU utilization.
3rd Generation Intel systems should allow higher GPU utilization if it's bus related; DDR2 vs DDR3 would make a big impact if RAM is a factor (as would higher freq. RAM).
HDD I/O would improve with a good SSD (disk write caching might also make some difference).
The app and Boinc might behave differently on Linux.

Anyway, it's down to the researchers to improve things, if they think it's worthwhile for the project. All we can do is optimize our systems for the apps/tasks that are there to run, if we want to.


Can't add much, but when I put my GPU into a system with a lesser CPU (IC2D 2.13GHz rather than an i7-2600), the GPU utilization dropped to 37% (when not crunching with the CPU). Both systems were DDR3 dual channel, and I used an SSD with the IC2D to eliminate any possible I/O bottlenecks. I noted that the task was >600MB in size.

The task returned in 22h for full bonus credit, but took twice as long as some tasks for the same credit.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 27095 - Posted: 19 Oct 2012 | 16:21:56 UTC - in response to Message 27040.

Name 3EKO_15_2-PAOLA_3EKO_8LIGANDS-23-100-RND9894_2
Workunit 3734669 (the WU page says "WU cancelled" under errors)
Created 17 Oct 2012 | 0:23:45 UTC
Sent 17 Oct 2012 | 3:56:40 UTC
Received 17 Oct 2012 | 12:07:48 UTC
Server state Over
Outcome Computation error
Client state Compute error
Exit status 98 (0x62)
Computer ID 135026
Report deadline 22 Oct 2012 | 3:56:40 UTC
Run time 28,330.88 (an 8h hairball :( )
CPU time 18,017.27
Validate state Invalid
Credit 0.00
Application version Long runs (8-12 hours on fastest card) v6.16 (cuda42)

Stderr output

<core_client_version>7.0.28</core_client_version>
<![CDATA[
<message>
- exit code 98 (0x62)
</message>
<stderr_txt>
MDIO: cannot open file "restart.coor"
ERROR: file tclutil.cpp line 31: get_Dvec() element 0 (b)
called boinc_finish

</stderr_txt>
]]>


Just saying,
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Send message
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Level
Gly
Scientific publications
watwatwatwatwat
Message 27100 - Posted: 19 Oct 2012 | 21:20:48 UTC - in response to Message 27095.

These are now stopped.
We have found the source of the problem in some scripting called inside the input files. It was quite unexpected.

These functions will now be embedded into the applications for speed.

gdf

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 27106 - Posted: 21 Oct 2012 | 2:34:42 UTC - in response to Message 27100.

These are now stopped.
We have found the source of the problem in some scripting called inside the input files. It was quite unexpected.

These functions will now be embedded into the applications for speed.

gdf


Ya, woohoooo, way to go, whooot whooot, hallelujah and praise be. Oh, sorry, got a little carried away. Seriously though, that's good news.
____________
