
Message boards : Number crunching : Elapsed Time vs CPU time

Hypernova
Joined: 16 Nov 10
Posts: 22
Credit: 24,712,746
RAC: 0
Message 19639 - Posted: 23 Nov 2010 | 10:04:12 UTC
Last modified: 23 Nov 2010 | 10:04:38 UTC

Why is there such a big difference between the elapsed time and the CPU time?
This is an example of a task on the Jupiter device, which runs a GTX 285. Drivers are 260.99.

Elapsed Time: 27,489.40 seconds
CPU Time: 4,320.35 seconds

To me this looks like a very low efficiency of 15.7%.
The best case I have is around 30%, but only once. There are also worse cases.

Is it normal?
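
(For reference, that percentage is just the ratio of the two figures quoted above: 4,320.35 s / 27,489.40 s ≈ 0.157, i.e. about 15.7% of the elapsed time was charged as CPU time.)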

Saenger
Joined: 20 Jul 08
Posts: 134
Credit: 23,657,183
RAC: 0
Message 19642 - Posted: 23 Nov 2010 | 12:09:36 UTC - in response to Message 19639.

Elapsed Time: 27,489.40 seconds
CPU Time: 4,320.35 seconds

Is it normal?

Yes.
The app mainly runs on the GPU; it only needs a few cycles on the CPU, and only those are counted in that measure.
____________
Greetings from Saenger

For questions about BOINC, look in the BOINC-Wiki
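
As a rough illustration of how the two measures can diverge (a generic sketch, not the GPUGRID application's code; the variable names are made up for the example): a host process that mostly waits while a coprocessor works accumulates wall-clock (elapsed) time but very little CPU time.

    #include <chrono>
    #include <cstdio>
    #include <ctime>
    #include <thread>

    int main() {
        // Wall-clock ("elapsed") timer.
        auto wall_start = std::chrono::steady_clock::now();
        // Process CPU time (on POSIX, std::clock only ticks while the process is actually executing).
        std::clock_t cpu_start = std::clock();

        for (int step = 0; step < 100; ++step) {
            // Stand-in for "the GPU is crunching": the host just waits,
            // so elapsed time grows while almost no CPU time is charged.
            std::this_thread::sleep_for(std::chrono::milliseconds(50));

            // A little host-side bookkeeping per step; this is what costs CPU time.
            volatile double work = 0.0;
            for (int i = 0; i < 100000; ++i) work += i * 1e-9;
        }

        double wall_s = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - wall_start).count();
        double cpu_s = double(std::clock() - cpu_start) / CLOCKS_PER_SEC;
        std::printf("elapsed: %.2f s, CPU: %.2f s (%.1f%% of elapsed)\n",
                    wall_s, cpu_s, 100.0 * cpu_s / wall_s);
        return 0;
    }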

Hypernova
Joined: 16 Nov 10
Posts: 22
Credit: 24,712,746
RAC: 0
Message 19658 - Posted: 24 Nov 2010 | 6:23:16 UTC - in response to Message 19642.

Elapsed Time: 27,489.40 seconds
CPU Time: 4,320.35 seconds

Is it normal?

Yes.
The app mainly runs on the GPU; it only needs a few cycles on the CPU, and only those are counted in that measure.


So this measure is the CPU/GPU ratio. In that case I would say that the CPU usage still remains very high. In a perfect world the whole WU would be downloaded to the GPU (if there is enough local memory), would run on the board until completion with no CPU interference, and when finished would be uploaded back to the CPU and a new WU downloaded. That should take a few seconds of CPU, but not more than one hour of CPU time (3,600 seconds is one hour).
These CPU usage values mean that the CPU is doing real work while the GPU crunches. Maybe data is being exchanged and written back to the HDD as the crunching goes on, or something else.
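
To put that "perfect world" picture in concrete terms (a generic CUDA sketch, not GPUGRID's actual code; kernel and variable names are invented): the data is copied to the card once, a kernel is launched for each simulation step, and the result is copied back at the end. What the quoted CPU time mostly reflects is how the host waits between those launches; a driver that spin-polls charges real CPU time, while a yielding or blocking wait charges almost none.

    #include <cuda_runtime.h>
    #include <cstdio>

    // Hypothetical kernel standing in for one simulation step.
    __global__ void do_one_step(float* state, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) state[i] += 0.001f * state[i];   // dummy update
    }

    int main() {
        const int n = 1 << 20;
        float* h_state = new float[n]();
        float* d_state = nullptr;
        cudaMalloc(&d_state, n * sizeof(float));

        // "Download the whole WU to the GPU": one copy at the start.
        cudaMemcpy(d_state, h_state, n * sizeof(float), cudaMemcpyHostToDevice);

        for (int step = 0; step < 10000; ++step) {
            do_one_step<<<(n + 255) / 256, 256>>>(d_state, n);
            // The CPU must now wait for the GPU. Whether this wait spins
            // (burning CPU time) or yields/blocks (nearly free) depends on
            // the scheduling flags chosen with cudaSetDeviceFlags(), and is
            // a large part of why CPU time per task varies between hosts.
            cudaDeviceSynchronize();
        }

        // "Upload the result back to the CPU" at the end.
        cudaMemcpy(h_state, d_state, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d_state);
        delete[] h_state;
        std::printf("done\n");
        return 0;
    }

If the host polls busily over many thousands of such steps, the CPU seconds add up even though the initial and final copies really do take only a few seconds.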

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,239,065,968
RAC: 3,161,193
Message 19659 - Posted: 24 Nov 2010 | 7:56:36 UTC - in response to Message 19658.

So this measure is the CPU/GPU ratio. In that case I would say that the CPU usage still remains very high.

The GF110 is practically a 'primitive CPU' containing 16 multiprocessors with 32 shader units each (16*32 = 512 CUDA cores in nVidia terminology). I think that dedicating even a whole CPU core to support 512 GPU cores, in order to achieve maximum performance, is a rewarding sacrifice.

In a perfect world the whole WU would be downloaded to the GPU (if there is enough local memory), would run on the board until completion with no CPU interference, and when finished would be uploaded back to the CPU and a new WU downloaded. That should take a few seconds of CPU, but not more than one hour of CPU time (3,600 seconds is one hour).

While in the real world the GPU is still a coprocessor (actually, a lot of them, as I mentioned above), it cannot do everything on its own, whether it is calculating a 2D projection of a 3D (game) scene or doing some 3D scientific calculation for GPUGRID.
Loading the data and the code onto the GPU takes only a few seconds, just like unloading it, so that wouldn't be an hour.

These CPU usage values mean that the CPU is doing real work while the GPU crunches. Maybe data is being exchanged and written back to the HDD as the crunching goes on, or something else.

That's correct. As far as I know, some double precision calculation is needed for crunching these WUs, and this is done by the CPU (because double precision on the GTX cards is slowed down by nVidia).
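
Purely as an illustration of that kind of split (I don't know what GPUGRID actually offloads; the kernel and variable names here are invented): the GPU can produce the bulk of the per-element terms in single precision while the CPU finishes the accuracy-sensitive accumulation in double precision, which is one plausible place real CPU time goes.

    #include <cuda_runtime.h>
    #include <cstdio>

    // Hypothetical: GPU computes per-element terms in single precision.
    __global__ void partial_terms(const float* x, float* term, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) term[i] = x[i] * x[i];   // stand-in for a real energy/force term
    }

    int main() {
        const int n = 1 << 20;
        float *d_x = nullptr, *d_term = nullptr;
        float* h_term = new float[n];
        cudaMalloc(&d_x, n * sizeof(float));
        cudaMalloc(&d_term, n * sizeof(float));
        cudaMemset(d_x, 0, n * sizeof(float));

        partial_terms<<<(n + 255) / 256, 256>>>(d_x, d_term, n);
        cudaMemcpy(h_term, d_term, n * sizeof(float), cudaMemcpyDeviceToHost);

        // The CPU does the final accumulation in double precision,
        // avoiding the GeForce cards' reduced double-precision throughput.
        double total = 0.0;
        for (int i = 0; i < n; ++i) total += h_term[i];
        std::printf("total = %f\n", total);

        cudaFree(d_x);
        cudaFree(d_term);
        delete[] h_term;
        return 0;
    }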

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 19661 - Posted: 24 Nov 2010 | 11:40:21 UTC - in response to Message 19659.
Last modified: 24 Nov 2010 | 11:41:49 UTC

CPU time is the time used on one CPU core/thread, so if you have an 8-thread CPU, the actual time spent using the entire CPU would need to be divided by 8. If that GPU is in an 8-core system, that works out at 9 min of whole-CPU time per 7.6 h of GPU time. There is no point looking at the GPU as a unit and the CPU as separate cores; it would be no better than saying each of the GPU's 240 CUDA cores uses 2.25 s of CPU time.
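
(Checking those figures against the numbers from the first post: 4,320.35 s of CPU time / 8 threads ≈ 540 s ≈ 9 min of whole-CPU time, against 27,489.40 s ≈ 7.6 h of elapsed time; and 540 s / 240 CUDA cores = 2.25 s per CUDA core.)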

Hypernova
Joined: 16 Nov 10
Posts: 22
Credit: 24,712,746
RAC: 0
Message 19663 - Posted: 24 Nov 2010 | 17:30:44 UTC - in response to Message 19661.

Thanks for your replies. All clear now. I hope the GTX 580, which is double precision capable, will do better, but I agree that the CPU contribution on a 12-thread CPU remains minimal.


