Message boards : Wish list : More preferences
I would like to have more preferences to select what I want on specific systems. We now have 4, but with beta apps I can think of at least 5 preference sets I would use. But perhaps this is a BOINC setting and not something at project level?
ID: 32223
Can you explain a bit more?
ID: 32234
I suspect you mean the different work queues?
ID: 32239
Seems like he means Default, Home, Work, School. And he wants more.
ID: 32242
Seems like he means Default, Home, Work, School. And he wants more.

Yes indeed, this is what I would like. Then I could make combinations such as: LR and beta, SR and beta, beta only, LR only, all applications, SR only, and LR and SR. This seems a bit over the top, but with more than 4 rigs and several types of GPUs that are better for one type of WU than another, it would be handy. It would be especially useful when betas are here to run.
____________
Greetings from TJ
ID: 32256
Sorry, that's a global BOINC setting. That's why you see the same 4 profiles on all projects, across various server versions. This is needed because the general settings for these profiles are synchronized across projects. It could be extended, but that would require touching the BOINC core itself... which I don't think D.A. would be fond of. The argument might go like this: "4 profiles have to be enough. If you think you need more, you're probably trying to micro-manage some super-special case. We don't want to bloat BOINC too much in order to please all users with super-special cases and thereby confuse newbies."
ID: 32259
Aha, that's what I thought, thanks for the explanation.
ID: 32266
As MrS pointed out elsewhere, what we really need is to be able to allocate our resources in a completely different way - per computational component/device, i.e. set GPU1 to do short runs, set GPU2 to do long runs, GPU3 can do beta and long, the iGPU a different project only... The same applies to the CPU. As it is, it's far too complex to configure a system to crunch 4 Climate models, 1 Rosetta and 1 WCG WU. It's been demonstrated that a good mixed balance of CPU task types is more productive (typically 10 to 15%), and yet when we update now we get blocks of work from one project queue at a time. Of course only Berkeley could implement this.
ID: 32288
...It's been demonstrated that a good mixed balance of CPU task types is more productive (typically 10 to 15%), and yet when we update now we get blocks of work from one project queue at a time. Of course only Berkeley could implement this.

Yes, when BOINC starts asking projects for work, it asks the projects in order of priority, to fully fill the (min + max-additional) queue, all to minimize the RPCs to the projects (SETI especially was having network problems due to unnecessary RPCs). This leads to work fetch eventually getting lots of work from a single project. The Task Scheduler, however, will try to run tasks from different projects. So, if you happen to have work in your local queue from multiple projects, then even if you got tons of work from 1 project recently, you'll likely see work running from several different projects.

A larger min-buffer will help to add more variety to your running task list, yet you'll still want a small max-additional-buffer to make sure your GPUGrid tasks get bonus credits. I have my buffer settings set at:

min-buffer: 0.05 days
max-additional-buffer: 0.15 days

Those settings appear (to me) to be the best compromise, since a) the min-buffer ensures that I see quite a bit of variety on my 8-CPU computer, b) the max-additional-buffer ensures GPUGrid bonus credits, and c) the difference between them is large enough to minimize RPCs to the projects (making, ideally, 10 requests per day for CPU work, since 0.15 - 0.05 = 0.10 days of work requested).

Regards,
Jacob
ID: 32588
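For reference, the min/max buffer values Jacob describes map onto BOINC's local preferences override file. Below is a minimal sketch of a global_prefs_override.xml (placed in the BOINC data directory) using those values; the element names are BOINC's standard work-buffer preferences, and the comments map them back to the terms used in the post:

```xml
<!-- global_prefs_override.xml: local overrides for this host only;
     these take precedence over the web-based preference profiles. -->
<global_preferences>
    <!-- "min-buffer": minimum days of work to keep queued locally -->
    <work_buf_min_days>0.05</work_buf_min_days>
    <!-- "max-additional-buffer": extra days of work fetched on top of the minimum -->
    <work_buf_additional_days>0.15</work_buf_additional_days>
</global_preferences>
```

After saving the file, using "Read local prefs file" in the BOINC Manager (or restarting the client) makes it take effect.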
I agree with your suggested settings. They are very close to what I generally use and what I recommend for here.
ID: 32631
I agree with your suggested settings. They are very close to what I generally use and what I recommend for here.

Alright, I'm not a BOINC dev, but I am an alpha tester, so I might have some insight... let's see.

When I increase the work cache (beta testing, short runs, for other projects) Boinc tends to predominantly run one CPU project, then another, and doesn't mix and match as well. In fact it does annoying things such as suspending GPU projects to run CPU work and then starting new GPU tasks, or the same tasks on different cards; it's also the case that some CPU projects don't have accurate runtime estimates, and/or use short deadlines, which throws Boinc out. You can easily end up with several days of work even with a short cache, or you can run out of work (no new work downloaded before existing work finishes). It's probably the case that many researchers take shortcuts when releasing tasks at different projects, but there is a reason for that...

I think you're implying that the BOINC server code is very difficult to implement. I get the impression that is true, too. And there are also updates to the server code that many project admins completely ignore. I would suggest that a project that implements a BOINC server must also have the "IT know-how and resources" to maintain it. Too often, that's not the case. I can't fault BOINC for that.

When panic sets in, it's often the case that only one CPU project runs. Then there are projects such as The Lattice Project, whose tasks start off using a few hundred MB of RAM but mid-run start using 7 GB - everything else has to wait, only to resume in bonkers mode.

I see that too. MindModeling has decided that their tasks are going to have only 2-day deadlines, despite my recommendations against that idea. If I wanted to keep cache settings of something like 2 days min buffer, 5 days additional... then any MindModeling tasks would run high-priority, and GPUs could be left idle.
The only "mechanism" I've been able to come up with to combat that is to use an app_config.xml file for MindModeling, to limit max_concurrent, which I've done. From BOINC's perspective, a project may necessarily have short deadlines, and so it is working correctly to give tasks high priority in cases where deadlines would otherwise be missed, even if it means suspending GPU work to get it done.

Regarding The Lattice Project, that's a bit of a different story. If a task is going to use a ton of RAM, then that resource "has been allocated", and other tasks have to wait. That's just the nature of it - BOINC cannot change that, and is working correctly.

I think Boinc needs to be able to handle different configurations a lot better. The project and system requirements for Climate, GPUGrid and fightmalaria are very different, but plenty of users crunch many different and requirement-diverse projects at the same time.

You might be interested in reading the following pages, which describe what BOINC does in terms of work fetch scheduling and task scheduling:
http://boinc.berkeley.edu/trac/wiki/ClientSched
http://boinc.berkeley.edu/trac/wiki/ClientSchedOctTen
Essentially, BOINC changes were recently made to honor resource share better for "differing configurations"; but it still respects "meeting deadlines first", which makes sense.

My main system has 8 logical CPU cores, two NVidia GPUs, an ATI, and an Intel iGPU, and for some projects I run more than one WU on the same GPU at the same time.

Drool... :) Makes my 8-logical, 3-NVidia-GPU system look obsolete! :)

It allows me to contribute to a diverse range of projects. In many ways it's amazing that Boinc can actually accommodate setups like the one I have, given its origins, but such setups are becoming more common and can already be even more complex; some CPUs have 12 threads, some workstations have 2 CPUs, and servers can have 4 or 8 CPUs.
Some systems also have 4 to 7 GPUs, and the future will bring Maxwell GPUs with integrated ARM processors.

Well, I did spot something recently that sort of blew my mind. Did you know you can use BOINCStats (and presumably any other BOINC Account Manager) to set per-device resource shares? I've tested it, and it works! I looked at the website a bit closer, and it looks like you can even set up your own sets of custom preferences (in ADDITION to Global/Home/School/Work... you could have as many as you want, I think!). (This may be the answer that this thread was looking for.) Though, it seems the original poster wanted more than the 3 custom sets of project settings (including work queues), which an Account Manager would not be able to do.

Regarding how BOINC does settings, you have to remember that any computer-specific configuration (such as <exclude_gpu> and app_config.xml) needs to be on the machine, and not on the web. I think BOINC has done a good job of making that distinction, but I do wish the advanced configurations were all available via GUI. But anyway, check into how BOINCStats does per-device resource shares and custom work preferences; you might like it. You can even tell a host "No new tasks" or "Suspend" via their website! It's neat, and I didn't know about it until recently.

...It's just a fact that new projects mean new scientists and existing projects replenish their researchers on a regular basis. Boinc has to be developed in a way that assumes the researchers don't know everything and that busy scientists are not always able to keep up with changes. The friendliness of Boinc might be improved by changing to a unit-orientated system with different work caches and configurations for different hardware units, or groups of hardware. This would also be more granular and thus more accommodating.

BOINC wants to honor resource share, regardless of what apps a project has, and regardless of what hardware you have available to support the project.
It does its absolute best to honor resource share. But when time-crunch issues happen, or resource availability (RAM, CPU) becomes limited, the task scheduler does an okay job of "getting through it", in my opinion. If you believe you have a scenario where BOINC should handle things better, David Anderson is usually responsive to such suggestions on the BOINC Alpha list. If you have an idea to better handle a given scenario, put it out there, and see what happens! (For instance, I recently suggested adding an optional element <plan_class> to app_config.xml, since I'd like to limit the number of CPUs allocated during a multi-threaded MilkyWay task, but MilkyWay frustratingly decided to set it up such that the mt app has the same name as the non-mt app!)

I know I'm defending BOINC quite a bit here, and I hope that doesn't make you upset. If you've got an idea to make BOINC better, bring it to the BOINC Alpha list - BOINC devs are listening!
ID: 32632
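The app_config.xml workaround Jacob describes for limiting concurrent MindModeling tasks would look something like the sketch below. The app name is a placeholder, not MindModeling's real short app name; the actual name has to be taken from the <app> entries in the project's section of client_state.xml, and the folder name shown is illustrative:

```xml
<!-- app_config.xml: goes in the project's folder under the BOINC data
     directory, e.g. projects/mindmodeling.org/ (path is illustrative). -->
<app_config>
    <app>
        <!-- placeholder short app name; substitute the real one -->
        <name>example_app</name>
        <!-- run at most 2 of these tasks at once, so short-deadline
             work cannot take over every CPU core at high priority -->
        <max_concurrent>2</max_concurrent>
    </app>
</app_config>
```

The client picks the file up at restart, or when you re-read the config files from the BOINC Manager.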
Another option for TJ (the original poster), and any others wanting to specify "don't use this app on this GPU", is to implement <exclude_gpu> within their cc_config.xml file, being sure to specify the optional <app> parameter.

<cc_config>
  <log_flags>
    <!-- The 3 flags that are on by default are: file_xfer, sched_ops, task -->
    <file_xfer>1</file_xfer>
    <file_xfer_debug>0</file_xfer_debug>
    <sched_ops>1</sched_ops>
    <sched_op_debug>0</sched_op_debug>
    <task>1</task>
    <task_debug>0</task_debug>
    <unparsed_xml>1</unparsed_xml>
    <work_fetch_debug>1</work_fetch_debug>
    <rr_simulation>0</rr_simulation>
    <rrsim_detail>0</rrsim_detail>
    <cpu_sched>0</cpu_sched>
    <cpu_sched_debug>0</cpu_sched_debug>
    <cpu_sched_status>0</cpu_sched_status>
    <coproc_debug>1</coproc_debug>
    <mem_usage_debug>0</mem_usage_debug>
    <checkpoint_debug>1</checkpoint_debug>
    <http_debug>0</http_debug>
    <http_xfer_debug>0</http_xfer_debug>
    <network_status_debug>0</network_status_debug>
    <scrsave_debug>1</scrsave_debug>
    <notice_debug>0</notice_debug>
    <app_msg_receive>0</app_msg_receive>
    <app_msg_send>0</app_msg_send>
    <async_file_debug>0</async_file_debug>
    <benchmark_debug>0</benchmark_debug>
    <dcf_debug>0</dcf_debug>
    <disk_usage_debug>0</disk_usage_debug>
    <priority_debug>0</priority_debug>
    <gui_rpc_debug>0</gui_rpc_debug>
    <heartbeat_debug>0</heartbeat_debug>
    <poll_debug>0</poll_debug>
    <proxy_debug>0</proxy_debug>
    <slot_debug>0</slot_debug>
    <state_debug>0</state_debug>
    <statefile_debug>0</statefile_debug>
    <suspend_debug>0</suspend_debug>
    <time_debug>0</time_debug>
    <trickle_debug>0</trickle_debug>
  </log_flags>
  <options>
    <!-- ================ TESTING OPTIONS ================ -->
    <!--
    <start_delay>20</start_delay>
    <ncpus>12</ncpus>
    <exclusive_app>NotepadTest01.exe</exclusive_app>
    <exclusive_gpu_app>NotepadTest02.exe</exclusive_gpu_app>
    -->
    <!-- ================ REGULAR OPTIONS ================ -->
    <report_results_immediately>0</report_results_immediately>
    <fetch_on_update>0</fetch_on_update>
    <max_event_log_lines>0</max_event_log_lines>
    <max_file_xfers>10</max_file_xfers>
    <max_file_xfers_per_project>4</max_file_xfers_per_project>
    <exclusive_app>iRacingSim.exe</exclusive_app>
    <exclusive_app>iRacingSim64.exe</exclusive_app>
    <exclusive_app>Aces.exe</exclusive_app>
    <exclusive_app>TmForever.exe</exclusive_app>
    <exclusive_app>TmForeverLauncher.exe</exclusive_app>
    <!-- ================ SETUP GPUS ================ -->
    <use_all_gpus>1</use_all_gpus>
    <!-- ========= SETUP GPU 0: GeForce GTX 660 Ti ========= -->
    <!-- <ignore_nvidia_dev>0</ignore_nvidia_dev> -->
    <!-- Exclude World Community Grid's "Help Conquer Cancer" GPU app (hcc1) on main display - makes graphics slow, even on 660 Ti -->
    <!-- Commenting out, for now, since this round of hcc1 is completed, and next round may not exhibit the issue. -->
    <!--
    <exclude_gpu>
      <url>http://www.worldcommunitygrid.org</url>
      <device_num>0</device_num>
      <app>hcc1</app>
    </exclude_gpu>
    -->
    <!-- Exclude Einstein/Albert, since work from other GPU projects should give enough work to keep this GPU busy. -->
    <exclude_gpu>
      <url>http://einstein.phys.uwm.edu/</url>
      <device_num>0</device_num>
    </exclude_gpu>
    <exclude_gpu>
      <url>http://albert.phys.uwm.edu/</url>
      <device_num>0</device_num>
    </exclude_gpu>
    <!-- Exclude SETI/Beta, since work from other GPU projects should give enough work to keep this GPU busy. -->
    <exclude_gpu>
      <url>http://setiathome.berkeley.edu/</url>
      <device_num>0</device_num>
    </exclude_gpu>
    <exclude_gpu>
      <url>http://setiweb.ssl.berkeley.edu/beta/</url>
      <device_num>0</device_num>
    </exclude_gpu>
    <!-- Exclude Milkyway@Home, since work from other GPU projects should give enough work to keep this GPU busy. -->
    <exclude_gpu>
      <url>http://milkyway.cs.rpi.edu/milkyway/</url>
      <device_num>0</device_num>
    </exclude_gpu>
    <!-- ========= SETUP GPU 1: GeForce GTX 460 ========= -->
    <!-- <ignore_nvidia_dev>1</ignore_nvidia_dev> -->
    <!-- Exclude POEM's "POEM++ OpenCL version" GPU app (poemcl) from a second heterogeneous GPU, since it does not work properly -->
    <!-- Note: Although 320.18 drivers successfully run smalltest_3, the drivers still do not work right with POEM. -->
    <!-- Note: Also, it appears that running POEM only on the GTX 460 does not work. So, it must run on the GTX 660 Ti! -->
    <exclude_gpu>
      <url>http://boinc.fzk.de/poem/</url>
      <device_num>1</device_num>
      <app>poemcl</app>
    </exclude_gpu>
    <!-- Reminder: For GPUGrid.net, if going to run 2-tasks-on-1-GPU, exclude this GPU (it only has 1 GB memory) -->
    <!--
    <exclude_gpu>
      <url>http://www.gpugrid.net</url>
      <device_num>1</device_num>
    </exclude_gpu>
    -->
    <!-- Exclude Einstein/Albert, since work from other GPU projects should give enough work to keep this GPU busy. -->
    <exclude_gpu>
      <url>http://einstein.phys.uwm.edu/</url>
      <device_num>1</device_num>
    </exclude_gpu>
    <exclude_gpu>
      <url>http://albert.phys.uwm.edu/</url>
      <device_num>1</device_num>
    </exclude_gpu>
    <!-- Exclude SETI/Beta, since work from other GPU projects should give enough work to keep this GPU busy. -->
    <exclude_gpu>
      <url>http://setiathome.berkeley.edu/</url>
      <device_num>1</device_num>
    </exclude_gpu>
    <exclude_gpu>
      <url>http://setiweb.ssl.berkeley.edu/beta/</url>
      <device_num>1</device_num>
    </exclude_gpu>
    <!-- Exclude Milkyway@Home, since work from other GPU projects should give enough work to keep this GPU busy. -->
    <exclude_gpu>
      <url>http://milkyway.cs.rpi.edu/milkyway/</url>
      <device_num>1</device_num>
    </exclude_gpu>
    <!-- ========= SETUP GPU 2: GeForce GTS 240 ========= -->
    <!-- <ignore_nvidia_dev>2</ignore_nvidia_dev> -->
    <!-- Exclude World Community Grid's Help Conquer Cancer GPU app -->
    <!-- GPU not supported per https://secure.worldcommunitygrid.org/help/viewTopic.do?shortName=GPU#610 -->
    <exclude_gpu>
      <url>http://www.worldcommunitygrid.org</url>
      <device_num>2</device_num>
      <app>hcc1</app>
    </exclude_gpu>
    <!-- Exclude POEM's "POEM++ OpenCL version" GPU app (poemcl), since it does not work properly -->
    <!-- Also, GPU is not supported, as all tasks immediately error out -->
    <exclude_gpu>
      <url>http://boinc.fzk.de/poem/</url>
      <device_num>2</device_num>
      <app>poemcl</app>
    </exclude_gpu>
    <!-- Exclude GPUGrid.net -->
    <!-- GPU not supported per http://www.gpugrid.net/forum_thread.php?id=2507 -->
    <exclude_gpu>
      <url>http://www.gpugrid.net/</url>
      <device_num>2</device_num>
    </exclude_gpu>
    <!-- Exclude Milkyway@Home -->
    <!-- GPU not supported, as all tasks immediately error out -->
    <exclude_gpu>
      <url>http://milkyway.cs.rpi.edu/milkyway/</url>
      <device_num>2</device_num>
    </exclude_gpu>
  </options>
</cc_config>
ID: 32633
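Stripped of all the logging flags and host-specific options, the essential <exclude_gpu> pattern in the file above is quite small. A minimal sketch, reusing the POEM entry from that example (project URL, device number and app name taken from it):

```xml
<cc_config>
    <options>
        <!-- Keep the poemcl app off GPU device 1; without the optional
             <app> element, the whole project would be excluded from it. -->
        <exclude_gpu>
            <url>http://boinc.fzk.de/poem/</url>
            <device_num>1</device_num>
            <app>poemcl</app>
        </exclude_gpu>
    </options>
</cc_config>
```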
Well, I did spot something recently that sort of blew my mind. Did you know you can use BOINCStats (and presumably any other BOINC Account Manager) to set per-device resource shares? I've tested it, and it works!

Hello Jacob, I now have time to experiment with this, but I don't know how. I clicked on everything that is clickable in BOINCStats, but I can nowhere find any setting that would act like default/school/home/work. I am a screwdriver guy, not a software or programmer type. Can you help a little to get me started? Don't spend too much time on it, as I know you are as busy as I am. Thanks.
____________
Greetings from TJ
ID: 32752
First of all, to get set up, you have to:
ID: 32754
Thanks for the thorough explanation, Jacob.
ID: 32756
That's the thing -- I don't think projects support "setting project-specific settings in a custom setting profile". "GPUGrid preferences" only knows about Global, Home, School, Work. It has no way to know about "Test".
ID: 32759
I see that I must manually add my projects, but that is not working; I got the messages "Email address not unique" and "Incorrect password".
ID: 32761
The intent of an Account Manager, I believe, is to manage the BOINC client-project attachments via a single web interface. So yes, you hook up the Account Manager to the projects, then you hook up your client to the Account Manager, and then BOINC can auto-attach projects, auto-detach projects, and set things like No New Tasks and Suspend, all via the Account Manager interface.
ID: 32763
You are right, Jacob. Indeed I can make as many venue settings as I like, but they will not be visible, and thus not selectable, in any project.
ID: 32770