Closed: P5-133XL closed this issue 10 years ago.
It uses about 6.5 of my 8 cores. Gromacs is not capable of fully using all of the cores. I know the desktop client does, but I believe Gromacs may be wasting some cycles busy-looping, which I've eliminated in the NaCl Gromacs core.
It also depends on what else is going on in your system and how many of your CPU cores are actually just hyperthreads.
Can you provide more details, such as the OS, CPU type, and number of cores? Is your system otherwise idle?
I am actually running it on two machines; both are quad-cores without HT. Two cores are used by Nvidia GPUs and two cores by NaCl, which is really convenient here. However, on both machines I stopped the GPU slots, leaving all 4 cores available. I then refreshed the NaCl client, which forced a new download, and it still would not go beyond 2 cores without creating a new NaCl instance.
If refreshing the browser doesn't do it, how can I go beyond two cores for NaCl?
P.S. Yes, the machines are basically idle when folding.
My configs are:
Machine #1: Q9450@stock (4 cores, no HT), 4GB RAM, 2x GTX 580@800MHz, Win2008 Server x64, Nvidia 314.07, V7.4.2, Chrome 33.0.1750.70 beta-m
Machine #2: Q6600@stock (4 cores, no HT), 3GB RAM, GTX 580@800MHz + GTX 480@800MHz, Win8.1 x64, Nvidia 327.23, V7.4.2, Chrome 33.0.1750.70 beta-m
Please read my comment here about the CPU Usage for NaCl64 (https://github.com/FoldingAtHome/fah-nacl-client/issues/13#issuecomment-35065785).
Please open your browser's debug console by pressing F12, then clicking on Console. Then reload the page. After it has started running the WU again, copy and paste the log here.
DEBUG: Config: user = P5_133XL main.js:73 DEBUG: Config: team = 10047 main.js:73 DEBUG: Config: passkey = **** main.js:73 DEBUG: Config: power = full main.js:73 DEBUG: NaCl module loading main.js:73 DEBUG: stats: {"team_rank":115,"earned":124942413,"url":"http://fah-web.stanford.edu/cgi-bin/main.py?qtype=userpage&username=P5_133XL","contributed":124941326,"team_url":"http://www.storageforum.net","team_urllogo":"http://www.storageforum.net/forum/images/misc/storageforum_logo.png","team_name":"StorageForum_net","team_total":452341969} main.js:73 DEBUG: load progress: 0.0% (0 of 18000000 bytes) main.js:73 DEBUG: load progress: 0.0% (0 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (853901 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (1686363 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (2946525 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (3738442 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (4802794 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (6016695 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (7261751 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (7888375 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (8985940 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (10078515 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (10654113 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (11466379 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (12629212 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (13722277 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (14254080 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (15460386 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (16343792 of 18446744073709552000 
bytes) main.js:73 DEBUG: load progress: 100.0% (17125192 of 17125192 bytes) main.js:73 DEBUG: NaCl module loaded main.js:73 DEBUG: NaCl module responded main.js:73 DEBUG: WS: {"client_id":"0x4d7c7df053046bf5","threads":4,"version":"8.0.0","type":"NACL","os":"NACL","user":"P5_133XL","team":"10047","passkey":"redacted","ts":"2014-02-20T11:53:29Z","ws":"143.89.28.86","project":2981} main.js:73 Resource interpreted as Image but transferred with MIME type text/plain: "data:;base64, iVBORw0KGgoAAAANSUhEUgAAASwAAAClCAYAAADmtcDRAAAKPWlDQ1BpY2MAA…
// I deleted the base64-encoded image data from the log.
DEBUG: WU: {"client_id":"0x4d7c7df053046bf5","threads":4,"version":"8.0.0","type":"NACL","os":"NACL","user":"P5_133XL","team":"10047","passkey":"redacted","ts":"2014-02-20T11:53:29Z","ws":"143.89.28.86","project":2981,"server_version":703,"core":176,"core_version":227,"unit_id":"0x000000210893a18a52e6da82c76838cf","run":0,"clone":24,"gen":11,"wu_ts":"2014-02-20T11:53:29Z","deadline":"2014-02-21T11:53:29Z","timeout":"2014-02-20T14:17:29Z","credit":30,"compression":"bzip2","checksum":"yqa1XXY3F7jPN8wVf7CFr5GjCQmEXlYvOkcU7wk+Wf4="} main.js:73 DEBUG: core: checksum verified main.js:73 DEBUG: core: unpacking: frame11.tpr main.js:73 DEBUG: :-) G R O M A C S (-: main.js:73 DEBUG: Groningen Machine for Chemical Simulation main.js:73 DEBUG: :-) VERSION 4.6.5 (-: main.js:73 DEBUG: Contributions from Mark Abraham, Emile Apol, Rossen Apostolov, Herman J.C. Berendsen, Aldert van Buuren, Pär Bjelkmar, Rudi van Drunen, Anton Feenstra, Gerrit Groenhof, Christoph Junghans, Peter Kasson, Carsten Kutzner, Per Larsson, Pieter Meulenhoff, Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz, Michael Shirts, Alfons Sijbers, Peter Tieleman, main.js:73 DEBUG: Berk Hess, David van der Spoel, and Erik Lindahl. main.js:73 DEBUG: Copyright (c) 1991-2000, University of Groningen, The Netherlands. Copyright (c) 2001-2012,2013, The GROMACS development team at Uppsala University & The Royal Institute of Technology, Sweden. check out http://www.gromacs.org for more information. main.js:73 DEBUG: main.js:73 DEBUG: :-) Gromacs (-: main.js:73 DEBUG: Option------------------------------------------------------------ main.js:73 DEBUG: -s frame11.tpr Input Run input file: tpr tpb tpa -o traj.trr Output Full precision trajectory: trr trj cpt -x traj.xtc Output, Opt. Compressed trajectory (portable xdr format)-cpi state.cpt Input, Opt. Checkpoint file-cpo state.cpt Output, Opt. Checkpoint file -c confout.gro Output Structure file: gro g96 pdb etc. 
-e ener.edr Output Energy file -g md.log Output Log file-dhdl dhdl.xvg Output, Opt. xvgr/xmgr file-field field.xvg Output, Opt. xvgr/xmgr file-table table.xvg Input, Opt. xvgr/xmgr file-tabletf tabletf.xvg Input, Opt. xvgr/xmgr file-tablep tablep.xvg Input, Opt. xvgr/xmgr file-tableb table.xvg Input, Opt. xvgr/xmgr file-rerun rerun.xtc Input, Opt. Trajectory: xtc trr trj gro g96 pdb cpt-tpi tpi.xvg Output, Opt. xvgr/xmgr file-tpid tpidist.xvg Output, Opt. xvgr/xmgr file -ei sam.edi Input, Opt. ED sampling input -eo edsam.xvg Output, Opt. xvgr/xmgr file -j wham.gct Input, Opt. General coupling stuff -jo bam.gct Output, Opt. General coupling stuff-ffout gct.xvg Output, Opt. xvgr/xmgr file-devout deviatie.xvg Output, Opt. xvgr/xmgr file-runav runaver.xvg Output, Opt. xvgr/xmgr file -px pullx.xvg Output, Opt. xvgr/xmgr file -pf pullf.xvg Output, Opt. xvgr/xmgr file -ro rotation.xvg Output, Opt. xvgr/xmgr file -ra rotangles.log Output, Opt. Log file -rs rotslabs.log Output, Opt. Log file -rt rottorque.log Output, Opt. Log file-mtx nm.mtx Output, Opt. Hessian matrix -dn dipole.ndx Output, Opt. Index file-multidir rundir Input, Opt., Mult. Run directory-membed membed.dat Input, Opt. Generic data file -mp membed.top Input, Opt. Topology file -mn membed.ndx Input, Opt. 
Index file main.js:73 DEBUG: Option------------------------------------------------------ main.js:73 DEBUG: -[no]h bool no Print help info and quit main.js:73 DEBUG: -[no]version bool no Print version info and quit main.js:73 DEBUG: -nice int 0 Set the nicelevel main.js:73 DEBUG: -deffnm string Set the default filename for all file options main.js:73 DEBUG: -xvg enum xmgrace xvg plot formatting: xmgrace, xmgr or none main.js:73 DEBUG: -[no]pd bool no Use particle decompostion main.js:73 DEBUG: -dd vector 0 0 0 Domain decomposition grid, 0 is optimize main.js:73 DEBUG: -ddorder enum interleave DD node order: interleave, pp_pme or cartesian main.js:73 DEBUG: -npme int -1 Number of separate nodes to be used for PME, -1 main.js:73 DEBUG: is guess main.js:73 DEBUG: -nt int 4 Total number of threads to start (0 is guess) main.js:73 DEBUG: -ntmpi int 0 Number of thread-MPI threads to start (0 is guess) main.js:73 DEBUG: -ntomp int 0 Number of OpenMP threads per MPI process/thread main.js:73 DEBUG: to start (0 is guess) main.js:73 DEBUG: -ntomp_pme int 0 Number of OpenMP threads per MPI process/thread main.js:73 DEBUG: to start (0 is -ntomp) main.js:73 DEBUG: -pin enum auto Fix threads (or processes) to specific cores: main.js:73 DEBUG: auto, on or off main.js:73 DEBUG: -pinoffset int 0 The starting logical core number for pinning to main.js:73 DEBUG: cores; used to avoid pinning threads from different mdrun instances to the same core main.js:73 DEBUG: different mdrun instances to the same core main.js:73 DEBUG: -pinstride int 0 Pinning distance in logical cores for threads, main.js:73 DEBUG: use 0 to minimize the number of threads per physical core main.js:73 DEBUG: physical core main.js:73 DEBUG: -gpu_id string List of GPU device id-s to use, specifies the main.js:73 DEBUG: per-node PP rank to GPU mapping main.js:73 DEBUG: -[no]ddcheck bool yes Check for all bonded interactions with DD main.js:73 DEBUG: -rdd real 0 The maximum distance for bonded interactions with 
main.js:73 DEBUG: DD (nm), 0 is determine from initial coordinates main.js:73 DEBUG: -rcon real 0 Maximum distance for P-LINCS (nm), 0 is estimate main.js:73 DEBUG: -dlb enum auto Dynamic load balancing (with DD): auto, no or yes main.js:73 DEBUG: -dds real 0.8 Minimum allowed dlb scaling of the DD cell size main.js:73 DEBUG: -gcom int -1 Global communication frequency main.js:73 DEBUG: -nb enum auto Calculate non-bonded interactions on: auto, cpu, main.js:73 DEBUG: gpu or gpu_cpu main.js:73 DEBUG: -[no]tunepme bool yes Optimize PME load between PP/PME nodes or GPU/CPU main.js:73 DEBUG: -[no]testverlet bool no Test the Verlet non-bonded scheme main.js:73 DEBUG: -[no]v bool no Be loud and noisy main.js:73 DEBUG: -[no]compact bool yes Write a compact log file main.js:73 DEBUG: -[no]seppot bool no Write separate V and dVdl terms for each main.js:73 DEBUG: interaction type and node to the log file(s) main.js:73 DEBUG: -pforce real -1 Print all forces larger than this (kJ/mol nm) main.js:73 DEBUG: -[no]reprod bool no Try to avoid optimizations that affect binary main.js:73 DEBUG: reproducibility main.js:73 DEBUG: -cpt real 15 Checkpoint interval (minutes) main.js:73 DEBUG: -[no]cpnum bool no Keep and number checkpoint files main.js:73 DEBUG: -[no]append bool yes Append to previous output files when continuing main.js:73 DEBUG: from checkpoint instead of adding the simulation part number to all file names main.js:73 DEBUG: part number to all file names main.js:73 DEBUG: -nsteps step -2 Run this number of steps, overrides .mdp file main.js:73 DEBUG: option main.js:73 DEBUG: -maxh real -1 Terminate after 0.99 times this time (hours) main.js:73 DEBUG: -multi int 0 Do multiple simulations in parallel main.js:73 DEBUG: -replex int 0 Attempt replica exchange periodically with this main.js:73 DEBUG: period (steps) main.js:73 DEBUG: -nex int 0 Number of random exchanges to carry out each main.js:73 DEBUG: exchange interval (N^3 is one suggestion). 
-nex zero or not specified gives neighbor replica main.js:73 DEBUG: zero or not specified gives neighbor replica exchange. main.js:73 DEBUG: exchange. main.js:73 DEBUG: -reseed int -1 Seed for replica exchange, -1 is generate a seed main.js:73 DEBUG: -[no]ionize bool no Do a simulation including the effect of an X-Ray main.js:73 DEBUG: bombardment on your system main.js:73 DEBUG: main.js:73 DEBUG: core: steps: 264000 -> 288000 main.js:73 DEBUG: Reading file Using Compiled acceleration: main.js:73 DEBUG: Can not set thread affinities on the current platform. On NUMA systems this main.js:73 DEBUG: can cause performance degradation. If you think your platform should support setting affinities, contact the GROMACS developers. main.js:73 DEBUG: setting affinities, contact the GROMACS developers. main.js:73 DEBUG: main.js:73 DEBUG: Can not set thread affinities on the current platform. On NUMA systems this main.js:73 DEBUG: can cause performance degradation. If you think your platform should support setting affinities, contact the GROMACS developers. main.js:73 DEBUG: setting affinities, contact the GROMACS developers. main.js:73 DEBUG: main.js:73 DEBUG: Can not set thread affinities on the current platform. On NUMA systems this main.js:73 DEBUG: can cause performance degradation. If you think your platform should support setting affinities, contact the GROMACS developers. main.js:73 DEBUG: setting affinities, contact the GROMACS developers. main.js:73 DEBUG: main.js:73 DEBUG: Can not set thread affinities on the current platform. On NUMA systems this main.js:73 DEBUG: can cause performance degradation. If you think your platform should support setting affinities, contact the GROMACS developers. main.js:73 DEBUG: setting affinities, contact the GROMACS developers. 
main.js:73 DEBUG: core: fcCheckPointParallel() main.js:73 DEBUG: core: fcCheckPointParallel() main.js:73 DEBUG: core: fcCheckPointParallel() main.js:73 DEBUG: core: fcCheckPointParallel() main.js:73 event.returnValue is deprecated. Please use the standard event.preventDefault() instead. jquery-1.10.2.min.js:3
Okay, I have saved the log file for an entire run, 8852 lines. Do you want the entire log file or something more specific?
This is what I am getting on Windows 8 64-bit with Chrome 32.0.1700.107 on an i7-3840QM (4 FPUs shared among 8 CPUs), where NaCl64 uses only ~25% of the CPU despite the system being idle. [3900:2596:0220/145949:INFO:CONSOLE(73)] "DEBUG: WS: {"client_id":"0x5df4059a52f18df3","threads":8,"version":"8.0.0","type":"NACL","os":"NACL","user":"PantherX","team":"69411","passkey":"REDACTED","ts":"2014-02-20T11:59:47Z","ws":"143.89.28.86","project":2981}", source: http://folding.stanford.edu/nacl/js/main.js (73)
Ok, so Chrome is reporting the correct number of cores. That narrows it down. @PantherX, are you running on Windows as well? I suspect this is a Windows-only issue.
Correct @jcoffland , both my systems are Windows 8 64-bit fully patched.
Unfortunately I don't have a > 2 core Windows machine at my disposal for testing. This is going to take some more investigation.
Looks like I need to set up an alpha-testing client so we can try some things out.
I have a true quad-core -- an Intel Q8300 -- running Windows. Nacl64.exe is using 52-56%, the System Idle Process is using 30-35%. A couple of chrome.exe processes are using ~2%. Core_17 is using ~1%, except that it sometimes briefly spikes to maybe 18%. This is the newer App Store version (1.00).
Opening a second client window decreases the idle time to about 10%, and the two nacl64 processes now draw 34-38% each, so there's a small benefit in CPU utilization. Whether the QRB calls it a PPD benefit or detriment is yet to be determined.
If I were given a choice, I would prefer the slider to have one position for each available core (the leftmost being zero and the rightmost being full usage). That way I could use it to explicitly choose the number of cores to fold with.
It has the benefit of making such CPU usage decisions much clearer to all users and the programming logic much simpler.
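The proposed slider could be sketched like this (all names here are hypothetical and illustrative; this is not actual fah-nacl-client code):

```javascript
// Hypothetical sketch of the proposed slider: one notch per core,
// leftmost notch = 0 cores (off), rightmost = all cores.
// threadsForSlider and its parameters are illustrative names only.
function threadsForSlider(notch, coreCount) {
  // Clamp the notch into [0, coreCount]; each notch selects one more core.
  return Math.max(0, Math.min(notch, coreCount));
}

// On a 4-core machine the slider would have 5 positions: 0..4 cores.
console.log(threadsForSlider(2, 4)); // fold with 2 cores
console.log(threadsForSlider(9, 4)); // out-of-range input clamps to 4
```

With this mapping, the number of folding threads would be exactly what the user picked rather than derived from a percentage.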
@P5-133XL I don't agree that the logic is much simpler. Consider that we must still support throttling for single-core machines. We would need both mechanisms.
Regardless, the problem still remains that even when set to 100% it's not at full utilization. Part of the issue is that Gromacs threads wait on one another. This naturally produces a utilization of less than 100%. However, I'm getting about 80% utilization on my 8-core Linux box.
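As a rough back-of-the-envelope illustration of why waiting threads cap utilization (the 20% wait fraction below is an assumption for the example, not a measured Gromacs figure):

```javascript
// Rough model: each of N threads is on-CPU (1 - waitFraction) of the time,
// so the aggregate load is about N * (1 - waitFraction) cores.
// waitFraction is an illustrative assumption, not a measurement.
function busyCores(threadCount, waitFraction) {
  return threadCount * (1 - waitFraction);
}

// With 8 threads each waiting 20% of the time, about 6.4 cores stay busy,
// i.e. roughly the ~80% utilization seen on the 8-core Linux box.
console.log(busyCores(8, 0.2));
```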
I think I may have fixed this. Please confirm.
I DL'ed the new client from the Google store. Nope, it still operates exactly the way it used to: it uses approx. 2 cores out of 4 while on Full.
Not on my machine. I moved it up to Full and it immediately pulled all available CPU, about four cores worth.
Linux Mint 16, Google Chrome beta.
Using the URL (not the Chrome Store app), I finished all the WUs, refreshed it, and voila, it now works perfectly fine on Windows 7 and Windows 8 64-bit systems using Chrome version 32.0.1700.107 (slider set to Full):
2 CPUs -> CPU usage is now 97% to 99% when the system is idle. Previously, it was between 75% and 80%.
4 CPUs -> CPU usage is now 97% to 99% when the system is idle. Previously, it was roughly 50%.
8 CPUs -> CPU usage is now roughly 90% when the system is idle (both systems have Process Lasso, where I have configured the affinity to 7 CPUs, leaving 1 CPU free). Previously, it was between 20% and 25%.
Moreover, on my 8-CPU system, I tested the slider settings and they now work perfectly fine:
Light -> 20% CPU usage
Medium -> 50% CPU usage
Full -> 88% CPU usage (this is expected, as I have configured the affinity to 7 CPUs only)
I do believe that NaCl64 can use 100% of the CPU if it is set to full and the system is idle, at least on Windows.
Still only using 50% on my Win 7 32-bit 4-core i3, on the NaCl page, using v101.
Strangely, if I open a second tab with another NaCl page, performance drops considerably on the first tab: the ETA goes from 16 minutes to 24 minutes, and it's 24 minutes on the 2nd tab as well. How does that happen when it's only using 50%?
This lingering problem appears to occur only when running on 32-bit Windows systems, as PantherX pointed out in the forum.
Windows 7 Enterprise, 32-bit, latest everything: chrome.exe uses 50%.
Windows 7 Enterprise, 64-bit, latest everything: NaCl64.exe uses 90%.
Can someone using 32-bit Windows post a log? I'm looking for the line that reports the number of threads detected by Chrome.
Seems to be fixed.
Win 7 32 bit, i3-3220 @ 3.3GHz. 2 cores, 4 threads. Uses 50% CPU. Is this the expected result?
Would someone else please confirm on their 32 bit Windows system?
For me, NaCl64 is still getting only 54% utilization. The only other thing running is V7 with a single active slot for the GPU. Core_17 is using 1-2%.
*** System ***
CPU: Intel(R) Core(TM)2 Quad CPU Q8300 @ 2.50GHz
CPU ID: GenuineIntel Family 6 Model 23 Stepping 10
CPUs: 4
Memory: 8.00GiB
Free Memory: 6.30GiB
Threads: WINDOWS_THREADS
OS Version: 6.1
OS: Windows 7 Home Premium
OS Arch: AMD64
GPUs: 2
GPU 0: UNSUPPORTED: R9600 Pro primary (Asus OEM for HP)
GPU 1: ATI:5 Bonaire XT [Radeon HD 7790]
DEBUG: Config: user = borden.b main.js:73 DEBUG: Config: team = 131 main.js:73 DEBUG: Config: passkey = **** main.js:73 DEBUG: Config: power = full main.js:73 DEBUG: NaCl module loading main.js:73 DEBUG: Status: downloading: Downloading the Folding@home software in your Web browser. On your first visit this can take awhile. main.js:73 DEBUG: load progress: 0.0% (0 of 18000000 bytes) main.js:73 DEBUG: load progress: 0.0% (0 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (1047432 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (2399668 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (3276106 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (4157092 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (5177122 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (6239927 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (7176192 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (8054988 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (9231454 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (10159134 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (11040535 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (11181643 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (11903589 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (12681216 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (13628178 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (14516224 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (15690493 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (16856427 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 100.0% (17125192 of 17125192 bytes) main.js:73 DEBUG: stats: 
{"team_rank":140,"earned":52739416,"url":"http://fah-web.stanford.edu/cgi-bin/main.py?qtype=userpage&username=borden.b","contributed":52739416,"team_url":"http://www.thegenomecollective.com","team_urllogo":"http://www.thegenomecollective.com/tgclogostanford.jpg","team_name":"The Genome Collective","team_total":323809581} main.js:73 DEBUG: NaCl module loaded main.js:73 DEBUG: NaCl module responded main.js:73 DEBUG: Status: downloading: Requesting a work server assignment. main.js:73 DEBUG: Status: downloading: Requesting a work server assignment. main.js:73 DEBUG: WS: {"client_id":"0x7019a9e252faa44f","threads":4,"version":"8.0.0","type":"NACL","os":"NACL","user":"borden.b","team":"131","passkey":"censored)","ts":"2014-02-26T05:11:51Z","ws":"143.89.28.86","project":2981} main.js:73 DEBUG: Status: downloading: Downloading a work unit. main.js:73 DEBUG: Status: downloading: Downloading a work unit. main.js:73 DEBUG: WU: {"client_id":"0x7019a9e252faa44f","threads":4,"version":"8.0.0","type":"NACL","os":"NACL","user":"borden.b","team":"131","passkey":"censored)","ts":"2014-02-26T05:11:51Z","ws":"143.89.28.86","project":2981,"server_version":703,"core":176,"core_version":227,"unit_id":"0x000001e90893a18a52e6da7dce373590","run":0,"clone":18,"gen":428,"wu_ts":"2014-02-26T05:11:52Z","deadline":"2014-02-27T05:11:52Z","timeout":"2014-02-26T07:35:52Z","credit":10,"compression":"bzip2","checksum":"64su8d6VCbc/h+VK1rH+gECAq3jtkNgl/Zv9ecq49oo="} main.js:73 DEBUG: Status: running: Starting work unit. main.js:73 DEBUG: core: checksum verified main.js:73
DEBUG: core: unpacking: frame428.tpr main.js:73 DEBUG: :-) G R O M A C S (-: main.js:73 DEBUG: Groningen Machine for Chemical Simulation main.js:73 DEBUG: :-) VERSION 4.6.5 (-: main.js:73 DEBUG: Contributions from Mark Abraham, Emile Apol, Rossen Apostolov, Herman J.C. Berendsen, Aldert van Buuren, Pär Bjelkmar, Rudi van Drunen, Anton Feenstra, Gerrit Groenhof, Christoph Junghans, Peter Kasson, Carsten Kutzner, Per Larsson, Pieter Meulenhoff, Teemu Murtola, Szilard Pall, Sander Pronk, Roland Schulz, Michael Shirts, Alfons Sijbers, Peter Tieleman, main.js:73 DEBUG: Berk Hess, David van der Spoel, and Erik Lindahl. main.js:73 DEBUG: Copyright (c) 1991-2000, University of Groningen, The Netherlands. Copyright (c) 2001-2012,2013, The GROMACS development team at Uppsala University & The Royal Institute of Technology, Sweden. check out http://www.gromacs.org for more information. main.js:73 DEBUG: main.js:73 DEBUG: :-) Gromacs (-: main.js:73 DEBUG: Option------------------------------------------------------------ main.js:73 DEBUG: -s frame428.tpr Input Run input file: tpr tpb tpa -o traj.trr Output Full precision trajectory: trr trj cpt -x traj.xtc Output, Opt. Compressed trajectory (portable xdr format)-cpi state.cpt Input, Opt. Checkpoint file-cpo state.cpt Output, Opt. Checkpoint file -c confout.gro Output Structure file: gro g96 pdb etc. -e ener.edr Output Energy file -g md.log Output Log file-dhdl dhdl.xvg Output, Opt. xvgr/xmgr file-field field.xvg Output, Opt. xvgr/xmgr file-table table.xvg Input, Opt. xvgr/xmgr file-tabletf tabletf.xvg Input, Opt. xvgr/xmgr file-tablep tablep.xvg Input, Opt. xvgr/xmgr file-tableb table.xvg Input, Opt. xvgr/xmgr file-rerun rerun.xtc Input, Opt. Trajectory: xtc trr trj gro g96 pdb cpt-tpi tpi.xvg Output, Opt. xvgr/xmgr file-tpid tpidist.xvg Output, Opt. xvgr/xmgr file -ei sam.edi Input, Opt. ED sampling input -eo edsam.xvg Output, Opt. xvgr/xmgr file -j wham.gct Input, Opt. General coupling stuff -jo bam.gct Output, Opt. 
General coupling stuff-ffout gct.xvg Output, Opt. xvgr/xmgr file-devout deviatie.xvg Output, Opt. xvgr/xmgr file-runav runaver.xvg Output, Opt. xvgr/xmgr file -px pullx.xvg Output, Opt. xvgr/xmgr file -pf pullf.xvg Output, Opt. xvgr/xmgr file -ro rotation.xvg Output, Opt. xvgr/xmgr file -ra rotangles.log Output, Opt. Log file -rs rotslabs.log Output, Opt. Log file -rt rottorque.log Output, Opt. Log file-mtx nm.mtx Output, Opt. Hessian matrix -dn dipole.ndx Output, Opt. Index file-multidir rundir Input, Opt., Mult. Run directory-membed membed.dat Input, Opt. Generic data file -mp membed.top Input, Opt. Topology file -mn membed.ndx Input, Opt. Index file main.js:73 DEBUG: Option------------------------------------------------------ main.js:73 DEBUG: -[no]h bool no Print help info and quit main.js:73 DEBUG: -[no]version bool no Print version info and quit main.js:73 DEBUG: -nice int 0 Set the nicelevel main.js:73 DEBUG: -deffnm string Set the default filename for all file options main.js:73 DEBUG: -xvg enum xmgrace xvg plot formatting: xmgrace, xmgr or none main.js:73 DEBUG: -[no]pd bool no Use particle decompostion main.js:73 DEBUG: -dd vector 0 0 0 Domain decomposition grid, 0 is optimize main.js:73 DEBUG: -ddorder enum interleave DD node order: interleave, pp_pme or cartesian main.js:73 DEBUG: -npme int -1 Number of separate nodes to be used for PME, -1 main.js:73 DEBUG: is guess main.js:73 DEBUG: -nt int 4 Total number of threads to start (0 is guess) main.js:73 DEBUG: -ntmpi int 0 Number of thread-MPI threads to start (0 is guess) main.js:73 DEBUG: -ntomp int 0 Number of OpenMP threads per MPI process/thread main.js:73 DEBUG: to start (0 is guess) main.js:73 DEBUG: -ntomp_pme int 0 Number of OpenMP threads per MPI process/thread main.js:73 DEBUG: to start (0 is -ntomp) main.js:73 DEBUG: -pin enum auto Fix threads (or processes) to specific cores: main.js:73 DEBUG: auto, on or off main.js:73 DEBUG: -pinoffset int 0 The starting logical core number for pinning to 
main.js:73 DEBUG: cores; used to avoid pinning threads from different mdrun instances to the same core main.js:73 DEBUG: different mdrun instances to the same core main.js:73 DEBUG: -pinstride int 0 Pinning distance in logical cores for threads, main.js:73 DEBUG: use 0 to minimize the number of threads per physical core main.js:73 DEBUG: physical core main.js:73 DEBUG: -gpu_id string List of GPU device id-s to use, specifies the main.js:73 DEBUG: per-node PP rank to GPU mapping main.js:73 DEBUG: -[no]ddcheck bool yes Check for all bonded interactions with DD main.js:73 DEBUG: -rdd real 0 The maximum distance for bonded interactions with main.js:73 DEBUG: DD (nm), 0 is determine from initial coordinates main.js:73 DEBUG: -rcon real 0 Maximum distance for P-LINCS (nm), 0 is estimate main.js:73 DEBUG: -dlb enum auto Dynamic load balancing (with DD): auto, no or yes main.js:73 DEBUG: -dds real 0.8 Minimum allowed dlb scaling of the DD cell size main.js:73 DEBUG: -gcom int -1 Global communication frequency main.js:73 DEBUG: -nb enum auto Calculate non-bonded interactions on: auto, cpu, main.js:73 DEBUG: gpu or gpu_cpu main.js:73 DEBUG: -[no]tunepme bool yes Optimize PME load between PP/PME nodes or GPU/CPU main.js:73 DEBUG: -[no]testverlet bool no Test the Verlet non-bonded scheme main.js:73 DEBUG: -[no]v bool no Be loud and noisy main.js:73 DEBUG: -[no]compact bool yes Write a compact log file main.js:73 DEBUG: -[no]seppot bool no Write separate V and dVdl terms for each main.js:73 DEBUG: interaction type and node to the log file(s) main.js:73 DEBUG: -pforce real -1 Print all forces larger than this (kJ/mol nm) main.js:73 DEBUG: -[no]reprod bool no Try to avoid optimizations that affect binary main.js:73 DEBUG: reproducibility main.js:73 DEBUG: -cpt real 15 Checkpoint interval (minutes) main.js:73 DEBUG: -[no]cpnum bool no Keep and number checkpoint files main.js:73 DEBUG: -[no]append bool yes Append to previous output files when continuing main.js:73 DEBUG: from 
checkpoint instead of adding the simulation part number to all file names main.js:73 DEBUG: part number to all file names main.js:73 DEBUG: -nsteps step -2 Run this number of steps, overrides .mdp file main.js:73 DEBUG: option main.js:73 DEBUG: -maxh real -1 Terminate after 0.99 times this time (hours) main.js:73 DEBUG: -multi int 0 Do multiple simulations in parallel main.js:73 DEBUG: -replex int 0 Attempt replica exchange periodically with this main.js:73 DEBUG: period (steps) main.js:73 DEBUG: -nex int 0 Number of random exchanges to carry out each main.js:73 DEBUG: exchange interval (N^3 is one suggestion). -nex zero or not specified gives neighbor replica main.js:73 DEBUG: zero or not specified gives neighbor replica exchange. main.js:73 DEBUG: exchange. main.js:73 DEBUG: -reseed int -1 Seed for replica exchange, -1 is generate a seed main.js:73 DEBUG: -[no]ionize bool no Do a simulation including the effect of an X-Ray main.js:73 DEBUG: bombardment on your system main.js:73 DEBUG: main.js:73 DEBUG: core: steps: 10272000 -> 10296000 main.js:73 DEBUG: Reading file Using Compiled acceleration: main.js:73 DEBUG: Can not set thread affinities on the current platform. On NUMA systems this main.js:73 DEBUG: can cause performance degradation. If you think your platform should support setting affinities, contact the GROMACS developers. main.js:73 DEBUG: setting affinities, contact the GROMACS developers. main.js:73 DEBUG: main.js:73 DEBUG: Can not set thread affinities on the current platform. On NUMA systems this main.js:73 DEBUG: can cause performance degradation. If you think your platform should support setting affinities, contact the GROMACS developers. main.js:73 DEBUG: setting affinities, contact the GROMACS developers. main.js:73 DEBUG: main.js:73 DEBUG: Can not set thread affinities on the current platform. On NUMA systems this main.js:73 DEBUG: can cause performance degradation. 
If you think your platform should support setting affinities, contact the GROMACS developers. main.js:73 DEBUG: setting affinities, contact the GROMACS developers. main.js:73 DEBUG: main.js:73 DEBUG: Can not set thread affinities on the current platform. On NUMA systems this main.js:73 DEBUG: can cause performance degradation. If you think your platform should support setting affinities, contact the GROMACS developers. main.js:73 DEBUG: setting affinities, contact the GROMACS developers. main.js:73 DEBUG: core: fcCheckPointParallel() main.js:73 DEBUG: core: fcCheckPointParallel() main.js:73 DEBUG: core: fcCheckPointParallel() main.js:73 DEBUG: core: fcCheckPointParallel() main.js:73 DEBUG: Status: running: Calculations underway. main.js:73 DEBUG: starting mdrun '10296000 main.js:73 event.returnValue is deprecated. Please use the standard event.preventDefault() instead. jquery-1.10.2.min.js:3 DEBUG: Finishing current work unit main.js:73 DEBUG: core: fcRequestCheckPoint() main.js:73 DEBUG: core: fcRequestCheckPoint() main.js:73 DEBUG: core: fcCheckPointParallel() main.js:73 DEBUG: core: fcRequestCheckPoint() main.js:73 DEBUG: core: fcRequestCheckPoint() main.js:73 DEBUG: core: fcCheckPointParallel() main.js:73 DEBUG: main.js:73 DEBUG: Writing final coordinates. main.js:73 DEBUG: main.js:73 DEBUG: Average load imbalance: 2.3 % main.js:73 DEBUG: Part of the total run time spent waiting due to load imbalance: 1.0 % main.js:73 DEBUG: Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 % Y 0 % main.js:73 2 DEBUG: main.js:73 DEBUG: Performance: main.js:73 DEBUG: core: packing: md.log 20785 main.js:73 DEBUG: core: packing: traj.trr 806448 main.js:73 DEBUG: Status: uploading: Uploading results. main.js:73 DEBUG: Status: uploading: Uploading results. 
main.js:73 DEBUG: stats: {"team_rank":140,"earned":52739416,"url":"http://fah-web.stanford.edu/cgi-bin/main.py?qtype=userpage&username=borden.b","contributed":52739416,"team_url":"http://www.thegenomecollective.com","team_urllogo":"http://www.thegenomecollective.com/tgclogostanford.jpg","team_name":"The Genome Collective","team_total":323809581} main.js:73 NativeClient: NaCl module crashed folding.stanford.edu/:1 DEBUG: Module exit main.js:73 DEBUG: NaCl module loading main.js:73 DEBUG: Status: downloading: Downloading the Folding@home software in your Web browser. On your first visit this can take awhile. main.js:73 DEBUG: load progress: 0.0% (0 of 18000000 bytes) main.js:73 DEBUG: load progress: 0.0% (0 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (1356174 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (2311750 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (3276106 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (4235078 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (5079040 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (5865472 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (6651904 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (7426657 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (8315067 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (9484325 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (10059776 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (11181643 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (11685413 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (12629212 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (13467648 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (14185601 of 18446744073709552000 bytes) 
main.js:73 DEBUG: load progress: 0.0% (15460386 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 0.0% (16473977 of 18446744073709552000 bytes) main.js:73 DEBUG: load progress: 100.0% (17125192 of 17125192 bytes) main.js:73 DEBUG: NaCl module loaded main.js:73 DEBUG: NaCl module responded main.js:73 DEBUG: Config: paused = true main.js:73 DEBUG: Config: deleted paused main.js:73 DEBUG: Status: downloading: Requesting a work server assignment. main.js:73 DEBUG: Status: finished: Folding finished, exit the browser or close this page to shutdown Folding@home or press the start button to resume folding. main.js:73
Observation 1: My passkey was masked once; I had to censor my passkey twice.
Observation 2: It looks like you're seeking a WS before the pause. It might not still be active if I resume folding some time later.
I was seeing the low CPU usage on some of my systems as well. I completely removed Chrome, reinstalled it, and then deleted and reinstalled the app. CPU usage went up to around 90% on every system where I used this procedure.
I've noticed that no matter how many cores the CPU has, this client seems to use only two per instance at Full. Isn't Full supposed to use 100% of the CPU?