Closed logsdail closed 11 months ago
@ikowalec please have a quick flick through. I'm not 100% happy, but we have separated the dictionaries. Suggest we merge, and then downstream we can address the issue of assigning nodes_per_instance for non-task-farmed calculations. Needs a method to get the CPU/node count requested at runtime on ARCHER2/Isambard/Young.
Dictionaries can be merged into a nested dictionary as per: hpc = {"hawk": {"cpu_command": "hawk_command_here", "cpus_per_node": 40}, "hawk_amd": {...}, ...}
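A minimal sketch of the suggested nested structure, with a lookup helper. The machine names, commands, and core counts here are illustrative placeholders, and `get_cpu_command` is a hypothetical helper, not the project's actual API:

```python
# Hypothetical nested HPC dictionary; values are placeholders, not the
# real settings for these machines.
hpc = {
    "hawk": {"cpu_command": "hawk_command_here", "cpus_per_node": 40},
    "hawk_amd": {"cpu_command": "hawk_amd_command_here", "cpus_per_node": 64},
}

def get_cpu_command(machine):
    """Retrieve the single CPU setup line for a given machine."""
    try:
        return hpc[machine]["cpu_command"]
    except KeyError:
        raise ValueError(f"Unknown machine: {machine!r}")

print(get_cpu_command("hawk"))  # -> hawk_command_here
```

One nested dictionary keyed by machine name keeps all per-machine settings in a single place, so adding a machine means adding one entry rather than touching several parallel dictionaries.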
Merging #132 (ed6ba62) into master (55bd48d) will increase coverage by 0.01%. The diff coverage is 91.30%.
@@ Coverage Diff @@
## master #132 +/- ##
==========================================
+ Coverage 86.03% 86.04% +0.01%
==========================================
Files 69 69
Lines 2692 2709 +17
==========================================
+ Hits 2316 2331 +15
- Misses 376 378 +2
Flag | Coverage Δ |
---|---|
unittests | 86.04% <91.30%> (+0.01%) :arrow_up: |

Flags with carried forward coverage won't be shown.

Files | Coverage Δ |
---|---|
examples/run_aims.py | 95.65% <ø> (ø) |
carmm/run/aims_path.py | 94.59% <91.30%> (-5.41%) :arrow_down: |
Nice - implemented this for retrieving the one CPU setup line needed.
@ikowalec fingers crossed I've updated appropriately. We have an obvious to-do arising, which is to scrape the node/task counts from the system environment, but that can be a new issue.
Looks like a good hack, improves versatility.
Re: scraping, I agree that automated retrieval of node and CPU counts should be the next step. There might be an extra layer of complexity where hyperthreading is in place and is not desirable for FHI-aims.
Tidying and restructuring to better distinguish the task-farming setup of calculations. Lays foundations for future work to update the process with fewer hard-coded variables.