Do you think this makes sense, and/or will it even affect build times?
Looks faster to me 😄
before (91b7e51, 1 cpu):
marvel-nccr.siesta : Make siesta executable --------------------------- 191.87s
after (5b6c60d, 2 cpus):
marvel-nccr.siesta : Make siesta executable --------------------------- 102.61s
Yeh that makes the CI run a lot quicker! So I'm going to merge now, to get this into QM, but let me know if you see any issues @albgar
@chrisjsewell: The new cpu setting looks fine for the CI, but I wonder what happens if the QM being built has only one cpu.
No, this is the reason to have it: you specify directly the number of CPUs the VM has in playbook-build.yml:
- role: marvel-nccr.libxc  # this is used by abinit and siesta
  tags: [abinit, siesta]
  vars:
    libxc_build_cpus: "{{ vm_cpus }}"
- role: marvel-nccr.siesta
  tags: [siesta]
  vars:
    siesta_build_cpus: "{{ vm_cpus }}"
    siesta_libxc_root: "{{ libxc_prefix }}"
- role: marvel-nccr.abinit
  tags: [abinit]
  vars:
    abinit_build_cpus: "{{ vm_cpus }}"
    abinit_libxc_path: "{{ libxc_prefix }}"
It will then adapt to however many you set in inventory.yml: 1, 2 or even more.
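For reference, a minimal sketch of the matching inventory.yml (the host and group names here are illustrative assumptions; only the vm_cpus variable comes from the playbook above):

# hypothetical inventory.yml excerpt; host/group names are placeholders
all:
  hosts:
    quantum-mobile:
      vm_cpus: 2  # the build roles above pick this value up via {{ vm_cpus }}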
Just yesterday I was testing the new ansible-galaxy siesta role on a clean VM, and I got a couple of errors:
1. On Ubuntu 20, some failures in apt-get. The solution I found was to add the equivalent of an 'apt update' at the beginning of my apt task. Now I see that the Quantum Mobile has an "ansible_prerequisites" role that does that, among other things. So this might be needed for my VM, but not for the QM. In any case, I do not think it hurts, so I will put it in a forthcoming merge request.
2. The Siesta tests were failing because a shared library could not be found. This is because the ldconfig step was run after the tests... I have fixed it now (the gist of the reordering is sketched below).
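For illustration, the fix amounts to moving the ldconfig step ahead of the tests, roughly like this (the task names and the siesta_build_dir variable are hypothetical, not taken from the actual role):

# hypothetical task ordering; the real role's tasks may be named differently
- name: Refresh the shared library cache  # must now run BEFORE the tests
  command: ldconfig
  become: true

- name: Run the Siesta test suite
  command: make check
  args:
    chdir: "{{ siesta_build_dir }}"  # assumed build-directory variable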
I wonder why the CI did not catch this. Does it have to do with the fact that in the molecule setup for the CI the fail-fast: false option is used?
For the first issue, it could be that in Docker the apt problem does not happen...
Both things are fixed now. I am going to make another pull request.
Does it have to do with the fact that in the molecule setup for the CI the fail-fast: false option is used?
No, this just means that if it fails for one platform (e.g. Ubuntu 16) it will not immediately stop running the parallel tests for the other platforms.
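For context, fail-fast is a standard option of the CI matrix strategy (assuming GitHub Actions here); a minimal sketch, with an illustrative platform list:

# illustrative workflow excerpt; the real matrix entries may differ
strategy:
  fail-fast: false  # a failure on one platform does not cancel the other jobs
  matrix:
    distro: [ubuntu1604, ubuntu1804, ubuntu2004]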
The solution I found was to add the equivalent of an 'apt update' at the beginning of my apt task
You don't need a separate task; you can just add this to the first apt task:
apt:
  update_cache: yes
  cache_valid_time: 86400
  ...
i.e. update before installing, but only if the cache has not been updated in the last day (86400 seconds).
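Put together, a complete task might look like the following sketch (the task name and package list are illustrative, not taken from the actual role):

# illustrative task; the packages are placeholders
- name: Install build dependencies
  become: true
  apt:
    name: [build-essential, gfortran]
    update_cache: yes
    cache_valid_time: 86400  # skip the update if the cache is under a day old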
@albgar are you doing the PR now/soon? Because I'm very close to starting a new QM build 😄
@albgar I've added something similar to some other roles. Do you think this makes sense, and/or will it even affect build times?