Closed 5 years ago
Created by: vvv
I'd prefer not to make `h0` dependent on `m0vg` if we can help it.
Created by: chumakd
On VirtualBox, the first network interface (`eth0`) is not accessible from outside the VM. The second interface is configured by `m0vg` specifically to be accessible among all VMs.
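As a side note (not from the original comment), here is a quick way to confirm from inside one of the VMs which interface carries the inter-VM traffic, assuming the second adapter shows up as `eth1`:

```sh
# List both adapters and their addresses; eth0 is the NAT interface that
# VirtualBox keeps unreachable from outside, eth1 (assumed name) is the
# network m0vg configures for VM-to-VM access.
ip -br addr show

# Check reachability of a peer VM over that network (hostname borrowed
# from the node names used elsewhere in this issue).
ping -c 1 ssu1.local
```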
Created by: chumakd
> Is it necessary for `facter` to be installed on `client1`?

@vvv It is. Though it is installed automatically on every VM by `m0vg up|provision`. If `facter` is missing on your `client1` VM, that means it must've been provisioned incorrectly.
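A quick way to verify the provisioning result (a sketch, not from the original comment; the node names are borrowed from the pdsh log quoted later in this issue):

```sh
# Confirm that facter is installed on every VM of the cluster.
pdsh -w client1.local,ssu1.local,ssu2.local,cmu.local \
    'command -v facter >/dev/null && facter --version || echo facter MISSING'
```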
Created by: chumakd
It was needed mostly because of the `m0provision:cmd_genfacts()` function, which runs `sudo m0genfacts`. I think the current implementation of `m0genfacts` does `sudo` internally, so the script itself doesn't need to be called with `sudo` any more. I suppose we can drop the `sudo` in front of `m0genfacts` and not generate `/root/.ssh/known_hosts`, but that should be tested.
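The actual `cmd_genfacts()` body is not quoted in this thread, so the following is only a sketch of the shape of the proposed change, assuming `m0genfacts` does indeed escalate privileges internally:

```sh
cmd_genfacts() {
    # Old behaviour (per the comment above): explicit privilege escalation.
    #sudo m0genfacts

    # Proposed: m0genfacts calls sudo internally, so invoke it directly.
    m0genfacts
}
```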
Created by: chumakd
Done, I've restarted the Jenkins job: http://jenkins.mero.colo.seagate.com:8080/job/halon_github_trigger/138/
Created by: vvv
This breaks Jenkins CI. The `h0 setup` call should be removed from the Xperior script.
Created by: vvv
@chumakd Is the modification of `/root/.ssh/known_hosts` really needed?
Created by: vvv
CLI differences from the traditional (single-node) `h0` script:

- `M0_CLUSTER` environment variable (defaults to `~/.m0-cluster.yaml`). See `m0genfacts -h` for the format description; a usage sketch follows below.
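The following is not part of the original issue text, just a minimal sketch of how the variable would be used, assuming the multi-node `h0` reads `M0_CLUSTER` at startup as described above:

```sh
# Point the multi-node h0 at a cluster description file.
# ~/.m0-cluster.yaml is the default, so the export is only needed for a
# non-default location (the path below is hypothetical).
export M0_CLUSTER=~/clusters/dev-cluster.yaml

# The expected format of that file is documented by:
m0genfacts -h
```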
Sample configuration

(Tested with `m0vg` setup.)

Known issues
- [x] 1. Commit 1be8da5ad3644ea4838151b84c6a1f7a070d9463 breaks Jenkins CI. TODO: Update Xperior script.
- [x] 2. `init` fails. QUESTION: @chumakd, is it necessary for `facter` to be installed on `client1`?
- [ ] 3. TODO: Rewrite system tests using `h0-new`.
- [x] 4. This error message should be suppressed (a possible approach is sketched after this list):

  ```
  [mpdsh:269] pdsh -S -f1 -w client1.local,ssu1.local,ssu2.local,cmu.local sudo systemctl start halon-cleanup
  client1: Job for halon-cleanup.service failed because the control process exited with error code. See "systemctl status halon-cleanup.service" and "journalctl -xe" for details.
  pdsh@cmu: client1: ssh exited with exit code 1
  [erase_cluster_data:335] true
  [...]
  ```
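Not from the original issue: a minimal sketch of one way to silence the expected `halon-cleanup` failure, assuming a non-zero exit from an already-clean node is harmless at this point of the cleanup:

```sh
# Start halon-cleanup on all nodes, but tolerate nodes where the unit
# fails because there is nothing to clean up: pdsh relays remote stderr
# to its own stderr, so discard it locally and force a zero exit status
# (node list copied from the log above).
pdsh -S -f1 -w client1.local,ssu1.local,ssu2.local,cmu.local \
    sudo systemctl start halon-cleanup 2>/dev/null || true
```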