Closed — jrx closed this 6 years ago
Nice integration, but now I have a couple of questions :)
Why did you remove the separation of group_vars/all into group_vars/all/vars & vault? This separation is not only needed for getting SSH access on-premises via Ansible; it is also a general approach for further customization.
When using the Ansible module to connect the DC/OS CLI to an Open Source DC/OS cluster, the user must take action during the Ansible play after the dcos auth login command fires, without being notified. Why not use the Ansible mail and/or prompt functionality for this purpose?
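For illustration, a notification step like the one suggested could be sketched with Ansible's built-in pause module; this is an untested sketch, and the task name and prompt text are placeholders:

```yaml
# Sketch: halt the play and tell the operator to complete the manual login.
# Ansible's built-in `pause` module waits for Enter before continuing.
- name: wait for manual dcos auth login
  pause:
    prompt: "Open the login URL printed above, complete `dcos auth login`, then press Enter to continue"
```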
There could be further tasks that depend on a completed Kubernetes package installation (with all tasks running). Wouldn't it be better to ensure all tasks are running after firing the package installation command? For that, something like this could be used:
```yaml
- name: ensure deployment of kubernetes cluster on DC/OS as a service is completed
  shell: dcos kubernetes plan status deploy
  register: dcos_k8s_deploy_state
  until: dcos_k8s_deploy_state.stdout_lines[0].find('COMPLETE') != -1
  retries: 120
  delay: 30
```
Thanks @rembik for your thoughts.
- Why did you remove the separation of group_vars/all into group_vars/all/vars & vault? This separation is not only needed for getting SSH access on-premises via Ansible; it is also a general approach for further customization.
I just went with the single group_vars/all/vars file again to reduce complexity. The other approach broke for me when I didn't have ansible-vault configured and didn't need to prepare my nodes for SSH. But you're probably right, we should reintroduce a more fine-grained group_vars folder in the future.
- When using the Ansible module to connect the DC/OS CLI to an Open Source DC/OS cluster, the user must take action during the Ansible play after the dcos auth login command fires, without being notified. Why not use the Ansible mail and/or prompt functionality for this purpose?
The prompt also broke for me (probably because I use a Mac), since it tried to install some Python packages (pexpect) and the DC/OS CLI with root permissions on my local system. The new approach shouldn't ask you again if you are already connected to the cluster, works the same with DC/OS OSS and Enterprise, and downloads the binaries it needs only into the local /deploy folder. So no root permissions on my local machine are needed.
The general problem is that (unlike with DC/OS Enterprise and service accounts) it's not so nice to automate the login process to DC/OS OSS. :(
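One way to make the manual login at least idempotent is to check for an existing token before firing the interactive login. This is an untested sketch; it assumes the DC/OS CLI stores the token under the core.dcos_acs_token config key:

```yaml
# Sketch: skip the interactive login when a token is already configured.
- name: check for an existing DC/OS auth token
  command: dcos config show core.dcos_acs_token
  register: dcos_token
  failed_when: false
  changed_when: false

# The login itself still requires operator interaction on DC/OS OSS.
- name: run dcos auth login only when no token is present
  command: dcos auth login
  when: dcos_token.rc != 0
```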
- There could be further tasks that depend on a completed Kubernetes package installation (with all tasks running). Wouldn't it be better to ensure all tasks are running after firing the package installation command? For that, something like this could be used:

  ```yaml
  - name: ensure deployment of kubernetes cluster on DC/OS as a service is completed
    shell: dcos kubernetes plan status deploy
    register: dcos_k8s_deploy_state
    until: dcos_k8s_deploy_state.stdout_lines[0].find('COMPLETE') != -1
    retries: 120
    delay: 30
  ```
Agree, this is sometimes useful. I think you could still implement such a check as part of your own playbooks.
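For example, such a check could live in a user playbook along these lines; this is an untested sketch assuming the dcos CLI is installed and authenticated on the control host:

```yaml
# Sketch: poll the Kubernetes service's deploy plan until it reports COMPLETE.
# `changed_when: false` marks this as a read-only check; the `until` loop
# retries every 30 seconds for up to an hour (120 retries).
- name: wait for the kubernetes deploy plan to complete
  command: dcos kubernetes plan status deploy
  register: dcos_k8s_deploy_state
  changed_when: false
  until: "'COMPLETE' in dcos_k8s_deploy_state.stdout_lines[0]"
  retries: 120
  delay: 30
```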
Install DC/OS Packages with Ansible and use Kubernetes as the first example
Based on the idea and work of @dirkjonker and @rembik. This is the first approach to install DC/OS packages by leveraging an Ansible module.
Changes:
@rimas Would you like to test this PR and see if it still works for you as expected?