pcuzner / ansible-runner-service

Python project that wraps the ansible_runner 'engine' inside a RESTful API

Need a way to add a host to the inventory #21

Closed pcuzner closed 6 years ago

pcuzner commented 6 years ago

Adding a host to the inventory will need to prep the SSH keys too. The UX I'm looking for is:

We may also need to think about doing this async, and passing the caller back a taskid?
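A minimal sketch of the async idea, assuming a simple in-memory task registry (all names here are hypothetical, not the service's actual code):

```python
import uuid

# Hypothetical in-memory task registry: the caller gets a task ID back
# immediately and polls it later for the outcome of the host-add.
tasks = {}

def start_add_host(hostname):
    """Kick off an (assumed) async host-add and return a task ID."""
    task_id = str(uuid.uuid4())
    tasks[task_id] = {"host": hostname, "state": "pending"}
    # ...the real work (ssh key prep, inventory update) would run in a
    # background worker and update tasks[task_id] when it completes.
    return task_id

def task_status(task_id):
    """What the caller would poll, e.g. GET /api/v1/tasks/<task_id>."""
    return tasks.get(task_id, {"state": "unknown"})
```

The caller would POST the host, keep the returned ID, and poll until the state leaves "pending".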

Thoughts?

jmolmo commented 6 years ago

In general I don't much like the idea of passing passwords/user credentials for the managed cluster as part of any kind of request to the ansible-runner service.

It's not that I'm afraid of "man in the middle" attacks or other sophisticated ways of capturing sensitive information; my concern is that this would mean the final user (the one at the keyboard) has to manage these credentials, and that is usually not possible, or very problematic. (The people who install the cluster hosts don't usually hand this kind of information to the people who use the cluster, or to a software tool that manages it.)

The fact that our tool is intended to be used on a single cluster gives us some advantages in how we connect to the hosts in the cluster.

One possibility is to force, at installation time, the creation of a user for the ansible-runner service and the copying of that user's public key to all the hosts in the cluster (this would be a task executed by administrators).

I think the aim of this tool is to be a useful and easy way to execute Ansible playbooks over a cluster of servers, so the administration (even the existence) of the servers on the network is outside the required functional scope.

I think that provisioning servers would be an operation "outside" the use of the ansible-runner service. Our service should ONLY maintain the "inventory" file of the cluster (which is a subset of the available hosts on the network); what the API is really doing is CRUD operations over the inventory file(1), and only the create operation needs to validate access to the host.

In this scenario, the installation of a new host has to be performed beforehand (as it usually is), and that task must include copying our ansible-runner service public key into the host's SSH authorized_keys file.
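The key-copy step could be sketched as a small idempotent helper (the function name and paths are illustrative assumptions, not the project's actual code):

```python
from pathlib import Path

def install_service_key(pub_key: str, auth_keys: Path) -> bool:
    """Append the ansible-runner service public key to a host's
    authorized_keys file, skipping the write if it is already there.
    Returns True if the key was added, False if already present."""
    auth_keys.parent.mkdir(mode=0o700, parents=True, exist_ok=True)
    existing = auth_keys.read_text() if auth_keys.exists() else ""
    if pub_key.strip() in existing:
        return False  # idempotent: safe to run on every install
    with auth_keys.open("a") as f:
        f.write(pub_key.strip() + "\n")
    auth_keys.chmod(0o600)  # sshd refuses group/world-writable key files
    return True
```

Making it idempotent means the administrators' install task can run it unconditionally on every host.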

Then, the workflow when adding a new host would be:

Another good point of this approach is that we make the people installing the cluster happy, and we remove responsibility from the people who use the cluster.

Regarding the "async" way of working: I think that for host management it does not provide much advantage to the final user. The final user is probably more interested in having a clear view of the cluster composition (the inventory), and in the immediate result of operations that modify it. With the approach I have proposed, the hosts in the cluster must already be ready to be managed, so any CRUD operation will be immediate.

(1): Operations over hosts should allow us to include in our inventory any Ansible feature already available in this kind of file - for example tagging hosts, grouping hosts, etc. This is not needed now, but we should probably leave this door open.
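The footnote's point - that inventory CRUD should stay open to Ansible features like groups - can be sketched with a stdlib-only renderer for an INI-style inventory (the data shape is a hypothetical example, not the service's real format):

```python
def render_inventory(groups):
    """Render {group_name: [hosts]} as an INI-style Ansible inventory.
    Hosts stored under the None key are emitted ungrouped at the top."""
    lines = list(groups.get(None, []))          # ungrouped hosts first
    for group in sorted(g for g in groups if g):  # then one [section] per group
        lines.append(f"[{group}]")
        lines.extend(groups[group])
    return "\n".join(lines) + "\n"
```

Starting from a structure like this keeps later additions (host tags, group variables) a matter of extending the renderer rather than reworking the API.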

pcuzner commented 6 years ago

Sounds reasonable. Let's start with that, and see if we get pushback.

I like the idea of responding with our pub key if the ssh connect fails.
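That idea could look roughly like this - on a failed SSH probe, the host-add response carries the service's public key so the caller can install it and retry (the response shape, status strings, and key path are assumptions):

```python
from pathlib import Path

def add_host_response(ssh_ok: bool, pub_key_file: Path) -> dict:
    """Build the API response for a host-add attempt. If the SSH
    connection check failed, include the service's public key so the
    caller can add it to the target host's authorized_keys and retry."""
    if ssh_ok:
        return {"status": "OK"}
    return {
        "status": "NOAUTH",
        "msg": "SSH auth failed - install this key on the host and retry",
        "pub_key": pub_key_file.read_text().strip(),
    }
```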

pcuzner commented 6 years ago

I've got most of this in place now in the host-add branch.

jmolmo commented 6 years ago

It's OK!! I have included a couple of comments I hope will be useful.