As tested yesterday, all that is required for other users to deploy onto the kubernetes server is the k3s.yaml
file from the server. Please see K3s access controls for full instructions. There needs to be a method to formalise this process.
Suggestion: Have the k3s.yaml file accessible somewhere on the server machine such that a simple scp command can be run by any client. This must be added to the docs.
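Roughly what the client side could look like (a sketch only; the user, hostname, and destination path are placeholders, and the file may first need copying to a location readable by non-root users). `/etc/rancher/k3s/k3s.yaml` is where k3s writes its kubeconfig by default:

```bash
# Copy the kubeconfig from the k3s master (user/host are placeholders)
scp user@k3s-master:/etc/rancher/k3s/k3s.yaml ~/.kube/k3s.yaml

# The copied file points at 127.0.0.1, so swap in the master's address
sed -i 's/127.0.0.1/k3s-master/' ~/.kube/k3s.yaml

# Point kubectl at it and check access
export KUBECONFIG=~/.kube/k3s.yaml
kubectl get nodes
```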
Also tested yesterday (@rob-clarke can confirm?): any ROS2 node running inside Docker using net=host on a non-master machine can see the topics broadcast by the drones running as Kubernetes Pods with hostNetwork. This may imply that bare metal ROS2 may work without additional network configuration, possibly solving #31
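For reference, a minimal sketch of that setup (pod name, image, and topic are illustrative; `osrf/ros:foxy-desktop` is assumed here because it ships the demo talker node):

```bash
# On the master: launch a pod on the host network
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ros2-talker
spec:
  hostNetwork: true
  containers:
  - name: talker
    image: osrf/ros:foxy-desktop
    # args (not command) so the image entrypoint still sources the ROS setup
    args: ["ros2", "run", "demo_nodes_cpp", "talker"]
EOF

# On a non-master machine: a container on the host network can see the topic
docker run --rm -it --net=host osrf/ros:foxy-desktop ros2 topic echo /chatter
```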
On k3s.yaml, a nicer option would probably be a web server hosting the file. Having a single page with a list of instructions and all the files would make setting up a new machine nice. We could even have the CLI grab it automatically if we want to abstract the kubectl stuff away. It would also avoid the need to give out passwords to an account.
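Something as simple as this would do for a first pass (hostname, port, and paths are hypothetical; note that serving the kubeconfig over plain HTTP exposes cluster credentials to anyone on the network, so some access control would still be needed):

```bash
# On the master, from a directory holding a copy of k3s.yaml (throwaway server)
python3 -m http.server 8080

# On any client
curl -o ~/.kube/k3s.yaml http://k3s-master:8080/k3s.yaml
export KUBECONFIG=~/.kube/k3s.yaml
```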
On the ROS2 stuff, yes, that was the behaviour I got. I only tried running ros2 topic echo and similar, so there may be GUID conflicts if more processes are added. Potentially a shared host PID space flag is all that would be needed.
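If that turns out to be the issue, the flag in question would look something like this on the Docker side (the Pod equivalent would be `hostPID: true`; image is the same assumed one as above):

```bash
# Share the host's PID namespace as well as its network namespace
docker run --rm -it --net=host --pid=host osrf/ros:foxy-desktop ros2 topic list
```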
For kubernetes, currently all pod/deployment creation is done by running kubectl on the master Kubernetes server. However, in the lab it is more likely that the server will be running on one desktop, and users may interact with Starling from other machines. Investigation needs to be done on how to support this.
It may just be a case of ssh'ing into the server machine and running apply, but it may also be possible to use the Kubernetes API server for external control. However, Kubernetes security measures must also be investigated.
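The two routes would look roughly like this (a sketch only; user, host, and manifest paths are placeholders):

```bash
# Option A: copy the manifest to the master and apply it there over ssh
scp deployment.yaml user@k3s-master:/tmp/deployment.yaml
ssh user@k3s-master "kubectl apply -f /tmp/deployment.yaml"

# Option B: talk to the API server directly from the client using the copied
# k3s.yaml (the k3s API server listens on port 6443 by default)
kubectl --kubeconfig ~/.kube/k3s.yaml apply -f deployment.yaml
```

Option B is where the security questions come in, since anyone holding that kubeconfig effectively has full cluster access.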