gclawes opened this issue 1 year ago
I support the idea, and also wanted something adjacent to this functionality. I think this can be implemented by providing a general command in the host CLI to mount a node's boot and data partitions in the current directory; afterwards you can copy `user-data` and `meta-data` files, edit `config.txt`, `cmdline.txt`, etc.
What do you think?
cc @svenrademakers
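Roughly, that workflow could look like the following sketch (Python for illustration; it assumes the node's eMMC/SD card is already exposed to the host as a block device with the boot partition first, which is an assumption about the layout, not something the CLI does today):

```python
#!/usr/bin/env python3
"""Sketch of a hypothetical "mount node's partitions" helper.

Assumption: the node's eMMC/SD card is already visible to the host as a
block device (e.g. /dev/sda) with the boot partition first, as on a
typical Raspberry Pi image. None of these paths exist in the CLI today.
"""
import shutil
import subprocess
from pathlib import Path

def mount_boot(device: str = "/dev/sda1", mountpoint: str = "./boot") -> Path:
    """Mount the node's boot partition into the current directory."""
    path = Path(mountpoint)
    path.mkdir(exist_ok=True)
    subprocess.run(["mount", device, str(path)], check=True)
    return path

def copy_cloud_init(boot: Path) -> None:
    """Drop cloud-init NoCloud seed files next to config.txt / cmdline.txt."""
    shutil.copy("user-data", boot / "user-data")
    shutil.copy("meta-data", boot / "meta-data")

if __name__ == "__main__":
    boot = mount_boot()
    copy_cloud_init(boot)
    subprocess.run(["umount", str(boot)], check=True)
```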
I think being able to pass files in without having to mount/edit manually would help with things like automated provisioning (I'm thinking future features like Kubernetes ClusterAPI providers would benefit from this).
Similar workflow to OpenStack Ironic's "configuration drive" feature: https://docs.openstack.org/ironic/latest/install/configdrive.html
Another thing to keep in mind is CoreOS Ignition. It doesn't use `user-data` or `meta-data` like cloud-init; it looks like bare-metal is intended to get configs over the network: https://coreos.github.io/ignition/supported-platforms/
How can we set this up in a target-agnostic way? E.g. right now we need to mount /boot partitions; maybe in the future we want to execute additional scripts or set certain permissions.
I would like to look into whether we can build cloud-init for our firmware, in particular the configuration part. Then we don't have to reinvent the wheel. I'm envisioning the ability to pass a cloud-init config YAML to the CLI's flashing command. Secondly, are there things we want to support that are not possible with cloud-init?
Security should be considered here as well; it's a vulnerable area where an intruder could get full control over the target.
Lastly, we need to think about how we can use the webserver to present this in a user-friendly way.
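As a reference point, a minimal sketch of sanity-checking such a YAML before it is written to the target (the `#cloud-config` header and keys are standard cloud-init; the flashing-command flag it would back is hypothetical):

```python
#!/usr/bin/env python3
"""Sketch: validate a user-supplied cloud-init file before flashing.

The CLI flag this would back (something like `--user-data user-data.yaml`)
is hypothetical; the #cloud-config format and keys below are plain
cloud-init, nothing Turing Pi specific.
"""
import sys
import yaml  # pyyaml

EXAMPLE = """\
#cloud-config
hostname: node1
ssh_authorized_keys:
  - ssh-ed25519 AAAA... user@host
package_update: true
"""

def validate_user_data(text: str) -> dict:
    if not text.startswith("#cloud-config"):
        sys.exit("user-data must start with '#cloud-config'")
    data = yaml.safe_load(text)
    if not isinstance(data, dict):
        sys.exit("user-data must be a YAML mapping")
    return data

if __name__ == "__main__":
    print(validate_user_data(EXAMPLE))
```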
@ruslashev I think the ability to expose eMMC/SD card storage to the outside would be an excellent feature as well. I was wondering whether it would make sense to expose storage as NFS or NBD.
Would this be exposing the entire eMMC/SD card to the modules? Or just a subdirectory or virtual block device?
If I recall correctly, NBD would only work read-only if you wanted multiple modules to use it at once; for read-write access by multiple modules it would have to be NFS.
I suppose you could expose separate per-module config drives as well for cloud-init/Ignition purposes.
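For the per-module case, a sketch of generating NoCloud seeds on the BMC (the node names and the /var/lib/seeds path are made up; the `user-data`/`meta-data` file names and the `instance-id`/`local-hostname` keys are standard cloud-init NoCloud):

```python
#!/usr/bin/env python3
"""Sketch: generate per-module NoCloud seed directories on the BMC.

The seed location and one-seed-per-slot layout are assumptions for
illustration; the file and key names are standard cloud-init NoCloud.
"""
from pathlib import Path

SEED_ROOT = Path("/var/lib/seeds")  # hypothetical location on the BMC

def write_seed(node: str) -> None:
    seed = SEED_ROOT / node
    seed.mkdir(parents=True, exist_ok=True)
    # NoCloud requires a meta-data file with at least an instance-id.
    (seed / "meta-data").write_text(
        f"instance-id: {node}\nlocal-hostname: {node}\n"
    )
    (seed / "user-data").write_text(
        "#cloud-config\n"
        f"hostname: {node}\n"
    )

if __name__ == "__main__":
    for slot in range(1, 5):
        write_seed(f"node{slot}")
```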
You could also add the IP address 169.254.169.254 (or, more modern, the IPv6 address fe80::a9fe:a9fe) on the BMC itself, so that images with cloud-init could fetch their data over the network from configs served via HTTP from the Turing Pi itself. This way you do not need to tinker with the flashed image during the process: the $IPADDRESS/user-data and /meta-data endpoints just need to serve prepared configuration files depending on the MAC or IP of the SBC itself. This reduces complexity compared to modifying /boot on CM4s, and it might also help with other SBCs that have different partition schemes; it just needs cloud-init in the flashed image, as in the Ubuntu images for RPis.
Yeah, a metadata service on the BMC would be cool
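A rough sketch of such a service (the seed directory layout and keying by client IP are assumptions; only the idea of serving per-node `user-data`/`meta-data` over HTTP comes from the comment above):

```python
#!/usr/bin/env python3
"""Sketch of a tiny metadata service on the BMC.

Assumptions: 169.254.169.254 is bound on the BMC's bridge to the nodes,
and per-node configs live in /var/lib/seeds/<client-ip>/ containing
`user-data` and `meta-data` files. Paths and layout are illustrative.
"""
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

SEED_ROOT = Path("/var/lib/seeds")

class MetadataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        name = self.path.strip("/")                      # "user-data" or "meta-data"
        node_dir = SEED_ROOT / self.client_address[0]    # keyed by requester IP
        target = node_dir / name
        if name in ("user-data", "meta-data") and target.is_file():
            body = target.read_bytes()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("169.254.169.254", 80), MetadataHandler).serve_forever()
```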
For operating systems that use cloud-init (Ubuntu, etc.) it would be useful to support injecting cloud-init files into the boot partition.
HypriotOS has a `flash` tool that does this: https://github.com/hypriot/flash
It's a simple matter of mounting the boot partition and copying the given `user-data` and `meta-data` files: https://github.com/hypriot/flash/blob/master/flash#L686-L687 https://github.com/hypriot/flash/blob/master/flash#L704-L712
The `flash` tool also uses this approach for other config files (`config.txt`, etc.)
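The same mount-and-edit approach covers `config.txt`-style files as well; here is a sketch of a simple key=value merge into a mounted boot partition (illustrative only, not a re-implementation of the flash tool's logic):

```python
#!/usr/bin/env python3
"""Sketch: apply config.txt overrides inside a mounted boot partition."""
from pathlib import Path

def apply_overrides(config_txt: Path, overrides: dict[str, str]) -> None:
    lines = config_txt.read_text().splitlines() if config_txt.exists() else []
    pending = set(overrides)
    out = []
    for line in lines:
        key = line.split("=", 1)[0].strip()
        if key in pending:
            out.append(f"{key}={overrides[key]}")   # replace existing setting
            pending.discard(key)
        else:
            out.append(line)
    out.extend(f"{k}={overrides[k]}" for k in pending)  # append new settings
    config_txt.write_text("\n".join(out) + "\n")

if __name__ == "__main__":
    # Example overrides; real RPi config.txt options, values chosen arbitrarily.
    apply_overrides(Path("./boot/config.txt"),
                    {"arm_64bit": "1", "enable_uart": "1"})
```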