Waiting on https://github.com/coreos/tectonic-installer/pull/999 to merge before beginning work on VMware.
Is there any news on this?
We're in the middle of updating the VMware platform with some additional features. It's unclear at this time whether this feature will be added. We will accept pull requests if they make sense and don't negatively affect other aspects of the installation.
What's the status of this?
Version 1.8.9 is the latest Tectonic version. You can find the existing VMware installer there: https://github.com/coreos/tectonic-installer/tree/1.8.9-tectonic.1
Any additional bug fixes/patches will be applied to this branch: https://github.com/coreos/tectonic-installer/tree/track-1
However, new features are not currently on our roadmap.
I find it a little bizarre that all the trouble went into the VMware vSphere Terraform configuration and deployment, yet the storage driver was NOT included. It seems rather pointless without it, as it doesn't add anything over the PXE install process... and as it stands, Tectonic struggles not to break the iSCSI storage driver with each update...
Is the intention that people are running CoreOS without using storage?
Why would they?
If CoreOS plans on supporting any non-cloud implementations (apart from OpenShift now), I don't see how it could be useful without including support for the cloud provider storage drivers... which in this case is the vSphere one mentioned here. @alekssaul seemed to do some good work, and I've started down that path myself... but this isn't encouraging.
Tectonic CoreOS needs to improve its VMware support - not leave it broken. As it stands, on-prem Tectonic is a bit of a headache.
Sorry for the delay. We're currently working on the next generation of the installer, which will integrate Tectonic and OpenShift. We'll consider this for that project, but will not be adding this feature in this repo. We'll post a link to the new repo in the README once it's ready.
See our blog for any additional details: https://coreos.com/blog/coreos-tech-to-combine-with-red-hat-openshift
Thank you for the reply.
We have ended up focusing on the iscsi-provisioner and the nearly native iSCSI storage support in K8s, as it seems to be well supported in the current Tectonic CoreOS (with some minor tweaks).
Hopefully this will also receive consideration in the new offering.
With the iSCSI storage and PXE boot approach we have found CoreOS on VMware to be incredibly solid and performant.
VMware has Kubernetes `--cloud-provider=vsphere` support, as described in the documentation [0]. Cloud provider plugins mainly allow integration with VMware's storage stack for block storage, similar to the aws-ebs integration. Apart from the block storage integration, it also supports self-discovery, such as nodes adding metadata about their placement in the VMware infrastructure. I created a short demo: https://youtu.be/gfjwwkTYNRQ (needs CoreOS SSO).
Opening the issue to discuss the right way to implement this. Per the upstream documentation, implementation requires: "Provide the cloud config file to each instance of kubelet, apiserver and controller manager via `--cloud-config=`".
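For context, a minimal sketch of what that cloud config file might look like for the in-tree vSphere provider of this era (field names follow the upstream docs [0]; every value below is a placeholder, not a setting from this repo):

```ini
# Hypothetical /etc/kubernetes/vsphere.conf; all values are placeholders.
[Global]
user = "administrator@vsphere.local"
password = "changeme"
server = "vcenter.example.com"
port = "443"
insecure-flag = "1"
datacenter = "dc-1"
datastore = "datastore-1"
working-dir = "kubernetes"

[Disk]
scsicontrollertype = pvscsi
```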
Since kube-api and the controller-manager are self-hosted, it's possible to inject `--cloud-config` via a Kubernetes secret. However, the kubelet requires the configuration file to be on disk. Since we would need to deploy this file onto disk anyway, it's probably best to have kube-api and the controller-manager use hostPath mounts as well. Since the kubelet already mounts /etc/kubernetes, I've added /etc/kubernetes/vsphere.conf via ignition_file and updated kubelet.service to pass the `--cloud-provider=vsphere` and `--cloud-config=/etc/kubernetes/vsphere.conf` flags.
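Roughly, the Ignition side could look like the following sketch using the Terraform Ignition provider; the data source name and the path to the source file are hypothetical, not the actual repo assets:

```hcl
# Sketch: ship the cloud config to each node via Ignition (names illustrative).
data "ignition_file" "vsphere_conf" {
  filesystem = "root"
  path       = "/etc/kubernetes/vsphere.conf"
  mode       = 384 # 0600 in octal

  content {
    content = "${file("${path.module}/resources/vsphere.conf")}"
  }
}
```

Keeping the file mode at 0600 matters, since the config carries vCenter credentials.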
We also have to modify the assets for the kube-api and controller-manager pods with hostPath volume mounts.
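Something along these lines for the pod specs (an illustrative fragment, not the actual Tectonic assets):

```yaml
# Illustrative fragment for kube-apiserver / kube-controller-manager:
spec:
  containers:
    - name: kube-apiserver
      # ...existing flags, plus --cloud-provider=vsphere and
      # --cloud-config=/etc/kubernetes/vsphere.conf
      volumeMounts:
        - name: etc-kubernetes
          mountPath: /etc/kubernetes
          readOnly: true
  volumes:
    - name: etc-kubernetes
      hostPath:
        path: /etc/kubernetes
```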
`/sys/devices/virtual/dmi/id/product_uuid` [1] is needed for the vmware code to find itself in the virtual machine infrastructure. This was merged in 1.5.3; prior to it, the operator had to manually add the Node's VMware Virtual Machine UUID.
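As a quick sanity check on a booted node (this only returns a stable UUID once the VM-side flag below is set):

```sh
# Verify the VM's BIOS UUID is exposed to the guest:
cat /sys/devices/virtual/dmi/id/product_uuid
```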
Master and worker nodes also need `disk.enableUUID = "1"` as part of their Terraform custom_configuration_parameters (a sketch follows below). Happy to submit a PR if the above sounds acceptable.
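For the Terraform side, a sketch of the VM-side flag (the resource name and all other attributes are elided/illustrative, and the exact syntax depends on the vsphere provider version):

```hcl
resource "vsphere_virtual_machine" "worker" {
  # ...name, vcpu, memory, disk, network_interface elided...

  # Expose the VM UUID to the guest so the cloud provider can find itself.
  custom_configuration_parameters = {
    "disk.enableUUID" = "1"
  }
}
```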
[0] https://kubernetes.io/docs/getting-started-guides/vsphere/
[1] https://github.com/kubernetes/kubernetes/blob/v1.5.3/pkg/cloudprovider/providers/vsphere/vsphere.go#L207-L260