I implemented changes to the existing modular recipes to make all components optional, including the orchestrator, the artifact store, and the container registry.
Changes
There's a new external data source, powered by a bash script, that fetches details about the cluster such as the token, the cluster CA certificate, and the endpoint. A custom solution was needed because:
- Making the cluster optional means that the Kubernetes provider receives no configuration when the cluster is disabled, so it can no longer tear down Kubernetes resources.
- The external data source that comes with Terraform has a limitation: it can't handle failures gracefully. In the case where there is no cluster deployment, it would throw an error and disrupt the plan/apply.
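The failure-tolerant script could look roughly like this. This is a sketch under assumed names — `cluster_info`, the `CLUSTER_NAME` input, and the specific kubectl calls are illustrative, not the recipe's actual contents; only the Terraform external data source protocol (flat JSON object of strings on stdout) is the real contract:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Emit cluster connection details as the JSON object that Terraform's
# external data source protocol expects on stdout. When no cluster is
# deployed (or kubectl is unavailable), return empty values instead of
# exiting non-zero, so plan/apply is not disrupted.
cluster_info() {
  local name="${1:-}"
  if [ -z "$name" ] || ! command -v kubectl >/dev/null 2>&1; then
    echo '{"token":"","ca_certificate":"","endpoint":""}'
    return 0
  fi
  local token ca endpoint
  token="$(kubectl create token default 2>/dev/null || true)"
  ca="$(kubectl config view --raw --minify \
        -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' 2>/dev/null || true)"
  endpoint="$(kubectl config view --minify \
        -o jsonpath='{.clusters[0].cluster.server}' 2>/dev/null || true)"
  printf '{"token":"%s","ca_certificate":"%s","endpoint":"%s"}\n' "$token" "$ca" "$endpoint"
}

cluster_info "${CLUSTER_NAME:-}"
```

The key design choice is the empty-JSON fallback: Terraform still gets a well-formed object either way, so downstream resources can condition on empty strings rather than on a failed data source.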
When deploying MLflow, if no bucket is specified, it creates a new one for itself instead of using the artifact store bucket, since that bucket might not exist when the user only wants an experiment tracker. The newly created bucket's name is exposed in the outputs and can be used to register an artifact store.
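In Terraform terms, the fallback could be sketched like this — purely illustrative, shown for GCS; the variable, resource names, and naming scheme are assumptions, not the recipe's actual code:

```hcl
# Illustrative only: create a dedicated MLflow bucket when none is given.
variable "mlflow_bucket" {
  description = "Existing bucket for MLflow artifacts; empty to create one"
  type        = string
  default     = ""
}

resource "random_id" "mlflow_suffix" {
  byte_length = 4
}

resource "google_storage_bucket" "mlflow" {
  count    = var.mlflow_bucket == "" ? 1 : 0
  name     = "mlflow-artifacts-${random_id.mlflow_suffix.hex}"
  location = "US"
}

# Surface the bucket actually in use, so it can later be registered
# as an artifact store.
output "mlflow_bucket_name" {
  value = var.mlflow_bucket != "" ? var.mlflow_bucket : google_storage_bucket.mlflow[0].name
}
```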
In the case of k3d, a container registry is automatically created whenever any service that uses the cluster is deployed. This is reflected in the outputs of the recipe.
All output files now reflect these changes and fall back to a default artifact store and orchestrator when none is deployed. The idea is that in every deployment case they still make sense and remain a valid stack YAML.
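For illustration, a generated stack file with those fallbacks might look something like this — the schema, component names, and values are assumptions chosen to show the shape, not the exact recipe output:

```yaml
# Illustrative stack YAML: local defaults stand in for components
# that were not deployed, keeping the file valid either way.
stack_name: recipe_stack
components:
  artifact_store:
    flavor: local            # default: no remote artifact store deployed
  orchestrator:
    flavor: local            # default: no orchestrator deployed
  experiment_tracker:
    flavor: mlflow
    tracking_uri: https://mlflow.example.com   # placeholder from recipe outputs
```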
Kubernetes is now available as a separate deployment option for the orchestrator.
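One way to picture the resulting per-component toggles (variable names are hypothetical; the point is that each component, including the Kubernetes orchestrator, is enabled independently):

```hcl
# Hypothetical toggles; every component can be switched on independently.
variable "enable_orchestrator_kubernetes" {
  description = "Deploy the Kubernetes-native orchestrator on the cluster"
  type        = bool
  default     = false
}

variable "enable_artifact_store" {
  type    = bool
  default = false
}

variable "enable_container_registry" {
  type    = bool
  default = false
}
```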
This also includes a host of pending bug fixes.