Azure / azure-dev

A developer CLI that reduces the time it takes for you to get started on Azure. The Azure Developer CLI (azd) provides a set of developer-friendly commands that map to key stages in your workflow - code, build, deploy, monitor, repeat.
https://aka.ms/azd
MIT License

Usage of ARM as `containerapp` code deployment mechanism #1935

Open weikanglim opened 1 year ago

weikanglim commented 1 year ago

There are some current problems with the containerapp implementation that stem from using ARM / Bicep as the code deployment mechanism.

We've noticed a few downsides:

  1. Bug: https://github.com/Azure/azure-dev/issues/1720 - Container app's usage of ARM deployment state is not recovered.
  2. Bug: https://github.com/Azure/azure-dev/issues/1336 - It is easy for a user to end up including the ARM deployment that updates the containerapp revision in their infra provisioning as well. Done incorrectly, this leads to the issues described there, and the current ACA ToDo templates are written in exactly that way.
  3. UX: A bicep module to deploy a containerapp resource is complicated. The user has to write an entire bicep module even if they simply want to deploy a container image with no additional changes. Our template story simplifies this, but ideally it can be simplified further: users should be able to deploy a container image without writing more infrastructure-as-code files.
  4. UX: A bicep module to deploy a containerapp resource is too flexible, and the contract to azd is too loose. There is very little validation azd can do to ensure that a containerapp service is written in a way that guarantees code deployment; the bicep can succeed without the code actually being deployed. It is also hard for azd to guide the user to the correct steps to fix the bicep template.
  5. UX: It is hard to debug a failed containerapp deployment, because the failure could come from something unrelated to the actual code deployment, such as bicep validation, or from a missed piece of the loose contract between azd and the bicep module that deploys the revision.
  6. UX: The containerapp deployment implementation is drastically different from deployments for other hosts. This creates a learning curve for users and increases product maintenance burden over time.
  7. Performance: Deployment times are slower when ARM deployment is used (anecdotally, ~1 minute for a simple image revision change, compared to seconds).

A different strategy may be to use the ARM REST API as the direct deployment mechanism. This would be similar to how `az containerapp update` works, which calls into the Container Apps REST API to update a revision. We'd also need to support a configuration file (similar to `az containerapp update --yaml <config file>`) that applies revision update properties.
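By way of illustration, here is a minimal sketch (not a committed design) of what a direct revision update could look like from Go using the `armappcontainers` management SDK, which wraps the Container Apps management REST API that `az containerapp update` also goes through. The subscription, resource group, app name, and image values are placeholders, and the sketch assumes the app already exists with at least one container in its template.

```go
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/appcontainers/armappcontainers/v2"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}

	client, err := armappcontainers.NewContainerAppsClient("<subscription-id>", cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()

	// Fetch the container app that was previously provisioned via infra.
	resp, err := client.Get(ctx, "<resource-group>", "<container-app-name>", nil)
	if err != nil {
		log.Fatal(err)
	}

	// Point the first container at the newly built image; everything else in
	// the revision template is left untouched.
	app := resp.ContainerApp
	app.Properties.Template.Containers[0].Image = to.Ptr("<registry>.azurecr.io/<image>:<tag>")

	// PATCH the container app, which rolls a new revision -- roughly what
	// `az containerapp update` does, without an ARM template deployment.
	poller, err := client.BeginUpdate(ctx, "<resource-group>", "<container-app-name>", app, nil)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := poller.PollUntilDone(ctx, nil); err != nil {
		log.Fatal(err)
	}
}
```

A revision-properties config file (along the lines of `az containerapp update --yaml`) could then simply be deserialized into the `Template` portion of the envelope before the PATCH, rather than being expressed as Bicep.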

From azd's point of view, integrating directly with the ARM REST API makes code deployment strictly a code deployment operation, which aligns with all other host-specific deployment mechanisms and solves the complexities and challenges presented above. It provides the user with a more streamlined model of how code should be deployed to containerapp services.

Some challenges to consider as we explore:

  1. How does a user orchestrate more complex revision updates in this model? In the bicep model, the user had full control, which I suspect allowed more complex code deployment rules. This is probably rarely used in practice today, but we should consider what the long-term story is.
  2. The container app resource differs from other hosting models in that the Container App resource itself is the running container (i.e., the code artifact being deployed). This may create some confusion about whether the Container App resource should be created in infra or implicitly via code deployment. This requires some thought, but can be solved with implementation details; one possible approach is sketched below.
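For the second challenge, one possible (purely illustrative) approach is for azd to probe for the resource and decide between an initial create and a revision-only update. The helper below is hypothetical and uses the same placeholder assumptions as the earlier sketch.

```go
package containerapp

import (
	"context"
	"errors"
	"net/http"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/appcontainers/armappcontainers/v2"
)

// deployContainerApp sketches one way azd could resolve the "infra vs. code
// deployment" question: if the container app does not exist yet, create it
// from the desired envelope (e.g. assembled from infra outputs); if it does,
// treat the operation purely as a code deployment and roll a new revision.
func deployContainerApp(ctx context.Context, client *armappcontainers.ContainerAppsClient,
	rg, name string, desired armappcontainers.ContainerApp) error {

	_, err := client.Get(ctx, rg, name, nil)
	if err != nil {
		var respErr *azcore.ResponseError
		if errors.As(err, &respErr) && respErr.StatusCode == http.StatusNotFound {
			// First deployment: create the full resource.
			poller, err := client.BeginCreateOrUpdate(ctx, rg, name, desired, nil)
			if err != nil {
				return err
			}
			_, err = poller.PollUntilDone(ctx, nil)
			return err
		}
		return err
	}

	// Resource already exists: PATCH only, which creates a new revision.
	poller, err := client.BeginUpdate(ctx, rg, name, desired, nil)
	if err != nil {
		return err
	}
	_, err = poller.PollUntilDone(ctx, nil)
	return err
}
```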
jongio commented 1 year ago

We should look into a GitOps pattern for all k8s desired-state style deployments.