Open dozer75 opened 1 year ago
This also happens if you use Argo Workflows and try to inject the proxy into the workflows. The proxy keeps running while all other containers have exited. The Argo documentation describes how they handle injected sidecars: the issue seems to be that they try to send a kill signal using kubectl exec, and that fails. There is a way to customize this, but the azwi-proxy is based on a very thin Linux distroless base image that has no shell.
One option to resolve this, at least from my point of view, would be to compile the proxy with the option to terminate itself.
Took me days to figure this one out. As @san7hos mentions, if you could issue a pkill to the sidecar proxy, that should do it. But the base image is barebones distroless. The azwi webhook helm chart also doesn't really give you many options to change the proxy image, even if you decided to build your own.
I considered other options to gracefully kill the sidecar. One possibility was to use the OpenKruise Job Sidecar Terminator, but in order for that to work, the proxy container needs an environment variable injected. Again, the azwi webhook helm chart doesn't really give you any option to do so.
As always, I had to scour the depths of the internet for bits and pieces of poorly written Azure documentation scattered here and there and piece them together to figure out a solution. The solution was actually to use azwi as intended: rather than use the proxy sidecar to intercept the IMDS endpoint when ODBC tries to authenticate via the MSI method, just use the projected service account token to authenticate.
I used msal to get the token, followed this half-baked solution, and authenticated to the database via an access token.
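For reference, a minimal sketch of that token exchange with msal, assuming the environment variables the webhook injects (AZURE_CLIENT_ID, AZURE_TENANT_ID, AZURE_FEDERATED_TOKEN_FILE, AZURE_AUTHORITY_HOST) and the Azure SQL scope:

```python
# Sketch: exchange the projected service account token for an Azure AD access
# token using msal's client-assertion (workload identity federation) flow.
# The environment variables used here are the ones the azwi webhook injects.
import os

import msal


def get_sql_access_token() -> str:
    # The projected, auto-rotated service account token mounted into the pod.
    with open(os.environ["AZURE_FEDERATED_TOKEN_FILE"]) as f:
        federated_token = f.read().strip()

    authority_host = os.environ.get(
        "AZURE_AUTHORITY_HOST", "https://login.microsoftonline.com"
    ).rstrip("/")

    app = msal.ConfidentialClientApplication(
        client_id=os.environ["AZURE_CLIENT_ID"],
        client_credential={"client_assertion": federated_token},
        authority=f"{authority_host}/{os.environ['AZURE_TENANT_ID']}",
    )

    # Scope for Azure SQL Database.
    result = app.acquire_token_for_client(
        scopes=["https://database.windows.net/.default"]
    )
    if "access_token" not in result:
        raise RuntimeError(result.get("error_description", "token request failed"))
    return result["access_token"]
```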
Here is sample code I tested on a pod without a sidecar. Disclaimer: I only tested with pyodbc.
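Roughly, the pyodbc side looks like this (a sketch; the server and database names are placeholders, and it reuses the token helper above):

```python
# Sketch: hand the access token to the ODBC driver via the
# SQL_COPT_SS_ACCESS_TOKEN pre-connect attribute. The connection string must
# not contain UID, PWD or Authentication when a token is supplied.
import struct

import pyodbc

SQL_COPT_SS_ACCESS_TOKEN = 1256  # defined in msodbcsql.h


def connect_with_token(access_token: str) -> pyodbc.Connection:
    # The driver expects a 4-byte little-endian length followed by the token
    # encoded as UTF-16-LE.
    token_bytes = access_token.encode("utf-16-le")
    token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

    conn_str = (
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=tcp:myserver.database.windows.net,1433;"  # placeholder
        "Database=mydatabase;"                            # placeholder
        "Encrypt=yes;TrustServerCertificate=no;"
    )
    return pyodbc.connect(conn_str, attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct})


# Example usage, combined with the token helper above:
# conn = connect_with_token(get_sql_access_token())
# print(conn.cursor().execute("SELECT 1").fetchval())
```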
If the mutating webhook controller used native sidecar containers, I think that would resolve this (plus some annoying issues with the proxy being the default container). I might raise a PR.
Describe the bug
We have some jobs and cronjobs running in AKS that connect to an Azure SQL database using ODBC. We are planning to use Managed Identity and workload identity for the authentication in the ODBC driver, and for this we need to inject the proxy sidecar (for some reason).
But by doing this, the job won't end after the job container has completed successfully, since the sidecar proxy is still alive after our container is done.
The job pod is in state NotReady as the proxy container is still running. Here is the dump of the pod:
Steps to reproduce
Create a job where the pod has the azure.workload.identity/inject-proxy-sidecar annotation set to true.
Expected behavior
The best would of course be that ODBC works with the default flow, but somehow it doesn't, so we need to use the sidecar.
The sidecar should be stopped whenever the other container(s) in the pod have completed, enabling the job to complete.
Logs
Environment
Kubernetes version (use kubectl version): Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"b969368e201e1f09440892d03007c62e791091f8", GitTreeState:"clean", BuildDate:"2022-12-16T19:44:08Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"linux/amd64"}
OS (e.g: cat /etc/os-release):
Kernel (e.g. uname -a):
Additional context