Closed genadyk closed 5 years ago
This is a valuable ask, and I would argue this is a must for v1.0 @jeffhollan
It's throwing the error today for correctness, since we can't tell which container to load the envVars from. I'm not sure about prefixing the secret name with the container name; we would still have to figure out which container name to pick. We could default to `Container[0]`, though I'm not sure if Kubernetes guarantees the order. We could return a `map[string]map[string]string` and let the scalers deal with it, though that just shifts the problem down one level. Or we could introduce `containerName` into the ScaledObject spec.
This way behavior is clear and users are well aware of what to expect.
After thinking about this again, I think I'm not getting the point (I'm not used to Go).
But conceptually I think KEDA should be scaling the deployment, which will implicitly scale the whole pod with all the containers inside it.
Is this how it works as of today? If so, what is your concern @genadyk?
The issue is that KEDA will not do that if your deployment is for a pod that contains more than one container.
Each container in a pod gets to have its own set of environment variables and secrets. KEDA needs to be able to resolve the secrets (connection strings, passwords, etc.) of the event source to check its status. Since environment variable keys could be duplicated (think of a sidecar container that also writes blobs, queue messages, etc.), the behavior of KEDA might not be predictable in certain cases.
I think requiring a containerName if you have more than one makes the most sense.
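To make the proposal concrete, here is a sketch of what a `containerName` field might look like on a ScaledObject, modeled on the azurequeue example linked later in this thread. This is hypothetical: `containerName` is the field being proposed in this discussion, not an existing part of the spec, and the resource names are made up.

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: azure-queue-scaledobject
spec:
  scaleTargetRef:
    deploymentName: myapp
    # Hypothetical field from this proposal: tells KEDA which container's
    # env vars and secrets to resolve trigger metadata against, so a
    # multi-container pod (e.g. app + sidecar) is unambiguous.
    containerName: myapp-container
  triggers:
    - type: azure-queue
      metadata:
        queueName: myqueue
        # Resolved from the named container's environment, not the sidecar's.
        connection: AzureWebJobsStorage
```

With a single-container pod the field could simply be optional and default to that container.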
Oh, now I get it, sorry for that!
Using `containerName` is one option, but where does that leave your sidecar? If I go from 1 to 50 instances, I will only have 1 sidecar, which might not be able to keep up.
What if we would:
1. Use `containerName`, as suggested above?
2. Or is there a way where we can decouple the connection (i.e. https://github.com/kedacore/keda/blob/master/examples/azurequeue_scaledobject.yaml#L17) from the deployment and assign it to the scaler, so that we do not depend on the environment variable on the container itself? Counter argument: duplicate configuration for the connection.
@tomkerkhove you cannot scale individual containers within a Pod - we always scale the entire pod with all its containers, as this is the way Kubernetes works.
So if you go from 1 to 50 instances, you will have 50 sidecars.
We cannot expect users to copy and paste all the env vars to all containers.
A simple `containerName` property on the ScaledObject solves this problem, as it tells KEDA which container is the one to scale by and to look for its env vars and secrets.
> @tomkerkhove you cannot scale individual containers within a Pod - we always scale the entire pod with all its containers, as this is the way Kubernetes works.

That's what I figured but didn't get from the above, so I'm happy you verified this.
> A simple `containerName` property on the ScaledObject solves this problem as it tells KEDA which container is the one to scale by and look for its env vars and secrets.

Now I get it: the `containerName` points to the container that has the environment variables KEDA needs to determine whether the whole pod should scale or not. Gotcha. Sounds like a good idea imo.
Now I will shamefully start my weekend after making no sense in this thread, sorry guys.
From the code scale_handler.go:348 (b1ea9ca), multi-container is not supported by design. In my case it prevents me from adding a fluentd sidecar container. I think it's a pretty common use case that is blocked by a design choice.
Referenced envVars or secrets could be prefixed by the container name, for example, or the trigger configuration could be made more explicit.
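As a hypothetical illustration of the prefixing alternative (nothing here is implemented; the prefix syntax and names are made up for the sake of argument), a trigger could scope its connection reference to a specific container:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: azure-queue-scaledobject
spec:
  scaleTargetRef:
    deploymentName: myapp
  triggers:
    - type: azure-queue
      metadata:
        queueName: myqueue
        # Hypothetical syntax: prefix the env var reference with the
        # container name so KEDA knows which container's environment
        # (and mounted secrets) to resolve it from, even when a sidecar
        # defines a variable with the same key.
        connection: myapp-container/AzureWebJobsStorage
```

The trade-off versus a single `containerName` property is that every trigger entry has to repeat the container name, but it would allow different triggers on one ScaledObject to resolve against different containers.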