dgerd opened this issue 5 years ago
To support PVCs when scaling apps up and down, how would the data be handled? Will Knative take care of it, or should the app handle data migration gracefully on scale-up and scale-down?
Would it be possible to add `emptyDir` as an allowed type? It holds no durable state, should not present a scaling problem, and provides read-write storage outside the Docker overlay filesystem. Allowing `emptyDir` with `medium: Memory` would also let users create larger tmpfs mounts, so write-intensive operations can happen in memory rather than on disk, which is orders of magnitude faster and avoids wearing down SSDs on self-hosted instances. The default size of `/dev/shm` is only 64M, and one of the suggested workarounds for increasing it is in fact an `emptyDir` with `medium: Memory`.
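A plain-Kubernetes sketch of that workaround (names and the image are placeholders); mounting a memory-backed `emptyDir` over `/dev/shm` replaces the default 64M tmpfs:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shm-example                   # hypothetical example
spec:
  containers:
    - name: app
      image: example.com/app:latest   # placeholder image
      volumeMounts:
        - name: dshm
          mountPath: /dev/shm         # overrides the 64M default shm
  volumes:
    - name: dshm
      emptyDir:
        medium: Memory                # tmpfs, backed by RAM
        sizeLimit: 1Gi                # counts against the pod's memory usage
```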
I suppose that can be solved by writing a custom MutatingAdmissionWebhook.
I have a very valid use case with Odoo, which saves all generated attachments on an NFS share when using multi-tenant deployments, and this is in use on an actual k8s deployment with Istio.
Knative can make things easier for us, but we can't drop the NFS (it isn't even a source of state for us). There should be some way to accomplish this. If it's not an issue with k8s, it shouldn't be a constraint when using Knative. That NFS share should not impact a Knative deployment at all.
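In plain Kubernetes terms, the mount we depend on looks roughly like this (server, path, and mount point are placeholders); this is the kind of spec that Knative's validation currently rejects:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: odoo-example               # hypothetical example
spec:
  containers:
    - name: odoo
      image: odoo:latest           # placeholder image
      volumeMounts:
        - name: attachments
          mountPath: /var/lib/odoo # placeholder mount point
  volumes:
    - name: attachments
      nfs:
        server: nfs.example.com    # placeholder NFS server
        path: /exports/odoo        # placeholder export path
```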
@gustavovalverde Thanks for sharing your use-case. This is something that is on the radar of the API Working Group, but we do not have someone actively working on this right now.
The "Binding" pattern as talked about in https://docs.google.com/document/d/1t5WVrj2KQZ2u5s0LvIUtfHnSonBv5Vcv8Gl2k5NXrCQ/edit#heading=h.lnql658xmg9p could be a potential workaround to inject these into the deployment that Knative creates while we work on getting this issue resolved. See https://github.com/mattmoor/bindings for examples.
cc @mattmoor
@dgerd @mattmoor I'd really appreciate an example on how to use bindings for this use case. I'll test it and give the feedback here so others with the same restriction can use this workaround.
@dgerd and I spent some time discussing this idea before the holidays began. I think he wanted to try to PoC it. If not, then he and I should probably write up a design doc to capture our thoughts/discussion.
@mattmoor Do I read this correctly that I cannot use ReadWriteMany PVCs at all in a Knative Service? I have a simple uploader service that needs to deposit data in an Azure Files PVC volume. I understand the desire for statelessness, but I don't see this as different from inserting data into a database; the "persistence" isn't in the pod in either case. Thanks for any insight. --jg
I don't think we've figured out how to allow this in a way that doesn't have pointy edges that folks will stick themselves on. I totally agree that the filesystem is a useful abstraction for write-many capable devices.
Bumping this issue because it is something that most users I have met want to do.
I don't think we've figured out how to allow this in a way that doesn't have pointy edges that folks will stick themselves on.
True, but realistically we will most likely never be able to prevent users from shooting themselves in the foot. We have seen them bypass the limitations with webhooks to inject sidecars, use the downward API, and mount volumes anyway.
The binding pattern is really interesting but maybe too complicated for typical users who just want to have the Kn Pod Spec be 100% compatible with the k8s Pod Spec.
As an example of what JR said above, both Datadog and New Relic use Unix domain sockets to collect metrics, and exposing that is going to be important to support customers using these systems. In the case of Datadog, the predominant pattern is to deploy the agent as a DaemonSet on the cluster and have customer workloads use a Unix domain socket to send metrics to the agent local to the node. An alternative is to use the host IP within the user code to send the metrics to the DaemonSet, but to ensure that the metrics are sent to the host node and not a random node in the system, the user has to use the k8s downward API to feed the host IP to the revision; that doesn't work either, because we don't support the k8s downward API.
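For reference, the host-IP pattern described above looks like this in plain Kubernetes (`DD_AGENT_HOST` is Datadog's convention; the image is a placeholder). The `fieldRef` is exactly the downward-API usage that Knative currently rejects:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: datadog-client-example       # hypothetical example
spec:
  containers:
    - name: app
      image: example.com/app:latest  # placeholder image
      env:
        - name: DD_AGENT_HOST        # address of the node-local agent
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
```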
Would love to get everyone's opinion on two things:
Can we extend the current list and support hostPath? While this could potentially have pointy edges, the lack of it is going to be an adoption blocker for a large set of scenarios, especially ones that involve DaemonSets (very common in logging & monitoring scenarios).
Can we build an extension point here and allow vendors to extend this default list with vendor-specific additions? That way Knative can still focus on a set of core scenarios, and vendors will be responsible for supporting and maintaining their additions to the list.
True, but realistically we will most likely never be able to prevent users from shooting themselves in the foot. We have seen them bypass the limitations with webhooks to inject sidecars, use the downward API, and mount volumes anyway.
Yep, I agree. I think my prior comment is likely easily misinterpreted as "No, we need to solve this problem", but my intent was simply to convey that this isn't a slam dunk, there are downsides/gotchas that we'll have to be sure to clearly document.
The binding pattern is really interesting but maybe too complicated for typical users who just want to have the Kn Pod Spec be 100% compatible with the k8s Pod Spec.
The position I've been advocating is actually to expand the surface of PodSpec that we allow, to enable the binding pattern to target Service (as the subject) vs. forcing folks to reach around and use it with our Deployments. Sure, it can be used to reach around us, but I agree that here that is inappropriate and overkill.
Can we extend the current list
I think we should absolutely expand the list, I have mixed feelings on hostPath (aka privilege), but we should discuss on a WG call. Especially with multiple container support coming the filesystem becomes an extremely interesting channel for intra-pod communication. The Google Cloud SQL proxy comes to mind 😉
I think at this point what we need is someone to drive the feature by putting together the appropriate feature track documentation and running it through the process.
Issues go stale after 90 days of inactivity.
Mark the issue as fresh by adding the comment `/remove-lifecycle stale`.
Stale issues rot after an additional 30 days of inactivity and eventually close.
If this issue is safe to close now, please do so by adding the comment `/close`.
Send feedback to the Knative Productivity Slack channel or file an issue in knative/test-infra.
/lifecycle stale
I think we still want this /remove-lifecycle stale
Yes, this could be behind a feature flag. I'll take a look after I add support for Downward API.
Hi, is there a workaround for this or is it a WIP?
A workaround is to use a Webhook to inject what you want in the Pod Spec. Not ideal. This is a WIP, but I don't think anyone is working on it right now.
@JRBANCEL I could have a look at this.
Great. You can look at the various features behind feature flags for inspiration, for example: https://github.com/knative/serving/pull/8126
Thanks @JRBANCEL, this probably needs an official design document/proposal. I will work on it.
/assign
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with `/reopen`. Mark the issue as fresh by adding the comment `/remove-lifecycle stale`.
It would be good to either document the principles here (e.g. avoid state storage and sharing between Pods, as it's against the stateless design and tends to lead to awkward failure and scaling modes), and/or to make this a flag-guarded "defaults are safe, but you can unlock the hood and reach into the running engine if you must" list.
/triage accepted
I have a use case where we're looking to use Knative to facilitate autoscaling of machine learning services that load large artifacts on demand. To illustrate, services that look something like the TensorFlow embedding projector https://projector.tensorflow.org/ with a large embedding preloaded.
The k8s pod spec pattern we are currently using is an initContainer that copies artifacts from a PVC into an emptyDir for the main container to use. This allows relatively fast loading of these large (~1 GB) artifacts compared to, for example, downloading from S3 on every startup.
I was hoping to use Knative to enable pod autoscaling of these services, as various expensive machine types (e.g. GPUs, high-RAM instances) are required, and having an instance of the service running for every combination of the artifacts is infeasible.
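The pattern described above can be sketched in plain Kubernetes as follows (names, images, and the claim are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: artifact-loader-example               # hypothetical example
spec:
  initContainers:
    - name: load-artifacts
      image: busybox:1.36
      command: ["sh", "-c", "cp -r /artifacts/. /scratch/"]
      volumeMounts:
        - name: artifact-store                # shared source of truth
          mountPath: /artifacts
          readOnly: true
        - name: scratch                       # pod-local working copy
          mountPath: /scratch
  containers:
    - name: model-server
      image: example.com/model-server:latest  # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /models
  volumes:
    - name: artifact-store
      persistentVolumeClaim:
        claimName: artifacts-pvc              # placeholder claim
    - name: scratch
      emptyDir: {}
```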
Is an artifact loading + autoscaling use case like this out of scope for Knative?
Also, are there any further resources for the suggested workarounds? The Google Doc here is private.
The "Binding" pattern as talked about in https://docs.google.com/document/d/1t5WVrj2KQZ2u5s0LvIUtfHnSonBv5Vcv8Gl2k5NXrCQ/edit#heading=h.lnql658xmg9p could be a potential workaround to inject these into the deployment that Knative creates while we work on getting this issue resolved. See https://github.com/mattmoor/bindings for examples.
A workaround is to use a Webhook to inject what you want in the Pod Spec. Not ideal. This is a WIP, but I don't think anyone is working on it right now.
Edit: Is there an example for this pattern?
https://knative.tips/pod-config/volumes/ As a workaround for using other storage volumes, you can write a native Kubernetes app that mounts such volumes and call it from your Knative apps.
Similar to @yovizzle, I have a use case where we would like to use an init container to download static (but changing over time) files to a Pod on deployment, and `emptyDir` would be perfect for this. Otherwise I would also appreciate some documentation on how to use the mentioned workarounds.
Hi @7adietri, just out of interest, to understand the use case fully: what are the primary reasons for doing this with an initContainer and emptyDir rather than having the main user container download the files on startup before responding to requests?
@julz Separation of concerns, mostly. The init container is using a cloud provider image and runs the provider's tool for downloading files from a bucket. The service/main container doesn't need to know about any of this, the files are "just there".
I have a concern around lifetimes and downloading content at init -- if the content changes, you could end up with a mix of content for an unknown duration as serving uses a mix of old and new Pods to handle requests until scaled down.
If there was a way to run the cloud provider image as a continuous sidecar, that would mitigate a lot of my concerns (rollback would still be harder, because there would be two different places to look).
@evankanderson In our case the URL changes with each content update and is part of the deployment manifest, so each content change causes a new deployment. Using different versions of the files until all Pods have been replaced is fine for us, and would probably be the same if they were continuously downloaded into running Pods.
I have the same approach as @yovizzle. I am trying to keep machine learning model weights in a separate container outside the serving container. Using initContainers, I copy the new model weights from the weights container into the unchanged serving container at startup via emptyDir volume mounts. Without emptyDir in Knative, I would need to push a single container with several gigabytes of weights to the registry.
We also need emptyDir support in order to make use of Knative. We're doing transformations of huge data volumes (potentially some GBs) where the intermediate results are stored in an embedded/local H2 database and retrieved via SQL. Above a certain H2 cache size these results are stored on disk.
We want to introduce autoscaling via Knative, and our application seems to fit the requirements: the transformations are stateless and independent of each other, and the cache is no longer used after a transformation finishes. From my point of view, emptyDir volumes look consistent with the Knative approach; I really hope they'll get implemented soon!
Same here. I have an app that is already on Knative and now needs to move away from it just because I need an emptyDir mount. Roughly the same use case as @7adietri: it's a swagger-ui showing a merged view of OpenAPI spec files. Rarely accessed, but when it is, it should be up to date. On startup I fetch the swagger sources, merge them into one using swagger-cli, and then serve the merged file, shutting down after 30 minutes without requests. I'm now making this a k8s Deployment that has to run all the time; a few extra seconds on startup would be totally acceptable.
For the Deployment I need to hand-craft a way to restart it from time to time to pick up updated specs, which I could otherwise skip entirely.
Hi, I am back on this, I am writing the feature track, will update.
The feature track doc for this can be found here; feel free to add comments (added in the Knative team drive). It is a draft version to get the discussion going and hopefully move things forward.
Some thoughts
1. emptyDir as a scratch space makes sense, especially since it's tied to the pod's lifecycle.
2. I'm not really convinced that downloading models (via an initContainer) into an emptyDir makes a lot of sense.
To expand on 2), it seems really inefficient and you don't benefit from any caching: any additional pods that are scaled up (even on the same node) will always download the model. Maybe the better pattern is to bundle the models in an OCI image and access them as a sidecar; subsequent starts would then be much faster, since the model images are cached by Kubernetes.
Any additional pods that are scaled up (even on the same node) will always download the model?
Yes, but this is not a major concern for us, our Pods are relatively long-lived.
Maybe the better pattern is to bundle them in an OCI image and access them as a sidecar.
Maybe, but `emptyDir` is easy to understand and use, whereas I haven't even heard of OCI images before. 😬
whereas I haven't even heard of OCI images before.
OCI is the standard for container images that run in Docker, Kubernetes, etc.
@dprotaso using init containers to fetch models is an approach already used by Seldon (MLOps) and others. Check here and here. So it is nothing new in that domain.
Seldon Core uses Init Containers to download model binaries for the prepackaged model servers.
They also support autoscaling via HPA and KEDA. KFServing (Kubeflow) does the same: https://github.com/kubeflow/kfserving/issues/849. This does not mean it is the only way or the best on every occasion; it is actually quite common to deploy a model bundled together with a server that exposes an API for predictions. Caching is one benefit, as you mentioned, and it means shorter scaling times. However, not all environments allow or want to pre-package everything, for example for data governance reasons. Also, if the model is not that big, it should not be an issue. In addition, sometimes people want to experiment, and this can be much easier than building images, so from a UX perspective it is also acceptable.
@dprotaso I will update the feature track based on the latest comments and will start implementation asap if there are no objections. Some basic scenarios like emptyDir support should be there imho.
Any update on this? I have a use case to increase the shm-size when deploying Triton Inference Server on GKE using KServe. The only solution I could find online was this, but it gives a Knative error:
Warning InternalError 14s (x2 over 29s) v1beta1Controllers fails to reconcile predictor: admission webhook "validation.webhook.serving.knative.dev" denied the request: validation failed: expected exactly one, got neither: spec.template.spec.volumes[0].configMap, spec.template.spec.volumes[0].projected, spec.template.spec.volumes[0].secret
@swapkh91 Support for `emptyDir` volumes is available, behind a feature flag.
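For anyone landing here later: the flag lives in the `config-features` ConfigMap in the `knative-serving` namespace (key name as per the Knative feature-flags documentation; verify against your installed version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features
  namespace: knative-serving
data:
  kubernetes.podspec-volumes-emptydir: "enabled"
```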
@7adietri yeah, I tried that by enabling it in the ConfigMap, then changed my yaml as per this. The above error was resolved, but now I'm getting the error below in the Triton logs:
'boost::interprocess::interprocess_exception' what(): Read-only file system
Can't find anything to resolve this. Does `mountPath` have to be `/cache` as per Knative, or should anything work?
@swapkh91 I'm sorry, I don't know what the issue is in your particular setup beyond enabling `emptyDir` support. (And this issue is probably not the appropriate place, but `Read-only file system` seems worth investigating.)
@7adietri yeah looking into it. Anyway, thanks for the reply
Hi @swapkh91!
mountPath has to be /cache as per Knative or anything should work?
It can be anything.
@skonto thanks for the confirmation. The problem I'm still facing is the read-only thing, as I've stated above. Not sure if it's coming from the Knative side or KServe.
Probably it's the Triton container? I see a similar bug here; also here things seem to be read-only. Could you test a trivial service and compare with Triton:
```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: emptydir
  namespace: default
spec:
  template:
    spec:
      containers:
        - imagePullPolicy: Always
          image: docker.io/skonto/emptydir
          volumeMounts:
            - name: data
              mountPath: /data
          env:
            - name: DATA_PATH
              value: /data
      volumes:
        - name: data
          emptyDir: {}
```
`kubectl describe pods` in my setup shows (rw):

```
...
Environment:
  DATA_PATH:        /data
  PORT:             8080
  K_REVISION:       emptydir-00001
  K_CONFIGURATION:  emptydir
  K_SERVICE:        emptydir
Mounts:
  /data from data (rw)
```
What do you see on your side? The Triton error happens at runtime, I suppose.
@skonto yeah, it works for a trivial service. I modified the yaml a bit, to see if `/dev/shm` is accessible, and it still worked:
```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: emptydir
  namespace: default
spec:
  template:
    spec:
      containers:
        - imagePullPolicy: Always
          image: docker.io/skonto/emptydir
          volumeMounts:
            - name: dshm
              mountPath: /dev/shm
          env:
            - name: DATA_PATH
              value: /dev/shm
      volumes:
        - name: dshm
          emptyDir:
            medium: Memory
```
In the pod description I can see:

```
Environment:
  DATA_PATH:        /dev/shm
  PORT:             8080
  K_REVISION:       emptydir-00001
  K_CONFIGURATION:  emptydir
  K_SERVICE:        emptydir
Mounts:
  /dev/shm from dshm (rw)
```
Then I guess it's from the KServe side! Also, KServe currently uses Knative version 0.23.2 (as per the quick_install script), which is also incompatible with using emptyDir; I changed it to the current version in my case.
My use case is to increase the shm-size, which can normally be done with `docker run --shm-size=1g ...`. The solution suggested online for Kubernetes is to use emptyDir: link
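For the shm-size goal specifically, the memory-backed `emptyDir` can carry a `sizeLimit`; this volume fragment (value illustrative) is the Kubernetes analogue of the Docker flag:

```yaml
volumes:
  - name: dshm
    emptyDir:
      medium: Memory   # tmpfs mounted at the container's mountPath
      sizeLimit: 1Gi   # analogous to docker run --shm-size=1g
```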
Hi @skonto, in your comment here you said:
Hi, rw access is allowed for emptyDir only at the moment, but full rw PVC support is coming. @Phelan164 the code you mention excludes emptyDir volumes, so if emptyDir is used you should get write access unless something else is wrong in the given setup (not K8s-spec related). All other volume types will remain in read-only mode unless explicitly defined otherwise. There are thoughts, though, to allow the complete K8s pod spec via some flag to enable this on demand, e.g. for advanced users.
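For later readers: in recent Knative releases, PVC support and write access are gated behind feature flags along these lines (key names should be verified against the feature-flags documentation for your installed version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features
  namespace: knative-serving
data:
  kubernetes.podspec-persistent-volume-claim: "enabled"
  kubernetes.podspec-persistent-volume-write: "enabled"
```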
I would like to know if read and write access is currently supported for PVCs in an InferenceService?
With `containers` allowing more than one container, it makes sense to allow `emptyDir` with `medium: Memory` so that these multiple containers can exchange data (e.g. logs) via the volume.
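A sketch of that shape as a Knative Service (assumes the multi-container and emptyDir feature flags are enabled; names and images are placeholders):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: shared-scratch-example              # hypothetical example
spec:
  template:
    spec:
      containers:
        - name: app
          image: example.com/app:latest     # serving container (placeholder)
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: shared
              mountPath: /shared            # writes e.g. logs here
        - name: log-reader
          image: example.com/reader:latest  # sidecar reading the same volume
          volumeMounts:
            - name: shared
              mountPath: /shared
              readOnly: true
      volumes:
        - name: shared
          emptyDir:
            medium: Memory                  # in-memory exchange between containers
```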
Bumping, this is a feature my project also needs! It would be very useful.
In what area(s)?
/area API
Describe the feature
We currently only allow configmap and secret volumes to be mounted into the user container. This constraint is in place because volumes are a source of state and can severely limit scaling. This feature request is to relax the constraint to allow a larger set of volume types that work well with serverless functions.
I do not have a particular set of volume types in mind, but #4130 may be a good example.
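For context, a spec using the volume types accepted today looks like this (names are placeholders):

```yaml
spec:
  template:
    spec:
      containers:
        - image: example.com/app:latest  # placeholder image
          volumeMounts:
            - name: app-config
              mountPath: /etc/app
              readOnly: true
      volumes:
        - name: app-config
          configMap:                     # configMap and secret are the only types allowed today
            name: app-config
```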