Open gabbler97 opened 8 months ago
Hello Everyone! Any clue?
Hello @gabbler97
I am sorry, but this timeout is not currently configurable.
I will add this request to our open issues backlog.
I would post your question about Knative Serving in the CNCF Serving Slack channel. You might get some help there.
In addition to a simple --timeout option, I would prefer that we detect when a new node is being allocated, inform the user, and auto-increase the timeout.
Dear @lkingland , Thank you very much for your answer! :)
Hey @lkingland, I think to achieve this we can configure the Kubernetes client to initialize a watcher over the nodes and look for events: if a new worker node is allocated, increase the timeout.
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
I use Knative just as described in the documentation: https://knative.dev/docs/install/operator/knative-with-operators/
- I installed Istio with istioctl
- I have EKS 1.26
- I use the cluster autoscaler: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws
- I have one nodegroup without taints and another nodegroup which has taints (reserved-mynodes: true)
When I am deploying my functions and there are not enough resources in the cluster, sometimes I just get a timeout after 120s.
That is clearly caused by the cluster autoscaler. It takes 2-3 minutes to bring up a new worker node if there are not enough resources in the cluster. After the function is created in a failed state and the new node is up, I can retry deploying my functions without any issue.
How am I able to increase the timeout? I found no --timeout flag or anything like it. Should I look for a solution by setting something in knative-eventing? Thank you very much in advance!