Open · wynandjordaan opened 3 years ago
I don't think there's an option to stop the splitting, but you can modify the output YAML files to put them back into a single deployment.
@ahmelsayed Is it still required to split apps into http/non-http when deploying to Kubernetes?
@wynandjordaan Is there an issue on the Durable extension repo that we can reference? /cc @cgillum @ConnorMcMahon
Hi Anthony,
Thanks for the reply so far. I have added the link to the Durable extension repo: https://github.com/Azure/azure-functions-durable-extension/issues/1600.
I do think that this is a big issue for Durable functions. The workaround is to split the http and non-http functions in code, which has quite a few drawbacks. The cleanest solution would be to prevent the splitting in the first place.
Editing the YAML file in the DevOps pipeline is also not an easy task, as the structure of the output might change with the next release of the func CLI.
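For anyone who does go the route of patching the generated manifests, here is a minimal sketch of the idea: fold the worker Deployment's settings back into the HTTP Deployment so everything runs in one pod set. This is not anything the CLI does for you; all structure and names are illustrative, and in a real pipeline you would first parse the generated YAML (e.g. with PyYAML) into dicts like these.

```python
# Hedged sketch: merge the two Deployment objects that `func kubernetes deploy`
# emits (one for HTTP triggers, one for everything else) into a single
# Deployment, so the orchestrator and its HTTP starter register in the same
# host process. All names are illustrative.
import copy

def merge_deployments(http_dep, worker_dep):
    """Keep the HTTP Deployment (its name matches the generated Service
    selector) and fold the worker Deployment's env vars into it."""
    merged = copy.deepcopy(http_dep)
    container = merged["spec"]["template"]["spec"]["containers"][0]
    http_env = container.setdefault("env", [])
    worker_env = worker_dep["spec"]["template"]["spec"]["containers"][0].get("env", [])
    # Union of env vars; on a name clash the HTTP deployment's value wins.
    seen = {e["name"] for e in http_env}
    http_env.extend(e for e in worker_env if e["name"] not in seen)
    return merged
```

Even with a helper like this, the underlying fragility remains: if a future CLI release changes the shape of its output, the merge logic has to follow.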
Facing the same issue when deploying on Kubernetes with KEDA. I have three functions: an activity, an orchestrator, and an HTTP starter. I get the same error with both the JS and Python runtimes.
```
---> Microsoft.Azure.WebJobs.Script.Workers.Rpc.RpcException: Result: Failure Exception: Error: The operation failed with an unexpected status code: 400. Details: {"Message":"One or more of the arguments submitted is incorrect","ExceptionMessage":"The function 'hello-orchestrator' doesn't exist, is disabled, or is not an orchestrator function. Additional info: No orchestrator functions are currently registered!","ExceptionType":"System.ArgumentException","StackTrace":" at Microsoft.Azure.WebJobs.Extensions.DurableTask.DurableTaskExtension.ThrowIfFunctionDoesNotExist(String name, FunctionType functionType) in D:\\a\\r1\\a\\azure-functions-durable-extension\\src\\WebJobs.Extensions.DurableTask\\DurableTaskExtension.cs:line 1036\n at Microsoft.Azure.WebJobs.Extensions.DurableTask.DurableClient.Microsoft.Azure.WebJobs.Extensions.DurableTask.IDurableOrchestrationClient.StartNewAsync[T](String orchestratorFunctionName, String instanceId, T input) in D:\\a\\r1\\a\\azure-functions-durable-extension\\src\\WebJobs.Extensions.DurableTask\\ContextImplementations\\DurableClient.cs:line 121\n at Microsoft.Azure.WebJobs.Extensions.DurableTask.HttpApiHandler.HandleStartOrchestratorRequestAsync(HttpRequestMessage request, String functionName, String instanceId) in D:\\a\\r1\\a\\azure-functions-durable-extension\\src\\WebJobs.Extensions.DurableTask\\HttpApiHandler.cs:line 698\n at Microsoft.Azure.WebJobs.Extensions.DurableTask.HttpApiHandler.HandleRequestAsync(HttpRequestMessage request) in D:\\a\\r1\\a\\azure-functions-durable-extension\\src\\WebJobs.Extensions.DurableTask\\HttpApiHandler.cs:line 235"} Stack: Error: The operation failed with an unexpected status code: 400. Details: {"Message":"One or more of the arguments submitted is incorrect","ExceptionMessage":"The function 'hello-orchestrator' doesn't exist, is disabled, or is not an orchestrator function.
Additional info: No orchestrator functions are currently registered!","ExceptionType":"System.ArgumentException","StackTrace":" at Microsoft.Azure.WebJobs.Extensions.DurableTask.DurableTaskExtension.ThrowIfFunctionDoesNotExist(String name, FunctionType functionType) in D:\\a\\r1\\a\\azure-functions-durable-extension\\src\\WebJobs.Extensions.DurableTask\\DurableTaskExtension.cs:line 1036\n at Microsoft.Azure.WebJobs.Extensions.DurableTask.DurableClient.Microsoft.Azure.WebJobs.Extensions.DurableTask.IDurableOrchestrationClient.StartNewAsync[T](String orchestratorFunctionName, String instanceId, T input) in D:\\a\\r1\\a\\azure-functions-durable-extension\\src\\WebJobs.Extensions.DurableTask\\ContextImplementations\\DurableClient.cs:line 121\n at Microsoft.Azure.WebJobs.Extensions.DurableTask.HttpApiHandler.HandleStartOrchestratorRequestAsync(HttpRequestMessage request, String functionName, String instanceId) in D:\\a\\r1\\a\\azure-functions-durable-extension\\src\\WebJobs.Extensions.DurableTask\\HttpApiHandler.cs:line 698\n at Microsoft.Azure.WebJobs.Extensions.DurableTask.HttpApiHandler.HandleRequestAsync(HttpRequestMessage request) in D:\\a\\r1\\a\\azure-functions-durable-extension\\src\\WebJobs.Extensions.DurableTask\\HttpApiHandler.cs:line 235"}
```
@anthonychu all functions are deployed separately
I think @cgillum and @ConnorMcMahon are the best to speak to how to best address this.
@cgillum, @ConnorMcMahon Any updates on a solution to this? I am having the same problem.
Sorry, somehow I didn't see the notification for this issue (I came here for another reason and stumbled upon this issue).
I've been running Durable Functions with KEDA and Kubernetes over the last few days and have not encountered this problem, though I'm not entirely sure why. Which version of the Durable extension are you using? My tests are using v2.4.1, but I'm also using .NET which means I can more easily pick the version of the extension I'm using. Python and JS apps using bundles might be using older versions of the extension.
@cgillum, the problem here is with the `func kubernetes deploy` command. It creates two deployments and two different sets of pods that can't see each other: one for HTTP triggers and the other for activity functions. So if you have an `httpTrigger` that tries to start an orchestration, the orchestrator won't exist in that pod.
Also, on a slightly different but related note, the separate deployment with the activity functions creates a `ScaledObject` something like this:
```yaml
spec:
  scaleTargetRef:
    deploymentName: my-activities-api
  pollingInterval: 3
  triggers:
    - type: activitytrigger
      metadata:
        type: activityTrigger
        name: start-my-activity
```
Another problem is that there is no `ScaledObject` scaler in KEDA for `activityTrigger`s. Is this a known issue that is getting resolved? I can't find anything about it. The ability to scale `activityTrigger`s is a must for us, as we use fan-out orchestrations and need certain activities to run on specific node pools, e.g. GPUs. If we just put everything on queues it somewhat defeats the purpose of using Azure Durable Functions, especially with the 64 KB queue item limit; we might as well just use RabbitMQ in the cluster, as that would be much faster.
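Since KEDA has no built-in `activityTrigger` scaler, one conceivable route is KEDA's generic `external` trigger type, which delegates scaling decisions to a gRPC service you run yourself. The sketch below is hypothetical: the `scalerAddress` points at an external scaler service that does not ship with the CLI, and field names follow the `keda.sh/v1alpha1` API, which differs from the older `deploymentName` style shown above.

```yaml
# Hedged sketch only: delegate activity scaling to a hypothetical
# Durable Functions external scaler reachable over gRPC in-cluster.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-activities-scaler
spec:
  scaleTargetRef:
    name: my-activities-api        # the worker Deployment to scale
  triggers:
    - type: external
      metadata:
        # Address of a custom external scaler service (assumption)
        scalerAddress: durable-external-scaler.default.svc.cluster.local:8080
```

The external scaler itself would still need to be written to query the task hub for pending activity work, which is the part that does not exist off the shelf today.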
Right, I'm aware of the two-separate-pods problem. That's why I mentioned the version of the extension. Strangely, I didn't run into this problem when using `func kubernetes deploy` when I was testing last week.
BTW, I looked through the code just now, and I think there is a workaround where you can add `"externalClient": true` to your binding description in function.json to disable this validation.
The fact that there is no ScaledObject for Durable Functions triggers is a known issue. We recently added this for Durable Function apps that use our (yet to be announced) SQL backend (https://github.com/Azure/azure-functions-core-tools/pull/2503), but haven't done any work for this in the existing production scenarios. I think the idea is that you'd need to configure an external scaler. @TsuyoshiUshio I believe you worked on an external scaler for Durable Functions? Can you point to any documentation for this?
For those working on non-C# Durable Function apps, such as Python or JavaScript function apps, here is an example of using the `externalClient` setting in your HTTP starter's function.json to enable your HTTP and non-HTTP deployments to reach each other:
https://github.com/microsoft/durabletask-mssql/issues/41#issuecomment-885299181
```json
{
  "bindings": [
    {
      "authLevel": "anonymous",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in",
      "route": "orchestrators/{functionName}",
      "methods": ["post"]
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    },
    {
      "name": "starter",
      "type": "durableClient",
      "direction": "in",
      "externalClient": true
    }
  ]
}
```
Setting `externalClient` to `true` in your HTTP starter's function.json will disable the local check for the orchestrator you are trying to trigger, and will allow the requested orchestrator to be scheduled.
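With the route template `orchestrators/{functionName}` from the binding above, any orchestrator can then be started by name over plain HTTP from the other deployment. A minimal stdlib-only sketch of a client for such a starter; the base URL and service name are assumptions, not something the CLI configures for you:

```python
# Hypothetical client for the HTTP starter defined above. The durable
# extension replies with a JSON status payload on success; the base URL
# below is an assumed in-cluster service address.
import json
import urllib.request

def starter_url(base_url, function_name):
    # Mirrors the binding's route template: orchestrators/{functionName}
    return f"{base_url}/api/orchestrators/{function_name}"

def start_orchestrator(base_url, function_name, payload=None):
    req = urllib.request.Request(
        starter_url(base_url, function_name),
        data=json.dumps(payload or {}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```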
We recently ran into an issue when moving our Durable Functions to a Kubernetes cluster. We are using `func kubernetes deploy` to generate the YAML file.
By default the queue-based functions are split from the HTTP-triggered functions, but they share the same secrets. Because the orchestrator function is disabled in the HTTP deployment, the HTTP function cannot start a new instance of the orchestrator, and fails with this error:
```
Error: "The function 'RoadsideBatteryJobWorkflow' doesn't exist, is disabled, or is not an orchestrator function. Additional info: No orchestrator functions are currently registered!" "Asgard.Odin.Workflows.RoadsideBatteryJob.Triggers.StartJobWorkflow" System.ArgumentException: The function 'RoadsideBatteryJobWorkflow' doesn't exist, is disabled, or is not an orchestrator function. Additional info: No orchestrator functions are currently registered! at Microsoft.Azure.WebJobs.Extensions.DurableTask.DurableTaskExtension.ThrowIfFunctionDoesNotExist(String name, FunctionType functionType) in D:\a\r1\a\azure-functions-durable-extension\src\WebJobs.Extensions.DurableTask\DurableTaskExtension.cs:line 1062 at Microsoft.Azure.WebJobs.Extensions.DurableTask.DurableClient.Microsoft.Azure.WebJobs.Extensions.DurableTask.IDurableOrchestrationClient.StartNewAsync[T](String orchestratorFunctionName, String instanceId, T input) in D:\a\r1\a\azure-functions-durable-extension\src\WebJobs.Extensions.DurableTask\ContextImplementations\DurableClient.cs:line 140 at Asgard.Odin.Workflows.RoadsideBatteryJob.Triggers.StartJobWorkflow.Run(HttpRequestMessage request, String instanceId, IDurableClient orchestrationClient) in /src/dotnet-function-app/Asgard.Odin.Workflows.RoadsideBatteryJob/Triggers/StartJobWorkflow.cs:line 59
```
I have spoken to the folks on the Durable Functions extension team, and the workaround for this is to have different configuration for each of these two deployments in Kubernetes, as the `AzureWebJobsStorage` setting has to be different.
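If that per-deployment configuration route is taken, it might look something like the fragment below in the generated manifests: each Deployment pulls its storage connection from a different Secret. This is a hedged sketch only; the secret and deployment names are assumptions, not anything `func kubernetes deploy` produces today.

```yaml
# Hypothetical: the HTTP deployment references its own storage secret.
# The worker deployment would carry the same env block pointing at a
# different secret (e.g. my-app-worker-secrets).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-http
spec:
  template:
    spec:
      containers:
        - name: my-app-http
          env:
            - name: AzureWebJobsStorage
              valueFrom:
                secretKeyRef:
                  name: my-app-http-secrets   # hypothetical secret name
                  key: AzureWebJobsStorage
```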
The question is: is it possible to stop the splitting from happening in the YAML file? Or is there another way to handle this, for example via environment variables in the YAML file?
Thanks in advance