Azure / LogicAppsUX

Workflow assistant either refuses to process due to token limit or interprets instruction as being off-topic #5857

Open · onyx-lemur opened this issue 1 month ago

onyx-lemur commented 1 month ago

Describe the Bug with repro steps

See the attached image. I asked one question about a variable in the workflow and received: 'The workflow run exceeded the maximum number of tokens allowed for a single run'.

I asked a follow-up question asking the assistant to describe the workflow (one of the example questions) and, instead of an answer, got a response telling me that the assistant could not ignore its previous instructions and prompting me to continue the conversation.

What type of Logic App Is this happening in?

Standard (Portal)

Which operating system are you using?

Windows

Are you using new designer or old designer

New Designer

Did you refer to the TSG before filing this issue? https://aka.ms/lauxtsg

Yes

Workflow JSON

No response

Screenshots or Videos

[Screenshot attached]

Browser

Chrome

Additional context

N/A

Eric-B-Wu commented 1 month ago

This is likely happening on a larger workflow where we pass the whole JSON back, which is causing this issue. We'll look into (1) increasing the token limit and (2) sending back a condensed version of the JSON.
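
For illustration, a condensed payload along the lines of option (2) might keep the workflow's structure (trigger and action names, types, and run-after ordering) while dropping the verbose inputs that tend to dominate the JSON. A minimal TypeScript sketch; the `WorkflowDefinition` shape and the `condenseWorkflow` helper are simplified assumptions for illustration, not the actual LogicAppsUX implementation:

```typescript
// Hypothetical sketch: condense a Logic Apps workflow definition before
// sending it to the assistant, keeping structure but dropping bulky inputs.
// These types are simplified; the real definition language has more fields.

interface WorkflowAction {
  type: string;
  runAfter?: Record<string, string[]>;
  inputs?: unknown; // often the largest part of the payload
}

interface WorkflowDefinition {
  triggers: Record<string, { type: string }>;
  actions: Record<string, WorkflowAction>;
}

interface CondensedWorkflow {
  triggers: Record<string, string>; // name -> type
  actions: Record<string, { type: string; runAfter: string[] }>;
}

function condenseWorkflow(def: WorkflowDefinition): CondensedWorkflow {
  const triggers: CondensedWorkflow['triggers'] = {};
  for (const [name, trigger] of Object.entries(def.triggers)) {
    triggers[name] = trigger.type;
  }

  const actions: CondensedWorkflow['actions'] = {};
  for (const [name, action] of Object.entries(def.actions)) {
    actions[name] = {
      type: action.type,
      // Keep only the ordering information, not the full runAfter statuses.
      runAfter: Object.keys(action.runAfter ?? {}),
    };
  }

  return { triggers, actions };
}
```

A condensation like this would shrink the payload substantially for workflows whose size comes mostly from large `inputs` blocks, at the cost of the assistant no longer seeing parameter details.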

Elaina-Lee commented 1 month ago

> This is likely happening on a larger workflow where we pass the whole JSON back, which is causing this issue. We'll look into (1) increasing the token limit and (2) sending back a condensed version of the JSON.

That being said, we are unlikely to make those changes in the near future, and we are unsure when we will come back to this issue to address the large-workflow limitation.

onyx-lemur commented 1 month ago

What counts as a large workflow? This is a pretty minor one: it just picks up one file and FTPs it to another location.
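
For a rough sense of scale: a common rule of thumb is about 4 characters per token for English-like text, so the workflow JSON plus the system prompt and chat history can add up faster than the workflow's apparent size suggests. A hedged sketch; the constant is a heuristic assumption, and real tokenizers count differently:

```typescript
// Rough heuristic only: real tokenizers count differently, and the
// assistant's actual token budget and accounting are not documented here.
const APPROX_CHARS_PER_TOKEN = 4;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / APPROX_CHARS_PER_TOKEN);
}

// Example: even a "minor" workflow whose JSON is ~20 KB is on the order
// of 5,000 tokens before the system prompt and chat history are added.
const workflowJson = JSON.stringify({ /* workflow definition here */ });
console.log(`~${estimateTokens(workflowJson)} tokens for the workflow JSON`);
```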

That also doesn't explain the misinterpretation of the second question as an attempt to make the assistant ignore its previous instructions. It seems like the underlying prompting is off, or the LLM's performance is not good enough.

Obviously, this isn't critical to any work we're doing, but it feels like this feature was pushed to release too early. I hope this customer feedback can be used to give the devs more time to work on it; it would be a really amazing feature once up and running.