dillfrescott closed this issue 7 months ago
Would this limit apply to every workspace and every chat sent? If so, it can be configured for LocalAI LLM inferencing.
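A minimal sketch of what that configuration could look like, assuming LocalAI's OpenAI-compatible chat endpoint (the URL, port, and model name below are placeholders, not values from this thread):

```python
import json

# Assumption: LocalAI exposes an OpenAI-compatible API at this address.
# Adjust the host, port, and model name for your own deployment.
LOCALAI_URL = "http://localhost:8080/v1/chat/completions"


def build_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-style chat payload with a per-request token cap.

    Capping `max_tokens` bounds how long each completion can get, which
    prevents a runaway chat from generating output indefinitely.
    """
    return {
        "model": "local-model",  # placeholder model identifier
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


payload = build_request("Hello, world", max_tokens=256)
# Serialize the payload as it would be POSTed to LOCALAI_URL.
body = json.dumps(payload)
print(body)
```

Sending the request (e.g. with `urllib.request` or `requests`) would then apply the same cap to every chat, regardless of workspace.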
Is the chat running away and not terminating output?
I actually added it globally, right around here:
So I guess that works for me!
I need this feature!