-
### What happened?
When using LiteLLM Proxy with streaming, the response often (around 20% of the time) gets cut off. The model was going to use a tool in that response, but it was cut off before th…
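For anyone trying to reproduce: below is a minimal sketch of the kind of streaming tool-call request involved, using the OpenAI Python client pointed at a LiteLLM Proxy. The base URL, API key, model name, and tool definition are placeholders for illustration, not details from the report.

```python
# Minimal reproduction sketch (endpoint, key, model, and tool are placeholders).
from openai import OpenAI

# Point the standard OpenAI client at the LiteLLM Proxy.
client = OpenAI(base_url="http://localhost:4000", api_key="sk-placeholder")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

stream = client.chat.completions.create(
    model="gpt-4o",  # whatever model the proxy routes to
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    stream=True,
)

finish_reason = None
for chunk in stream:
    choice = chunk.choices[0]
    if choice.delta.tool_calls:
        print("tool call delta:", choice.delta.tool_calls)
    if choice.finish_reason:
        finish_reason = choice.finish_reason

# A truncated response typically never reaches finish_reason == "tool_calls".
print("finish_reason:", finish_reason)
```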
-
I wrote a small dummy sub-generator that just prints out its arguments.
If I call it using the help flag (`yo efs:action -h`), I get the following output:
```
Usage:
yo efs:action [options]
…
```
-
Hey, I recently updated `get-site-urls` and patched some bugs. Let me know if you have any issues!
-
**Describe the bug**
The error message is displayed at the start of generation. Only one image is generated. Prompt special characters like `{`, `}`, and `|` are not processed; they are interpreted…
-
Hi, I'd like to ask a few questions about the workflow combining [llm-recipes](https://github.com/Nicolas-BZRD/llm-recipes) with [llm-distillation](https://github.com/Nicolas-BZRD/llm-distillation) to c…
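For context, the usual teacher-student objective that distillation pipelines are built around can be sketched as below. This is a generic PyTorch sketch, not the exact loss either repo uses; the temperature and alpha values are illustrative.

```python
# Generic knowledge-distillation loss sketch (PyTorch); not the repos' exact code.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft-target KL against the teacher with hard-label cross-entropy."""
    # Soften both distributions with the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)

    # The KL term is scaled by T^2 to keep gradients comparable across temperatures.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature**2

    # Standard cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)

    return alpha * kd + (1 - alpha) * ce
```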
-
E.g. making the Euclidean Rhythm prompt a sub-prompt of channel settings, or making the Custom Theme prompt a part of Set Theme. This would help with the UI bloat that's been plaguing UltraBox recentl…
-
Hi, I want to use ptpython with pypy3. I can also install and use it, but the tests of python-prompt-toolkit do not run:
```
* pypy3: running distutils-r1_run_phase python_test
================…
```
-
When I convert an epub e-book to an audiobook using v0.4.3, it works fine. In order to use edge-tts, I tried v0.5.0 and v0.5.1 to convert the same epub e-book, and the error occurred as follows…
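For reference, basic usage of the edge-tts library that the newer versions rely on looks roughly like this; the voice name and output path are illustrative, and whether the project wraps the call exactly this way is an assumption.

```python
# Minimal edge-tts usage sketch (voice and output path are illustrative).
import asyncio
import edge_tts

async def synthesize(text: str, out_path: str = "chapter.mp3") -> None:
    # Communicate streams synthesized audio from the Edge TTS service.
    communicate = edge_tts.Communicate(text, voice="en-US-AriaNeural")
    await communicate.save(out_path)

asyncio.run(synthesize("Hello from chapter one."))
```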
-
If llama3 from ollama is running on http://8.140.18.**:28275, the following code from the 60th example runs fine.
```
from txtai.pipeline import LLM
llm = LLM("ollama/llama3", method="litellm", a…
```
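For completeness, here is an untruncated sketch of the same setup. The `api_base` keyword is an assumption based on txtai's litellm method forwarding extra keyword arguments to litellm; the host stays masked as in the report.

```python
# Sketch, assuming extra kwargs (e.g. api_base) are forwarded to litellm.completion.
from txtai.pipeline import LLM

llm = LLM(
    "ollama/llama3",
    method="litellm",
    api_base="http://8.140.18.**:28275",  # masked host from the report
)

print(llm("Tell me a joke"))
```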
-
### Your question
After the task is submitted via the prompt API, I listen for the task status over the WS and call the history/prompt_id operation to obtain the response status. However, no image is returned in t…
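For reference, a minimal sketch of that submit-then-poll flow against a ComfyUI server is below (the WS status listening is omitted for brevity). The host/port, workflow JSON, and output handling are assumptions for illustration, not details from the question.

```python
# Sketch of the submit -> poll-history flow (host and workflow are placeholders).
import json
import time
import urllib.request

HOST = "http://127.0.0.1:8188"  # assumed local ComfyUI server

def queue_prompt(workflow: dict) -> str:
    """POST the workflow to /prompt and return the prompt_id."""
    data = json.dumps({"prompt": workflow}).encode()
    req = urllib.request.Request(f"{HOST}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["prompt_id"]

def wait_for_images(prompt_id: str, timeout: float = 120.0) -> list:
    """Poll /history/<prompt_id> until outputs with images appear."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        with urllib.request.urlopen(f"{HOST}/history/{prompt_id}") as resp:
            history = json.load(resp)
        entry = history.get(prompt_id)
        if entry:  # the entry appears only once execution has finished
            images = []
            for node_output in entry["outputs"].values():
                images.extend(node_output.get("images", []))
            return images
        time.sleep(1)
    raise TimeoutError("no history entry for prompt_id")

# images = wait_for_images(queue_prompt(my_workflow_json))
```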