-
Hi, I am building a Yeoman generator in a way that suits the UI running in VS (great tool, by the way). I have a master app containing a single prompt that asks which client (sub-generator) you wish …
-
### What happened?
This one is really hard to reproduce. For prompts that take several minutes while streaming, LiteLLM sometimes seems to disconnect from Azure. Not sure why; leaving this ticket op…
-
I don't want a 512x512 result; I want to set the resolution myself. In the source code below, I set the width to 512 and the height to 768, yet the result is still a 512x512 image.
…
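For reference, a minimal sketch of how the output resolution is usually requested, assuming the diffusers StableDiffusionPipeline (the issue's own code is truncated above, so the model id and prompt here are placeholders):

```python
# Minimal sketch, assuming the diffusers StableDiffusionPipeline;
# model id and prompt are placeholders, not from the report.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# width and height are keyword arguments of the pipeline call;
# for SD 1.x both should be multiples of 8.
image = pipe("a photo of a cat", width=512, height=768).images[0]
image.save("out.png")
```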
-
@dosu I've gone through the library. First of all, the prompt needs something that explicitly instructs the model to go through the context, rather than just providing it few-shot.
Second, I streamed the quest…
-
I was able to benchmark Llama 2 7B Chat (int8) and got ~600 tokens in about 12s on an A100 GPU, whereas the HF pipeline takes about 25s for the same input and params.
However, when I t…
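A rough timing harness for the HF-pipeline side of such a comparison might look like the sketch below; the model id, prompt, and max_new_tokens are illustrative assumptions rather than the reporter's actual setup:

```python
# Timing sketch for a transformers text-generation pipeline;
# model id, prompt, and generation params are assumptions.
import time
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)

start = time.perf_counter()
pipe("Explain attention in one paragraph.", max_new_tokens=600)
print(f"elapsed: {time.perf_counter() - start:.1f}s")
```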
-
Hello,
This may be an unimportant warning/error, but I don't recall seeing it in the past, so I wanted to mention it. When generating images in TXT2IMG or IMG2IMG without a prompt, I'm seeing the fo…
-
### Describe the bug
I followed this Imagic_Stable_Diffusion.ipynb; however, I got the following exception:
TypeError: __call__() got an unexpected keyword argument 'text_embeddings'
when I run:
with autoca…
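For context, recent diffusers pipelines accept precomputed embeddings through the prompt_embeds keyword rather than text_embeddings, so the notebook may simply predate the current API. A sketch under that assumption (the embedding tensor here is a placeholder, not the notebook's Imagic embedding):

```python
# Sketch only: newer diffusers pipelines take `prompt_embeds`
# instead of `text_embeddings`; mapping this onto the Imagic
# notebook's variables is an assumption.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Placeholder embedding with the (batch, seq_len, hidden) shape
# SD 1.x expects; Imagic would supply its optimized embedding here.
embeds = torch.randn(1, 77, 768, dtype=torch.float16, device="cuda")
image = pipe(prompt_embeds=embeds).images[0]
```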
-
https://github.com/js-jslog/harmonica-degree-map/commit/f7b62537386fd7e2672ae4ad591fa28c3afcf2a6
The above commit adds some basic TypeScript functionality to the template produced by the Yeoman gen…
-
### Bug Description
This issue arises when streaming=True is selected, causing the LLM's return value to be a generator, which cannot be pickled. The return value of the get_chat_result function in mo…
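The underlying limitation is easy to demonstrate independently of any LLM library: Python generators cannot be pickled at all:

```python
import pickle

def stream():
    yield "token"

try:
    pickle.dumps(stream())
except TypeError as err:
    print(err)  # cannot pickle 'generator' object
```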
-
# Steps to Reproduce
from babel.lists import format_list
format_list(['A', 'B', 'C'], style='standard-narrow', locale='de')
# Actual Results
Traceback (most recent call last):
…
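For comparison, the 'standard' style is documented and succeeds for the same input; whether 'standard-narrow' should also be accepted is what the report is asking:

```python
from babel.lists import format_list

# 'standard' is a documented style and works for German;
# expected output: "A, B und C"
print(format_list(['A', 'B', 'C'], style='standard', locale='de'))
```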