-
I am using the following method for OpenChat response generation:
```python
model_open_chat = AutoModelForCausalLM.from_pretrained("openchat/openchat_3.5",
…
```
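For reference, this is a minimal sketch of how I build the prompt string by hand. It assumes OpenChat 3.5's published chat template (`GPT4 Correct User: …<|end_of_turn|>`); `build_openchat_prompt` is a hypothetical helper of mine, not part of any library:

```python
# Sketch: format messages with what I believe is OpenChat 3.5's chat template.
# build_openchat_prompt is a hypothetical helper, not a library function.
def build_openchat_prompt(messages):
    parts = []
    for msg in messages:
        role = "GPT4 Correct User" if msg["role"] == "user" else "GPT4 Correct Assistant"
        parts.append(f"{role}: {msg['content']}<|end_of_turn|>")
    # Trailing assistant header asks the model to generate the next turn.
    parts.append("GPT4 Correct Assistant:")
    return "".join(parts)

prompt = build_openchat_prompt([{"role": "user", "content": "Hello"}])
```

In practice the tokenizer's own `apply_chat_template` should be preferred when available; the sketch just makes the expected format explicit.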
-
Hello,
I have noticed that the interface returns identical generations regardless of the number of responses requested (n > 1). Easy reproduction:
```python
from easyllm.clients import huggi…
```
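To illustrate one way this can happen (a toy sketch, not easyllm's actual code): if the backend decodes greedily, or reuses the same seed for each of the n samples, all n generations come out identical; with sampling and distinct seeds they diverge.

```python
import random

# Toy "decoder": emits 5 tokens from a tiny vocabulary.
# With do_sample=False it always takes the first candidate, so all n
# requested generations are identical -- mirroring the bug above.
VOCAB = ["alpha", "beta", "gamma", "delta"]

def toy_generate(n, do_sample, seed=0):
    outputs = []
    for i in range(n):
        # Distinct seed per sample only when sampling is enabled.
        rng = random.Random(seed + (i if do_sample else 0))
        if do_sample:
            tokens = [rng.choice(VOCAB) for _ in range(5)]
        else:
            tokens = [VOCAB[0] for _ in range(5)]
        outputs.append(" ".join(tokens))
    return outputs

greedy = toy_generate(n=3, do_sample=False)   # 3 identical strings
sampled = toy_generate(n=3, do_sample=True)   # 3 independent samples
```

If the real cause is the same, passing `do_sample`/distinct seeds through to the backend per sample would be the fix; I have not verified which parameter easyllm drops.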
-
**Describe the bug**
Error appears in the cmd window whenever starting to run any generation while magic prompts is turned on
stable-diffusion-webui\venv\lib\site-packages\transformers\pipelines…
-
I've been looking into why I occasionally see very slow text generations that take on the order of several minutes to produce output (while most generations take around 40-60 seconds).
I've n…
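To narrow this down, I've been timing each generation and flagging outliers. A minimal sketch (the helper names and the 3x-median threshold are my own arbitrary choices):

```python
import statistics
import time

def timed(fn, *args, durations=None, **kwargs):
    """Run fn and append its wall-clock duration (seconds) to durations."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    if durations is not None:
        durations.append(time.perf_counter() - start)
    return result

def slow_outliers(durations, factor=3.0):
    """Return indices of runs slower than factor x the median duration."""
    median = statistics.median(durations)
    return [i for i, d in enumerate(durations) if d > factor * median]

# Example with recorded timings (seconds): most ~50 s, one pathological ~300 s.
timings = [48.0, 52.0, 55.0, 300.0, 49.0]
```

Correlating the flagged indices with the prompts or logs for those runs is what I'd use to spot a pattern.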
-
**Is your feature request related to a problem? Please describe.**
We are developing several chatbot-like applications that require streaming the response from the LLM. There are a couple of metrics to l…
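For context, this is the kind of measurement I mean (the metric names are mine): a sketch that consumes any token iterator and reports time-to-first-token and overall throughput.

```python
import time

def stream_metrics(token_iter):
    """Consume a token stream; return (text, ttft_seconds, tokens_per_second).

    TTFT = time from start until the first token arrives; throughput counts
    all tokens over the total streaming time.
    """
    start = time.perf_counter()
    ttft = None
    tokens = []
    for tok in token_iter:
        if ttft is None:
            ttft = time.perf_counter() - start
        tokens.append(tok)
    total = time.perf_counter() - start
    tps = len(tokens) / total if total > 0 else float("inf")
    return "".join(tokens), ttft, tps

# Usage with a fake stream standing in for the LLM client:
def fake_stream():
    for tok in ["Hello", ",", " world"]:
        yield tok

text, ttft, tps = stream_metrics(fake_stream())
```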
-
### Motivation
I propose to add `input_embeds` as an optional input to the generation params.
### Why is this important
Nowadays there are a lot of Vision Language Models (VLMs) and they all h…
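To illustrate why embedding-level input matters for VLMs, here is a toy sketch with plain lists (real models use tensors and a learned projection layer; every name below is a stand-in):

```python
# Toy sketch: a VLM prepends projected image-patch embeddings to the text
# token embeddings, so generation must accept embeddings, not just token ids.
EMBED_DIM = 4

def embed_tokens(token_ids):
    # Stand-in for an embedding-table lookup (one vector per token id).
    return [[float(t)] * EMBED_DIM for t in token_ids]

def project_patches(patch_features):
    # Stand-in for the vision projector mapping patches into the text space.
    return [[f / 10.0] * EMBED_DIM for f in patch_features]

image_embeds = project_patches([3.0, 7.0])    # 2 image patches
text_embeds = embed_tokens([101, 2054, 102])  # 3 text tokens
inputs_embeds = image_embeds + text_embeds    # the combined sequence
```

The image rows have no corresponding token ids, so an ids-only generation API cannot express this input; that is exactly the gap the request is about.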
-
Hi, I am trying to reproduce your evaluation on the _"ade20k_panoptic_val"_ dataset, but there is an issue with the visual prompt generation. The script runs _anns[ann['id']] = ann_ and gives a _Ke…
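For anyone hitting the same error: in COCO-style panoptic files, annotation ids are arbitrary values, not 0..N-1 list positions, so lookups have to go through a dict keyed by id. A small sketch (the field names are illustrative, guessed from the usual COCO layout):

```python
# Toy COCO-style annotations: ids are arbitrary, not list positions.
annotations = [
    {"id": 900100, "category_id": 1},
    {"id": 900205, "category_id": 7},
]

# Build the id -> annotation mapping the script appears to expect:
anns = {}
for ann in annotations:
    anns[ann["id"]] = ann

looked_up = anns[900205]["category_id"]
```

Indexing the raw annotation list (or a dict built from list positions) by `ann['id']` fails as soon as ids are not contiguous, which would explain the error above.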
-
I have been following your [tutorial](https://maartengrootendorst.substack.com/p/topic-modeling-with-llama-2) on how to use Llama to get better topic names.
The only difference between yours and mi…
-
Can I identify and analyze videos with this? How do I input a video? Do you have any examples? How much GPU memory is needed to run it?
-
Default version: currently, Maker defaults to version 6.1. Letting users choose their preferred version, and remembering that choice within the session, would greatly improve the user experience.
UI improvements: the left panel (sidebar) could be more visually appealing and easier to use. Enhancing its layout and design would help users navigate more efficiently.
MidJourney Explore feature: if possible, add a MidJourney Explore feature so that users can browse, directly from within the app, all the pub… on midjourney.com