-
### Type of issue
* Other
----
This is another round of housekeeping, following #3181. Please check the packages listed under your name: tick the box if you intend to keep maintaining them; if you no longer want them, don't remember them, or they no longer seem useful, leave them unticked. For any package still unticked after one month, I will start the orphaning process. Remember that you can edit the comment directly instead of clicking each checkbox with the mouse.
The number after each package name, "⬇️"…
-
Thanks for the project. I have managed to run it on CPU at decent speed (**6.2 - 6.8 tokens per second**); however, the model only generates a small piece of content, and the response…
-
### System Info
LangChain version -- 0.0.277
Python version -- 3.8.8
Windows platform
### Who can help?
@hwchase17
@agola11
Hi, I am not able to run any of the LangChain syntax on my Windows l…
-
### Question
I used the released projector liuhaotian/LLaVA-Pretrained-Projectors/**LLaVA-7b-pretrain-projector-v1-1-LCS-558K-blip_caption.bin** and the original Vicuna 7B V1.1 model (with applied …
-
Hi, Dr. Jian:
Thanks for this video repo. I tried to reproduce the reported results but still have two problems:
1. In "lavis/projects/blip2/train/caption_vatex_stage1.yaml", I gave the param…
-
Tried this: https://github.com/tomaarsen/attention_sinks/issues/1#issuecomment-1745792500
The idea in the repro below is to use a longer context and still continue generating beyond the normal context size. A…
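For context, the attention-sinks approach (as I understand it from the linked repo) keeps a few initial "sink" tokens plus a sliding window of recent tokens in the KV cache, which is what lets generation continue past the normal context size. A minimal sketch of that cache-retention policy; the function and parameter names here are illustrative, not the repo's actual API:

```python
def sink_cache_positions(seq_len, num_sinks=4, window=8):
    """Return the KV-cache positions kept under an attention-sink policy:
    the first `num_sinks` tokens (the "sinks") plus the most recent
    `window` tokens. Everything in between is evicted, so cache size
    stays bounded no matter how long generation runs."""
    if seq_len <= num_sinks + window:
        # Everything still fits; keep the whole cache.
        return list(range(seq_len))
    return list(range(num_sinks)) + list(range(seq_len - window, seq_len))
```

With `num_sinks=4, window=8`, a 20-token sequence keeps positions 0-3 and 12-19, so the cache never grows past 12 entries regardless of sequence length.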
-
Hi,
I'm using the following code:
```python
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model =…
-
Hi,
Great work! I just have a quick question: what does "Cross Attention" mean in Figure 3 of your paper?
Does it mean that the cross-attention maps or values are fed into the later stages of the process?
…
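To make the question above concrete: in most encoder-decoder or multimodal models, "cross attention" means the queries come from one stream while the keys and values come from the other. A minimal single-head sketch of that mechanism, with the learned projections omitted for brevity (this is the generic definition, not necessarily the paper's exact implementation):

```python
import numpy as np

def cross_attention(query_stream, context_stream):
    """Single-head cross-attention: queries from one stream (e.g. text
    tokens), keys and values from the other (e.g. image features).
    Learned Q/K/V projections are replaced by the identity for brevity."""
    d = query_stream.shape[-1]
    q, k, v = query_stream, context_stream, context_stream
    scores = q @ k.T / np.sqrt(d)              # (num_queries, num_context)
    # Numerically stable softmax over the context axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                         # each query mixes the other stream's values
```

So each output row has the query stream's length but is built entirely from the context stream's values, which is why cross-attention maps are often visualized to show which image regions a token attends to.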
-
## Progress
- [x] Spike feasibility of autogenerated schema (https://github.com/BohemianCoding/Sketch/issues/24735#issuecomment-525272025)
- [x] MVP JSON Schema File Format (https://github.com/Boh…
-
Thanks for your help
## Description
When I run JupyterLab with imports from fastbook and fastai, the "pylsp" process runs at 100% CPU and eventually, after several minutes, hits the RAM limit.
…