-
By default it seems that only the first GPU is used.
-
If I want to work with multimodal LLMs that take in a set of embeddings from vision/audio encoders, what is the proper way of feeding them into an LLM running under exllamav2?
Can I just add a custo…
-
#### Update
Updated on March 15, 2023 based on the discussion here: https://github.com/pydata/xarray/issues/7621
First posted on March 13, 2023
#### Description
When reading data with Xarray using a r…
-
The Google Colab notebook has errors, and even after fixing them the last cell throws CUDA errors. Even when copied to a local installation, the last cell breaks, perhaps because of limited VRAM.
-
### 🐛 Describe the bug
I'm implementing padding support directly on my LLM model. To do so, I add extra rows to the boolean attention mask with all `False` values.
However, calling `torch.nn.fun…
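A minimal sketch of the failure mode described above, assuming the call in question is `torch.nn.functional.scaled_dot_product_attention` with a boolean `attn_mask` (where `True` means "attend"): a query row whose mask entries are all `False` has every attention score set to `-inf`, so the softmax over that row produces NaN on the math backend.

```python
import torch
import torch.nn.functional as F

# Toy shapes: batch=1, heads=2, seq_len=4, head_dim=8
q = torch.randn(1, 2, 4, 8)
k = torch.randn(1, 2, 4, 8)
v = torch.randn(1, 2, 4, 8)

# Boolean mask, True = attend. Make the last query row all False,
# mimicking an extra padding row in the attention mask.
mask = torch.ones(1, 1, 4, 4, dtype=torch.bool)
mask[..., -1, :] = False

out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

# The fully-masked row softmaxes over all -inf scores and comes out NaN.
print(out[..., -1, :].isnan().all())
```

A common workaround is to leave at least one `True` per row (e.g. let padding tokens attend to themselves) and discard those outputs afterwards.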
-
### News
- Conferences
- [CVPR 2023](https://cvpr2023.thecvf.com/)
    - Date/Venue: June 18-22, Vancouver Convention Center
    - Main and Expo: 20 - 22, Workshop and Tutorial: 18-19
    - Korean booths: L…
-
Hello,
After going through your data, I see that you only labeled objects that have boxes. Background regions such as sky or water are not labeled.
Therefore, I am curious how your data can be used for semantic s…
-
Post your questions here about: [“Language Learning with Large Language Models”](https://docs.google.com/document/d/1vCRoU_g9yYwG31uZMdAVK8iNL5Jj8BB4iwcvarTq06E/edit?usp=sharing) and “Digital Doubles …
-
Hello, I'm trying to understand how SAM works. I am interested in extracting the **image embeddings** created by **ImageEncoderViT**. Also, I'm interested in the output after combining _image embeddin…
-
Hello Meta GenAI team (cc @ruanslv),
With regards to the 70B model, I'm currently looking into the implementation of the GQA architecture -- specifically after noticing the 8192 x 1024 layer shapes…
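The 8192 x 1024 shapes fall out of the GQA head arithmetic. A sketch, assuming the commonly reported Llama 2 70B hyperparameters (hidden size 8192, 64 query heads, 8 key/value heads):

```python
# Why grouped-query attention (GQA) yields 8192 x 1024 K/V projections.
dim = 8192                  # model hidden size (assumed for the 70B model)
n_heads = 64                # query heads
n_kv_heads = 8              # grouped key/value heads under GQA
head_dim = dim // n_heads   # 128 per head

wq_shape = (dim, n_heads * head_dim)     # (8192, 8192): one K-dim per query head
wk_shape = (dim, n_kv_heads * head_dim)  # (8192, 1024): only 8 shared KV heads
wv_shape = (dim, n_kv_heads * head_dim)  # (8192, 1024)

print(wk_shape)  # (8192, 1024) -- each KV head is shared by 64 / 8 = 8 query heads
```

So the 1024 is simply `n_kv_heads * head_dim`; the query projection stays full-width while K and V shrink by the grouping factor.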