-
### Describe the bug
In `extensions/multimodal/pipelines/llava/llava.py` at line 54, `mm_projector` is moved to the device, but its dtype is not cast to match the configured setting.
To fix it, just add the parameter `dtype=self.…
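The distinction matters because PyTorch's `Module.to` only changes what you pass it. A minimal sketch (the `nn.Linear` stand-in is hypothetical; the real projector comes from the LLaVA checkpoint):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for mm_projector; the real module is loaded
# from the checkpoint. `.to(device)` alone keeps the original dtype,
# so the dtype must be passed explicitly to match the model precision.
mm_projector = nn.Linear(1024, 4096)

mm_projector = mm_projector.to("cpu")                       # still float32
mm_projector = mm_projector.to("cpu", dtype=torch.float16)  # dtype cast too
print(next(mm_projector.parameters()).dtype)                # torch.float16
```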
-
**Environment**
Windows 11 Pro, conda 23.5.2
**GPU**
NVIDIA GeForce RTX 4060
**Steps to reproduce**
After installing all components, open Windows Terminal in Muice-Chatbot-main and run `conda activate Muice` followed by `python main.py`; this raises an `Assertio…
-
### System Info
```Shell
- `Accelerate` version: 0.18.0.dev0
- Platform: Linux-5.15.0-1033-aws-x86_64-with-glibc2.31
- Python version: 3.11.3
- Numpy version: 1.24.2
- PyTorch version (GPU?): 2.0…
-
```
File "F:\AI\RedditChatBot\nmt-chatbot-master\nmt\gnmt_model.py", line 262, in
class GNMTAttentionMultiCell(tf.nn.rnn_cell.MultiRNNCell):
AttributeError: module 'tensorflow_core._api.v2.nn' has …
```
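This error typically means the code was written against the TF 1.x API: in TF 2.x, `tf.nn.rnn_cell` was removed and the legacy namespace lives under `tf.compat.v1`. A hedged sketch of a version-tolerant lookup (`resolve_rnn_cell` is a hypothetical helper, not part of the repo):

```python
def resolve_rnn_cell(tf_module):
    """Return the legacy rnn_cell namespace on either TF 1.x or TF 2.x.
    In TF 2.x the TF1 RNN API was removed from tf.nn and lives under
    tf.compat.v1.nn.rnn_cell. Hypothetical helper, not from the repo."""
    nn = getattr(tf_module, "nn", None)
    if nn is not None and hasattr(nn, "rnn_cell"):
        return nn.rnn_cell                    # TF 1.x path
    return tf_module.compat.v1.nn.rnn_cell    # TF 2.x fallback
```

Usage would then be `MultiRNNCell = resolve_rnn_cell(tf).MultiRNNCell` in place of the direct `tf.nn.rnn_cell.MultiRNNCell` reference.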
-
![image](https://github.com/InternLM/xtuner/assets/137043350/34913d8e-baad-48de-8795-cd61caf6a716)
![image](https://github.com/InternLM/xtuner/assets/137043350/b519e889-d182-4bdb-b646-fcd3f41de1dd)
…
-
### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.15.0-1050-aws-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
…
-
## ENV
- NVIDIA-SMI 515.76 Driver Version: 515.76 CUDA Version: 11.7
- torch 2.1.0
- anaconda env
- Python 3.10.13
- [Followed the readme from a brand new env]…
-
Q1) Is there any minimum requirement for running the h2ogpt docker?
Should the GPU have at least N GB?
- got "torch.cuda.OutOfMemoryError: CUDA out of memory."
- currently using a GeForce RTX…
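As a rough back-of-the-envelope answer to the "at least N GB" question, the weights alone set a floor on VRAM. A sketch under stated assumptions (the 20% overhead factor is a heuristic for activations and buffers, not an exact figure; real usage also depends on context length):

```python
def estimate_vram_gb(n_params_billion, bits_per_weight, overhead=1.2):
    """Rough lower bound on inference VRAM: weight storage plus an
    assumed ~20% for activations/buffers (heuristic, not a guarantee)."""
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# e.g. a 7B model quantized to 4 bits:
print(round(estimate_vram_gb(7, 4), 1))   # -> 4.2 (GB)
# the same model in fp16:
print(round(estimate_vram_gb(7, 16), 1))  # -> 16.8 (GB)
```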
-
### Describe the bug
The model loads but won't output anything.
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
Loaded LLaMA 7B in 4-bit.
### Scre…
-
Dear All,
I'm running a 30B model in 4-bit on my 4090 (24 GB) + Ryzen 7700X with 64 GB RAM.
After generating some tokens when asked to produce code, I get out-of-memory errors.
Using --gpu-memory has no effect.
ser…
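Running out of memory only after some tokens have been generated is consistent with KV-cache growth: the cache grows linearly with the number of cached tokens on top of the weight footprint. A sketch with assumed LLaMA-30B-like shape parameters (60 layers, 52 heads, head dim 128 are assumptions for illustration):

```python
def kv_cache_gb(n_layers, n_heads, head_dim, seq_len, bytes_per_elem=2):
    """Size of the attention KV cache: keys + values (factor 2) for every
    layer, head, and cached token, in fp16 (2 bytes/element) by default."""
    return 2 * n_layers * n_heads * head_dim * seq_len * bytes_per_elem / 1e9

# Assumed LLaMA-30B-like shape: 60 layers, 52 heads, head_dim 128.
# At a 2048-token context the cache alone is ~3.3 GB on top of the
# weights, which is why memory usage climbs as generation proceeds.
print(round(kv_cache_gb(60, 52, 128, 2048), 2))  # -> 3.27 (GB)
```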