-
Hello,
I run MiniGPT-4 with the 13B model in 16-bit on an A6000, which has 48 GB of GPU RAM. Loading the model uses about 36 GB of GPU RAM, and when I run some inferences GPU memory usage reaches almost 100% and I get some errors. Is t…
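Back-of-the-envelope arithmetic shows where most of that memory goes (a sketch only: it assumes 2 bytes per parameter for fp16 weights and ignores activations and the KV cache, which grow with batch size and sequence length and likely explain the jump to ~100% at inference time):

```python
# Rough memory estimate for a 13B-parameter model in fp16.
# Assumptions: 2 bytes/param for fp16; weights only, no activations/KV cache.
def model_weight_gb(n_params_billion: float, bytes_per_param: int = 2) -> float:
    """Return approximate GPU memory (GiB) needed just for the weights."""
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

weights = model_weight_gb(13)  # fp16 weights alone, before any runtime buffers
print(f"fp16 weights: {weights:.1f} GiB")  # roughly 24 GiB of the 48 GiB card
```

The gap between this figure and the ~36 GB observed at load time, plus per-request activations, is what pushes usage toward the card's limit during inference.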
-
Thanks for compiling such an outstanding 3D-LLM list, and for your efforts in maintaining this repository.
We have two related works that we hope you can add to your awesome repository~
**MiniGPT-3…
-
Thanks for making this code available. While trying to run the prepare_data_finetune scripts, I get LoRA-related import errors:
from .lora import lora, mark_only_lora_as_trainable, lora_state_dict…
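A quick way to narrow this down is to check whether the LoRA package is importable at all in the active environment (a sketch; it assumes the missing module is the `loralib` dependency, which is a common cause of this class of import error, rather than a file missing from the repo itself):

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if `name` can be imported in the current environment."""
    return importlib.util.find_spec(name) is not None

# If this prints False, try `pip install loralib` in the same environment
# before re-running the finetuning scripts.
print("loralib available:", has_module("loralib"))
```

If the module is present and the error persists, the relative import (`from .lora import …`) suggests the script may instead be run from the wrong working directory or outside the package.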
-
Hello, have you changed the parameter size of the llama_proj module?
![image](https://user-images.githubusercontent.com/28759055/233894506-5dbf7a80-557d-4ca1-afaa-77e15dce4877.png)
could you do…
-
Does using a diffusion model in a language model increase the generality of the language model?
-
Mac, Apple M1 Pro, 32 GB
Vision-CAIR/MiniGPT-4/minigpt4/models/eva_vit.py
self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
num_patches = 256
embed_dim = 1408
but I …
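For reference, the shape that line produces can be computed directly (a sketch with plain arithmetic, using the values quoted above; the `(image_size // patch_size) ** 2` derivation of `num_patches` is an assumption based on standard ViT patching):

```python
# eva_vit.py builds pos_embed with shape (1, num_patches + 1, embed_dim).
# A checkpoint trained at a different image or patch size would yield a
# different num_patches and therefore a shape mismatch when loading.
num_patches = 256   # e.g. (224 // 14) ** 2 for a 224px image, 14px patches
embed_dim = 1408    # hidden size quoted in the issue
pos_embed_shape = (1, num_patches + 1, embed_dim)  # +1 for the [CLS] token
n_elements = 1 * (num_patches + 1) * embed_dim

print(pos_embed_shape)  # (1, 257, 1408)
print(n_elements)       # 361856 floats in the parameter
```

So any checkpoint whose positional embedding is not exactly `(1, 257, 1408)` will need interpolation or a matching image size before it can be loaded.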
-
Initializing Chat
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| …
-
What I have observed while running the pretrained model for detection: I was not able to get multiple detections. Say I want to detect a car; it will only detect one car, not all the cars.
-
I have two 4090s with 24 GB each. If possible, please provide an extra argument to demo.py to load the model either on the CPU or on 2 or more GPUs, and another argument to run in 16-bit and take advantage of the extra GPU RAM…
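The requested flags could be sketched roughly like this (the flag names `--device` and `--precision` are illustrative assumptions, not the repo's actual demo.py interface; actual multi-GPU sharding would additionally need something like `accelerate`'s `device_map="auto"` when loading the model):

```python
# Hypothetical sketch of extra demo.py arguments for device placement and
# 16-bit loading. Parsing only; model loading itself is out of scope here.
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser()
    p.add_argument("--device", default="cuda:0",
                   help="cpu, cuda:0, or auto to shard across available GPUs")
    p.add_argument("--precision", choices=["fp32", "fp16"], default="fp16",
                   help="fp16 halves weight memory, useful on 24 GB cards")
    return p

args = build_parser().parse_args(["--device", "auto", "--precision", "fp16"])
print(args.device, args.precision)  # auto fp16
```

Splitting a 13B fp16 model (~24 GiB of weights) across two 24 GB cards would leave headroom for activations on each, which is the point of the request.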
-
(minigpt4) E:\MiniGPT-4-main>python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
Initializing Chat
Downloading (…)solve/main/vocab.txt: 100%|█████████████████████████████████████…