Vision-CAIR / MiniGPT-4

Open-sourced code for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
BSD 3-Clause "New" or "Revised" License

Conda env create error on Mac M2 #8

Open DexterLagan opened 1 year ago

DexterLagan commented 1 year ago

MacBook Air M2

Steps to reproduce:

conda env create -f environment.yml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

TsuTikgiau commented 1 year ago

Thanks for your interest! Our code is currently only tested on Linux, and I'm also not sure whether the Mac M2 has a powerful enough GPU to run our model. As for the error you show: the environment file is trying to install CUDA inside the new environment, so if you already have CUDA installed on your machine, you can comment that entry out.
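If you go that route (dropping the pinned cudatoolkit from environment.yml and relying on whatever CUDA is already on the machine), a minimal sanity check after creating the env might look like the sketch below; it only uses standard PyTorch calls, and on Apple Silicon torch.cuda.is_available() is expected to report False.

```python
# Sketch: check what the freshly created environment can see (assumes PyTorch is installed).
import torch

print("torch version:", torch.__version__)
# True only with a CUDA build of PyTorch and a working system CUDA install;
# this will be False on a MacBook Air M2.
print("CUDA available:", torch.cuda.is_available())
```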

iHaagcom commented 1 year ago

Unfortunately, I tried creating a requirements.txt file for Windows to avoid conda, and I got the same error.

DexterLagan commented 1 year ago

I'll find out how to install cudatoolkit with conda. Vicuna runs perfectly on my M2 with 24 GB, so I'm hoping MiniGPT will also run. I can't see the visual encoder taking that much more RAM, considering Vicuna runs on just 8 GB of RAM or less. Cheers

DexterLagan commented 1 year ago

I was able to install nearly all packages into the conda env by chmodding conda's environments.txt to 666 and removing cudatoolkit from environment.yml. There should be a way to run this model on CPU only.

Arnold1 commented 1 year ago

@DexterLagan Are you able to run training and inference of Vicuna and MiniGPT on the M2 already? Is it able to use the GPU on the M2?

DexterLagan commented 1 year ago

@Arnold1 Yes, I run most models on the M2 using llama.cpp and alpaca.cpp. I believe they run on the GPU when using llama.cpp. Vicuna runs very fast and well, Alpaca 7B runs faster, and Alpaca 30B runs slowly but surely. My MacBook Air has 24 GB, so most models fit just fine. In fact, I've had more success running models locally on Mac than on Windows, since most repos focus on Linux and Mac.

Arnold1 commented 1 year ago

@DexterLagan How do you run it, and with what commands? I have an M1 with 64 GB of RAM...

DexterLagan commented 1 year ago

Follow the instructions on the llama.cpp repo for Mac. The goal was to run on Mac in the first place. Cheers!

Coca-Cola1999 commented 1 year ago

I want to run it on a MacBook Pro M1, but I get a ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported, after executing this command: python -m fastchat.model.apply_delta --base llama-7b-hf/ --target vicuna-7b/ --delta vicuna-7b-delta-v1.1/. Help me please! My email: a18622563703@163.com

DexterLagan commented 1 year ago

@Coca-Cola1999 Make sure you search the issues on the llama.cpp repo, and post one if nothing matches. I haven't encountered this error. Cheers!

huma0605 commented 1 year ago

@DexterLagan thanks so much for sharing your experience! Could you also share how you connect Vicuna (running through Llama.cpp) with this MiniGPT4? I am running Llama.cpp Vicuna on my MacBook now, but still trying to figure out how to connect it to MiniGPT4. Some directions would be appreciated!

gino8080 commented 1 year ago

Can we use Docker to run it on Apple Silicon?

wacdev commented 1 year ago
[attached image]

See pull request https://github.com/Vision-CAIR/MiniGPT-4/pull/174

I changed max_new_tokens=3000 and max_length=20000 in demo.py.
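Roughly, the change amounts to passing those values through to the chat object's answer() call; the sketch below is illustrative rather than the exact diff, and the surrounding names are placeholders.

```python
# Illustrative sketch (not the repo's exact code): passing larger generation limits
# to the chat object that MiniGPT-4's demo builds at startup.
def answer_with_long_output(chat, chat_state, img_list, num_beams=1, temperature=1.0):
    """Ask the model for a reply, allowing much longer generations."""
    llm_message = chat.answer(
        conv=chat_state,
        img_list=img_list,
        num_beams=num_beams,
        temperature=temperature,
        max_new_tokens=3000,  # raised so long stories are not cut off
        max_length=20000,     # overall token budget; large values need much more memory
    )[0]
    return llm_message
```

The larger the budget, the more memory generation needs, which matters on machines that are already swapping.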

Then I prompted it with: "On the basis of this picture, create a detailed, wonderful, fascinating novel story."

It generated the story below:

In a land far, far away, there was a young woman named Li who lived in a small village on the outskirts of the city. She was known for her beauty and grace, and many men in the village had asked for her hand in marriage. However, Li had always declined, saying that she was not ready to settle down.

One day, while Li was out walking near the river, she saw a man standing on the bridge. He was dressed in a white robe with gold embroidery and held a fan in his hand. As she approached, he turned to her and bowed deeply.

“Hello, miss,” he said in a smooth, melodious voice. “My name is Tao, and I am a traveler. I have been traveling for many years and have seen many beautiful sights, but none as lovely as you.”

Li blushed at the compliment and lowered her eyes. “Thank you, sir,” she said softly. “But I am just a simple village girl. There is nothing special about me.”

Tao chuckled. “Oh, but there is, miss. You have a kind heart and a gentle spirit, and those are qualities that are rare and precious. I have never met anyone like you before.”

Li looked up at him, surprised. “You have never met anyone like me?” she asked.

Tao smiled. “No, miss. You are unique and special, and I would like to get to know you better.”

Li hesitated for a moment, wondering if she should trust this stranger. But something about him seemed kind and sincere, and she found herself wanting to believe him.

“Very well,” she said finally. “I will allow you to get to know me better. But you must promise to be respectful and kind.”

Tao bowed again. “I promise, miss. I will do my best to make you happy.”

And with that, the two of them began a journey of discovery and love that would take them to the farthest corners of the earth and beyond.


The story, translated into Chinese and polished with GPT-4:

在一个遥远的土地上,有一个名叫丽的年轻女子,她居住在城市边缘的一个小村庄。她因美貌和优雅而闻名于世,村里许多男子都向她求婚。然而,丽总是婉拒他们,称自己还未准备好安定下来。

一天,丽在河边散步时,看到一名男子站在桥上。他身着白色镶金的长袍,手持一把折扇。当丽走近时,他转身向她鞠躬行礼。

“小姐好,”他用柔和悦耳的声音说道,“我叫涛,是一名行者。我游历多年,目睹过世间诸多美景,但都不及你的美丽动人。”

丽听到这番赞美,脸颊泛红,羞涩地低下了头。“谢谢您,先生,”她轻声说,“但我只是一个普通的乡村姑娘,没有什么特别之处。”

涛笑了笑。“哦,但是有的,小姐。你拥有一颗善良的心和温柔的气质,这样的品质实属罕见且珍贵。我从未遇见过像你这样的人。”

丽抬起头,惊讶地看着他。“您从未遇到过像我这样的人?”她问道。

涛微笑着说:“是的,小姐。你独一无二、与众不同,我希望能更多地了解你。”

丽犹豫了一会儿,不知道是否该信任这个陌生人。但他身上似乎有种善意和真诚,她情不自禁地想要相信他。

“好吧,”她最终说道,“我愿意让你更了解我。但你必须承诺尊重我,对我好。”

涛再次鞠躬。“我承诺,小姐。我会竭尽所能让你幸福。”

就这样,他们开始了一段探索与爱情的旅程,这段旅程将带他们到地球的最远角落,乃至更遥远的地方。

The image was created with Stable Diffusion.


CoruNethron commented 1 year ago

@wacdev Thank you for the fixes. Tested on an M1 with 16 GB, with max_new_tokens=150, max_length=1000, and all your other changes. It runs, but very slowly because of excessive swap usage; the description of one image took 25 minutes in my case.

I'm trying to figure out whether the mps device can be used to utilize the GPU.
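The standard PyTorch availability check is short; whether the rest of the MiniGPT-4 code path then works on mps is a separate question.

```python
# Minimal device-selection sketch: prefer CUDA, then Apple's MPS backend, then CPU.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")  # Apple Silicon GPU via Metal
else:
    device = torch.device("cpu")

print("Using device:", device)
```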

simongcc commented 11 months ago

@CoruNethron I have just tried to run it on an M1 with 64 GB; I'm not sure what is missing. After "Loading LLAMA", it reports this error:

CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
...

Did I miss anything during the installation? I am using an environment created with conda, dedicated to testing MiniGPT.

CoruNethron commented 11 months ago

@simongcc Please check this fork https://github.com/wacfork/MiniGPT-4/tree/main by @wacdev; there were a few changes, like the device type (used when a CUDA device is not available). As far as I remember, I just took all the changes from that fork and also slightly changed max_new_tokens and max_length because of the low RAM in my system.

simongcc commented 11 months ago

@CoruNethron Thank you so much for shedding light on this! I will check it out and try.🙏🏽😃

Update: It runs now. But when it runs, it encounters a small error like this:

/miniforge3/envs/minigpt4-arm/lib/python3.9/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: dlopen(/miniforge3/envs/minigpt4-arm/lib/python3.9/site-packages/torchvision/image.so, 0x0006): Symbol not found: (__ZN2at4_ops19empty_memory_format4callEN3c108ArrayRefIxEENS2_8optionalINS2_10ScalarTypeEEENS5_INS2_6LayoutEEENS5_INS2_6DeviceEEENS5_IbEENS5_INS2_12MemoryFormatEEE)

so the resulting MiniGPT seems unable to get the image after uploading. Did you encounter the same error?
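A note for anyone hitting the same thing: this warning is commonly caused by a torch / torchvision version mismatch rather than by MiniGPT itself; a quick check is sketched below.

```python
# Quick diagnostic: the "Failed to load image Python extension" warning is
# commonly caused by a torch / torchvision version mismatch.
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
# The two should come from matching releases (torchvision is built against a
# specific torch version); reinstalling them as a pair usually clears the warning.
```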