microsoft / JARVIS

JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
MIT License
23.55k stars 1.96k forks

So Macs can't use this? #39

Open iwoomi opened 1 year ago

iwoomi commented 1 year ago

Macs don't use NVIDIA graphics cards, so Macs can't use this, right?

ekiwi111 commented 1 year ago

I guess you still can, but using hybrid mode only. https://github.com/microsoft/JARVIS#configuration
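For reference, here is a minimal sketch of the relevant part of the config file. The key names (`inference_mode`, `device`) are assumed from the README linked above; verify them against your local copy:

```yaml
# Sketch only; key names assumed from the JARVIS README.
inference_mode: hybrid   # one of: local, huggingface, hybrid
device: cuda:0           # cuda:<id> or cpu; local mode expects an NVIDIA GPU
```

With `inference_mode: huggingface`, inference is delegated entirely to remote Hugging Face endpoints, which is why that mode is the one usually suggested for Macs.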

xiebruce commented 1 year ago

I guess you still can, but using hybrid mode only. https://github.com/microsoft/JARVIS#configuration

But the server needs an NVIDIA graphics card.

image image
Fermain commented 1 year ago

There are various ways to configure this package depending on your resource limitations. I am using it on a Mac right now:

image
ethanye77 commented 1 year ago

There are various ways to configure this package depending on your resource limitations. I am using it on a Mac right now:

image

image

I am also a Mac user and I encountered this issue while running this line of code. Could you please tell me what I should do?

ethanye77 commented 1 year ago

Here's my issue:

image
Fermain commented 1 year ago

The answer to your issue is on line 3 of your screenshot. Install git-lfs and try the model download step again.

ethanye77 commented 1 year ago

The answer to your issue is on line 3 of your screenshot. Install git-lfs and try the model download step again.

image

Thank you, your solution is very helpful. But after downloading so many files, the progress is still 0%. Is this normal?

ErikDombi commented 1 year ago

Yes, the LFS objects are rather large. Personally, my models folder is 275 GB.

davidjhwu commented 1 year ago

Are the LFS objects absolutely necessary? Trying to run this on my MacBook Air lol (16 GB RAM, 500 GB SSD)

Fermain commented 1 year ago

Are the LFS objects absolutely necessary? Trying to run this on my MacBook Air lol (16 GB RAM, 500 GB SSD)

No, you can run the lite.yaml configuration to use remote models only, although this is quite limited at the moment. I suggest using an external hard drive or SSD to manage these large models.

iwoomi commented 1 year ago

@Fermain So if we deploy JARVIS on macOS, we can only use lite.yaml (that is, inference_mode: huggingface), right? Because inference_mode: local (or inference_mode: hybrid) requires an NVIDIA graphics card, and Macs don't have one. Is that right?

sirlaurie commented 1 year ago

@Fermain So if we deploy JARVIS on macOS, we can only use lite.yaml (that is, inference_mode: huggingface), right? Because inference_mode: local (or inference_mode: hybrid) requires an NVIDIA graphics card, and Macs don't have one. Is that right?

Comment out lines 298-300 (approximately, if you haven't reformatted the file) in models_server.py:

"midas-control": {some model here}

Then you can run without an NVIDIA device.
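To illustrate the workaround, here is a sketch of what the model map looks like with the CUDA-dependent entry commented out. Everything except the `"midas-control"` line is a placeholder, not the file's real contents:

```python
# Illustrative sketch of the model map in models_server.py; the entry names
# other than "midas-control" are placeholders invented for this example.
local_fold = "models"  # hypothetical local model folder used in the real file

pipes = {
    "example-model": {"model": object()},  # placeholder for the other entries
    # "midas-control" is the CUDA-dependent entry; commenting it out (as
    # suggested above) lets the server start without an NVIDIA device:
    # "midas-control": {
    #     "model": MidasDetector(model_path=f"{local_fold}/lllyasviel/ControlNet/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt")
    # },
}
```

Since the commented entry is never constructed, no CUDA initialization happens for it at startup.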

van0303 commented 1 year ago

I have just downloaded the models on my Mac; I don't have an NVIDIA graphics card. I started the server with models_server.py --config lite.yaml and got this error: AssertionError: Torch not compiled with CUDA enabled

After commenting out

"midas-control": {
            "model": MidasDetector(model_path=f"{local_fold}/lllyasviel/ControlNet/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt")
            }

the models_server started.

sirlaurie commented 1 year ago

did you run git lfs install?

xmagicwu commented 1 year ago

Yes, git-lfs is installed.

xmagicwu commented 1 year ago

The version is 3.3.0.

sirlaurie commented 1 year ago

I mean that after installing git-lfs, you still need to run git lfs install first.

If you've already done that, run sh download.sh again.
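A small pre-flight check can save a failed download run. This is a hedged sketch (not part of the repo): it only verifies that the git-lfs binary is on PATH before you run download.sh; it does not check that `git lfs install` has actually been run.

```python
import shutil

def lfs_ready() -> bool:
    """Return True when the git-lfs binary is found on PATH.

    download.sh pulls large LFS objects, so git-lfs must be installed
    (e.g. via `brew install git-lfs`) before running it.
    """
    return shutil.which("git-lfs") is not None

print("git-lfs on PATH:", lfs_ready())
```

If this prints False, install git-lfs and run `git lfs install` before retrying the download.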

xmagicwu commented 1 year ago

Thanks, I'll try it

xiebruce commented 1 year ago

There are various ways to configure this package depending on your resource limitations. I am using it on a Mac right now:

image

@Fermain @ethanye77 Did you encounter this error: https://github.com/microsoft/JARVIS/issues/67

Solving environment: failed with initial frozen solve. Retrying with flexible solve.

liujie316316 commented 1 year ago

Thanks, I'll try it

Hello, have you resolved this issue? I got the same error. image

I executed the following commands but still got an error:

    pip install git-lfs
    cd models
    sh download.sh

Fermain commented 1 year ago

I executed the following commands but still got an error: pip install git-lfs; cd models; sh download.sh

git-lfs is not a pip package. You can use homebrew to install it:

brew install git-lfs

The error message states that this is not installed.

liujie316316 commented 1 year ago

I executed the following commands but still got an error: pip install git-lfs; cd models; sh download.sh

git-lfs is not a pip package. You can use homebrew to install it:

brew install git-lfs

The error message states that this is not installed.

OK, thank you!

xmagicwu commented 1 year ago
image

My device is a MacBook with an M1 chip; how do I solve this problem?

Fermain commented 1 year ago

Without Nvidia hardware, there is no solution to this particular issue. This system is not designed to run on Apple hardware and can only be used in limited ways on this platform.

xmagicwu commented 1 year ago

How do I use it in that limited way?

Fermain commented 1 year ago

The readme contains instructions for using the model with the lite.yaml config file instead of the full config.yaml file. Add your API keys to this lite file, and run this instead of config.

sirlaurie commented 1 year ago
image

My device is a MacBook with an M1 chip; how do I solve this problem?

Check out my first post in this issue:

https://github.com/microsoft/JARVIS/issues/39#issuecomment-1499319851

you don't need to change config.yaml to lite.yaml

xmagicwu commented 1 year ago
image

My device is a MacBook with an M1 chip; how do I solve this problem?

Check out my first post in this issue:

#39 (comment)

You don't need to change config.yaml to lite.yaml

image

Did it work successfully?

sirlaurie commented 1 year ago

It did.

Fermain commented 1 year ago

@sirlaurie I missed that comment, very helpful - thanks

iwoomi commented 1 year ago

Without Nvidia hardware, there is no solution to this particular issue. This system is not designed to run on Apple hardware and can only be used in limited ways on this platform.

@sirlaurie @Fermain I notice that we can configure the device to "cuda" or "cpu" here:

device: cuda:0 # cuda:id or cpu

Does that mean that if I set the device to "cpu", I can run the server with inference_mode: local on a Mac, whether it has an M1/M2 chip (newer Macs) or an Intel CPU (older Macs)?

xmagicwu commented 1 year ago

@Fermain So if we deploy JARVIS on macOS, we can only use lite.yaml (that is, inference_mode: huggingface), right? Because inference_mode: local (or inference_mode: hybrid) requires an NVIDIA graphics card, and Macs don't have one. Is that right?

Comment out lines 298-300 (approximately, if you haven't reformatted the file) in models_server.py:

"midas-control": {some model here}

Then you can run without an NVIDIA device.

Very helpful, thanks!

But I've run into another problem: my HuggingGPT doesn't work.

image
sirlaurie commented 1 year ago

Without Nvidia hardware, there is no solution to this particular issue. This system is not designed to run on Apple hardware and can only be used in limited ways on this platform.

@sirlaurie @Fermain I notice that we can configure the device to "cuda" or "cpu" here:

device: cuda:0 # cuda:id or cpu

Does that mean that if I set the device to "cpu", I can run the server with inference_mode: local on a Mac, whether it has an M1/M2 chip (newer Macs) or an Intel CPU (older Macs)?

It looks like a newly added option, but unfortunately, still no.

sirlaurie commented 1 year ago

@Fermain So if we deploy JARVIS on macOS, we can only use lite.yaml (that is, inference_mode: huggingface), right? Because inference_mode: local (or inference_mode: hybrid) requires an NVIDIA graphics card, and Macs don't have one. Is that right?

Comment out lines 298-300 (approximately, if you haven't reformatted the file) in models_server.py: "midas-control": {some model here} Then you can run without an NVIDIA device.

Very helpful, thanks!

But I've run into another problem: my HuggingGPT doesn't work. image

Check your network or your API quota.

xmagicwu commented 1 year ago

thanks

image

How can the generated pictures be accessed?

xiebruce commented 1 year ago

thanks

image

How can the generated pictures be accessed?

This is a bug: you should create "images" and "audios" folders under /path/to/JARVIS/server/public/. In theory the program should create these two folders automatically, but it doesn't, so this is a bug!
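Until the bug is fixed upstream, the two folders can be created idempotently with a short script. This is a sketch, not part of the repo, and the base path is illustrative; adjust it to your checkout:

```python
import os

# Workaround for the missing-folder bug described above: create the two
# directories the server expects under server/public. exist_ok=True makes
# this a no-op if the folders are already there.
public_dir = os.path.join("JARVIS", "server", "public")  # illustrative path
for sub in ("images", "audios"):
    os.makedirs(os.path.join(public_dir, sub), exist_ok=True)
```

Run it once from the directory containing your JARVIS checkout before starting the server.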

xmagicwu commented 1 year ago

thanks

image

How can the generated pictures be accessed?

This is a bug: you should create "images" and "audios" folders under /path/to/JARVIS/server/public/. In theory the program should create these two folders automatically, but it doesn't, so this is a bug!

image

The folders have been created.

xmagicwu commented 1 year ago
image image

Why does the path for generated images keep changing?

xmagicwu commented 1 year ago
image

what's wrong?

iwoomi commented 1 year ago
image

what's wrong?

Weird, it shouldn't be like this. Please back up your lite.yaml, force-update to the latest commit, and try again.

sirlaurie commented 1 year ago

I think the latest commit has fixed this bug. Just pull again.

hx9111 commented 1 year ago

Use the following command, as recommended, to enable MPS (Apple M1, M2, Max):

conda install pytorch torchvision torchaudio -c pytorch-nightly
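After installing the nightly build, you can verify the MPS backend with a small check. This sketch is not from the repo; it uses the public `torch.backends.mps.is_available()` API and simply reports False when PyTorch isn't installed at all:

```python
def mps_available() -> bool:
    """Report whether this PyTorch build exposes a usable MPS backend
    (Apple Silicon GPU). Returns False when torch is not installed."""
    try:
        import torch
    except ImportError:
        return False  # no PyTorch in this environment
    mps = getattr(torch.backends, "mps", None)
    return bool(mps is not None and mps.is_available())

print("MPS available:", mps_available())
```

Note that even with MPS available, JARVIS's local inference path may still assume CUDA, as discussed earlier in this thread.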