iwoomi opened 1 year ago
I guess you still can, but using hybrid mode only: https://github.com/microsoft/JARVIS#configuration
But the server needs an Nvidia graphics card.
There are various ways to configure this package depending on your resource limitations. I am using it on a Mac right now:
I am also a Mac user and I encountered this issue while running this line of code. Could you please tell me what I should do if it is convenient?
here's my issue
The answer to your issue is on line 3 of your screenshot. Install git-lfs and try the model download step again.
Thank you. Your solution is very helpful, but after downloading so many files, the progress is still 0%. Is this a normal situation?
Yes, the LFS objects are rather large. My models folder is 275 GB personally.
Are the LFS objects absolutely necessary? Trying to run this on my MacBook Air lol (16 GB RAM, 500 GB SSD)
No, you can run the lite.yaml configuration to use remote models only, although this is quite limited at the moment. I suggest using an external hard drive or SSD to manage these large models.
@Fermain So if we deploy JARVIS on macOS, we can only use lite.yaml (that is, inference_mode: huggingface), right? Because if we use inference_mode: local (or inference_mode: hybrid), we need an Nvidia graphics card, but Macs have no Nvidia graphics card. Is that right?
Comment out lines 298-300 (roughly, if you haven't reformatted the file) in models_server.py, i.e. the "midas-control": {some model here} entry, and you can run without an Nvidia device.
I have just downloaded the models on my Mac; I don't have an Nvidia graphics card.
I started the server with models_server.py --config lite.yaml and got this error message:
AssertionError: Torch not compiled with CUDA enabled
After commenting out

```python
"midas-control": {
    "model": MidasDetector(model_path=f"{local_fold}/lllyasviel/ControlNet/annotator/ckpts/dpt_hybrid-midas-501f0c75.pt")
}
```

the models_server started.
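The workaround above amounts to loading the CUDA-only model conditionally. A minimal sketch of the idea; `build_models` and `cuda_available` are illustrative names standing in for the real models_server.py logic and `torch.cuda.is_available()`, not the project's actual API:

```python
# Sketch: skip CUDA-only entries when no Nvidia GPU is present.
# build_models / cuda_available are illustrative names, not the real API.
def build_models(cuda_available: bool) -> dict:
    models = {"text-model": "loads on any device"}  # placeholder entry
    if cuda_available:
        # "midas-control" needs a CUDA-enabled torch build, so guard it
        models["midas-control"] = "MidasDetector(...)"
    return models

print(sorted(build_models(False)))  # → ['text-model']
```

With the guard in place, a Mac simply never registers the Midas model instead of crashing at import time.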
did you run `git lfs install`?
yes, git-lfs is installed; the version is 3.3.0
I mean after you installed git-lfs, you need to run `git lfs install` first. If you did that already, run `sh download.sh` again.
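Putting the replies together, the working sequence can be sketched as follows (assumes macOS with Homebrew; each step is guarded so it is skipped where the tool is already present or unavailable):

```shell
# Sketch of the git-lfs setup sequence from this thread (macOS + Homebrew assumed)
command -v git-lfs >/dev/null 2>&1 || brew install git-lfs   # git-lfs is not a pip package
command -v git-lfs >/dev/null 2>&1 && git lfs install        # enable the LFS hooks once per user
setup_done="yes"
echo "now re-run the download: cd models && sh download.sh"
```

The `git lfs install` step is what wires the LFS smudge filters into git, so `download.sh` pulls real model weights instead of tiny pointer files.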
Thanks, I'll try it
@Fermain @ethanye77 Did you encounter this error: https://github.com/microsoft/JARVIS/issues/67
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Hello, have you resolved this issue? I also reported the same error.
I executed the following commands but still got an error: pip install git-lfs; cd models; sh download.sh
git-lfs is not a pip package. You can use Homebrew to install it:
brew install git-lfs
The error message states that this is not installed.
OK,Thank you!
My device is a MacBook M1; how do I solve this problem?
Without Nvidia hardware, there is no solution to this particular issue. This system is not designed to run on Apple hardware and can only be used in limited ways on this platform.
How can I use it in that limited way?
The readme contains instructions for using the model with the lite.yaml config file instead of the full config.yaml file. Add your API keys to this lite file, and run this instead of config.
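For reference, a sketch of what such a lite-style config might contain. Only `inference_mode` and `device` appear verbatim in this thread; the credential key name is a placeholder assumption, so check the real lite.yaml in the repo before copying:

```yaml
# Sketch only -- the credential field name is an assumption, not the real schema
openai_api_key: YOUR_OPENAI_KEY   # placeholder; the actual key name may differ
inference_mode: huggingface       # remote models only; no local Nvidia GPU needed
device: cpu                       # cuda:id or cpu
```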
Check out my first post in this issue:
https://github.com/microsoft/JARVIS/issues/39#issuecomment-1499319851
You don't need to change config.yaml to lite.yaml.
Did it work successfully?
it did
@sirlaurie I missed that comment, very helpful - thanks
@sirlaurie @Fermain I notice that we can configure the device to "cuda" or "cpu" here:
device: cuda:0 # cuda:id or cpu
Does that mean that if I set the device to "cpu", I can run the server with inference_mode: local on a Mac, whether it has an M1/M2 chip (new Mac) or an Intel CPU (old Mac)?
Very helpful, thanks. But I encountered another problem: my HuggingGPT doesn't work.
Looks like it's a newly added option, but unfortunately, it still doesn't work.
Check your network or your API quota.
thanks
How can the generated pictures be accessed?
This is a bug: you should create "images" and "audios" folders under /path/to/JARVIS/server/public/. Theoretically, the program should create these two folders automatically, but it didn't, so this is a bug!
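Until it's fixed upstream, the workaround can be scripted. A sketch using the path given above, relative to the JARVIS checkout:

```python
# Workaround sketch: create the output folders the server expects.
from pathlib import Path

public = Path("server/public")  # i.e. /path/to/JARVIS/server/public/
for sub in ("images", "audios"):
    # parents=True creates server/public if needed; exist_ok makes re-runs safe
    (public / sub).mkdir(parents=True, exist_ok=True)
print(sorted(p.name for p in public.iterdir()))
```

Run it once from the repository root and the generated pictures and audio files will have somewhere to land.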
The folders have been created. Why does the path for generated images keep changing?
what's wrong?
Weird, it shouldn't be like this. Please back up your lite.yaml, force-update to the latest commit, and try again.
I think the latest commit has fixed this bug. Just pull again.
Run the following command, as recommended, to use MPS (M1, M2, Max):
conda install pytorch torchvision torchaudio -c pytorch-nightly
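The nightly build enables torch's mps backend on Apple silicon. A sketch of the device-selection logic this implies; `pick_device` is an illustrative helper, not part of JARVIS, and the cuda > mps > cpu precedence is an assumption:

```python
# Sketch: choose a torch device string; cuda > mps > cpu precedence assumed.
def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
    if cuda_ok:
        return "cuda:0"  # Nvidia GPU, matching "device: cuda:0  # cuda:id or cpu"
    if mps_ok:
        return "mps"     # Apple-silicon backend from the pytorch-nightly build
    return "cpu"

print(pick_device(False, True))  # → mps
```

In real code the two flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`.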
Macs don't use Nvidia graphics cards, so Macs can't use this, right?