Closed liorbalmas closed 4 months ago
@liorbalmas - how was the project generated? Did you meet any issues during the "Generate Project" step? (the step looks like this:)
There are several placeholders ("<xxx_xxx>") in the draft project, and they are replaced during the "Generate Project" step.
Yes, I went through that screen and it went smoothly. But the paths here do not exist (taken from console_chat.py).
Additionally, the Python environment was not created in WSL. The README assumes it was, since it says to run conda activate [conda-env-name], but the environment wasn't created...
I am using Anaconda, installed directly under C:\. I gave my user full permissions on Anaconda's folder, and now the models are created (project generation takes much longer, and I can see it downloading them). When it finishes and I click the button to open the project, it opens in WSL as expected. So far so good, but the conda env was not created during project creation.
When I try to run first_time_setup.sh manually, I get a networking issue.
Eventually I get this message:
Thanks
Could you please try the VS Code command palette entry "AI Toolkit: Validate environment prerequisites" and see whether all prerequisites pass? Like:
BTW, a quick search suggests it may also be a conda version issue. E.g. https://stackoverflow.com/questions/68507862/how-to-solve-condahttperror-http-000-connection-failed-error-in-wsl https://stackoverflow.com/questions/67923183/miniconda-on-wsl2-ubuntu-20-04-fails-with-condahttperror-http-000-connection
The issue seems to be that my WSL distro (Ubuntu) cannot access the internet; restarting doesn't help. Trying to fix that...
It would have been helpful if the AI Toolkit project had checked that and displayed a clear error message.
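For anyone hitting the same symptom, a few quick checks from inside the WSL distro can narrow down whether the failure is DNS resolution or raw connectivity (this is a generic sketch, not from the thread; repo.anaconda.com is used only as an example host that conda needs to reach):

```shell
# Show which DNS server the WSL distro is using (WSL auto-generates this
# file on each start unless configured otherwise in /etc/wsl.conf)
cat /etc/resolv.conf 2>/dev/null || echo "/etc/resolv.conf missing"

# Test raw connectivity, bypassing DNS entirely
ping -c 1 8.8.8.8 >/dev/null 2>&1 && echo "raw connectivity OK" || echo "no raw connectivity"

# Test DNS resolution for the host conda downloads packages from
getent hosts repo.anaconda.com >/dev/null 2>&1 && echo "DNS OK" || echo "DNS broken"
```

If raw connectivity works but DNS is broken, the resolv.conf fix discussed below is the likely remedy; if even the raw ping fails, the problem is lower-level (firewall, VPN, or the WSL virtual adapter).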
Fixed it using this answer:
https://stackoverflow.com/a/63578387/2703411
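For reference, the fix in that answer boils down to two small config files inside the WSL distro (contents below are the commonly suggested values, not copied from this thread; the nameserver is an example):

```
# /etc/wsl.conf -- stop WSL from regenerating /etc/resolv.conf on each start
[network]
generateResolvConf = false

# /etc/resolv.conf -- delete the auto-generated file, then recreate it with
# a fixed public DNS server
nameserver 8.8.8.8
```

After editing both files, run wsl.exe --shutdown from Windows so the distro restarts with the new settings.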
first_time_setup.sh ran successfully.
Inference is running now.
Thank you for your help.
Thanks for the good suggestion to add a network check to the prerequisites validation. Added it to the backlog.
Closing, as the original issue is resolved.
The project's README says that
python gradio_chat.py --baseonly
should start a local web server for inference, but the command generates this error: it seems compute_dtype is an input parameter that should be provided by the user after the page and model are loaded.