getumbrel / llama-gpt

A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device. New: Code Llama support!
https://apps.umbrel.com/app/llama-gpt
MIT License

Stuck in an infinite loop while installing #44

Open · D2567 opened this issue 1 year ago

D2567 commented 1 year ago

llama-gpt-llama-gpt-api-7b-1 | Warning: Failed to create the file /models/llama-2-7b-chat.bin: Permission
llama-gpt-llama-gpt-api-7b-1 | Warning: denied
llama-gpt-llama-gpt-api-7b-1 |   0 3616M    0 15819    0     0  38489      0 27:21:54 --:--:-- 27:21:54 38489
llama-gpt-llama-gpt-api-7b-1 | curl: (23) Failure writing output to destination
llama-gpt-llama-gpt-api-7b-1 | Download failed. Trying with TLS 1.2...
llama-gpt-llama-gpt-api-7b-1 |   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
llama-gpt-llama-gpt-api-7b-1 |                                  Dload  Upload   Total   Spent    Left  Speed
llama-gpt-llama-gpt-api-7b-1 |   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
llama-gpt-llama-gpt-api-7b-1 | 100  1260  100  1260    0     0   6331      0 --:--:-- --:--:-- --:--:--  6363
llama-gpt-llama-gpt-api-7b-1 | Warning: Failed to create the file /models/llama-2-7b-chat.bin: Permission
llama-gpt-llama-gpt-api-7b-1 | Warning: denied
llama-gpt-llama-gpt-api-7b-1 |   0 3616M    0 15819    0     0  47647      0 22:06:19 --:--:-- 22:06:19 47647
llama-gpt-llama-gpt-api-7b-1 | curl: (23) Failure writing output to destination
llama-gpt-llama-gpt-api-7b-1 | python3 setup.py develop
llama-gpt-llama-gpt-api-7b-1 | /usr/local/lib/python3.11/site-packages/setuptools/command/develop.py:40: EasyInstallDeprecationWarning: easy_install command is deprecated.
llama-gpt-llama-gpt-api-7b-1 | !!
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | ****
llama-gpt-llama-gpt-api-7b-1 | Please avoid running setup.py and easy_install.
llama-gpt-llama-gpt-api-7b-1 | Instead, use pypa/build, pypa/installer or other
llama-gpt-llama-gpt-api-7b-1 | standards-based tools.
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | See https://github.com/pypa/setuptools/issues/917 for details.
llama-gpt-llama-gpt-api-7b-1 | ****
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | !!
llama-gpt-llama-gpt-api-7b-1 |   easy_install.initialize_options(self)
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | [0/1] Install the project...
llama-gpt-llama-gpt-api-7b-1 | -- Install configuration: "Release"
llama-gpt-llama-gpt-api-7b-1 | -- Up-to-date: /app/_skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/libllama.so
llama-gpt-llama-gpt-api-7b-1 | copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/libllama.so -> llama_cpp/libllama.so
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | running develop
llama-gpt-llama-gpt-api-7b-1 | /usr/local/lib/python3.11/site-packages/setuptools/_distutils/cmd.py:66: SetuptoolsDeprecationWarning: setup.py install is deprecated.
llama-gpt-llama-gpt-api-7b-1 | !!
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | ****
llama-gpt-llama-gpt-api-7b-1 | Please avoid running setup.py directly.
llama-gpt-llama-gpt-api-7b-1 | Instead, use pypa/build, pypa/installer or other
llama-gpt-llama-gpt-api-7b-1 | standards-based tools.
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | See https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html for details.
llama-gpt-llama-gpt-api-7b-1 | ****
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | !!
llama-gpt-llama-gpt-api-7b-1 |   self.initialize_options()
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | running egg_info
llama-gpt-llama-gpt-api-7b-1 | writing llama_cpp_python.egg-info/PKG-INFO
llama-gpt-llama-gpt-api-7b-1 | writing dependency_links to llama_cpp_python.egg-info/dependency_links.txt
llama-gpt-llama-gpt-api-7b-1 | writing requirements to llama_cpp_python.egg-info/requires.txt
llama-gpt-llama-gpt-api-7b-1 | writing top-level names to llama_cpp_python.egg-info/top_level.txt
llama-gpt-llama-gpt-api-7b-1 | reading manifest file 'llama_cpp_python.egg-info/SOURCES.txt'
llama-gpt-llama-gpt-api-7b-1 | adding license file 'LICENSE.md'
llama-gpt-llama-gpt-api-7b-1 | writing manifest file 'llama_cpp_python.egg-info/SOURCES.txt'
llama-gpt-llama-gpt-api-7b-1 | running build_ext
llama-gpt-llama-gpt-api-7b-1 | Creating /usr/local/lib/python3.11/site-packages/llama-cpp-python.egg-link (link to .)
llama-gpt-llama-gpt-api-7b-1 | llama-cpp-python 0.1.78 is already the active version in easy-install.pth
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | Installed /app
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | Processing dependencies for llama-cpp-python==0.1.78
llama-gpt-llama-gpt-api-7b-1 | Searching for diskcache==5.6.1
llama-gpt-llama-gpt-api-7b-1 | Best match: diskcache 5.6.1
llama-gpt-llama-gpt-api-7b-1 | Processing diskcache-5.6.1-py3.11.egg
llama-gpt-llama-gpt-api-7b-1 | Adding diskcache 5.6.1 to easy-install.pth file
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | Using /usr/local/lib/python3.11/site-packages/diskcache-5.6.1-py3.11.egg
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | Searching for numpy==1.26.0b1
llama-gpt-llama-gpt-api-7b-1 | Best match: numpy 1.26.0b1
llama-gpt-llama-gpt-api-7b-1 | Processing numpy-1.26.0b1-py3.11-linux-x86_64.egg
llama-gpt-llama-gpt-api-7b-1 | Adding numpy 1.26.0b1 to easy-install.pth file
llama-gpt-llama-gpt-api-7b-1 | Installing f2py script to /usr/local/bin
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | Using /usr/local/lib/python3.11/site-packages/numpy-1.26.0b1-py3.11-linux-x86_64.egg
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | Searching for typing-extensions==4.7.1
llama-gpt-llama-gpt-api-7b-1 | Best match: typing-extensions 4.7.1
llama-gpt-llama-gpt-api-7b-1 | Adding typing-extensions 4.7.1 to easy-install.pth file
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | Using /usr/local/lib/python3.11/site-packages
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | Finished processing dependencies for llama-cpp-python==0.1.78
llama-gpt-llama-gpt-api-7b-1 | Initializing server with:
llama-gpt-llama-gpt-api-7b-1 | Batch size: 2096
llama-gpt-llama-gpt-api-7b-1 | Number of CPU threads: 12
llama-gpt-llama-gpt-api-7b-1 | Number of GPU layers: 0
llama-gpt-llama-gpt-api-7b-1 | Context window: 4096
llama-gpt-llama-gpt-ui-1 | [INFO wait] Host [llama-gpt-api-7b:8000] not yet available...
llama-gpt-llama-gpt-api-7b-1 | /usr/local/lib/python3.11/site-packages/pydantic/_internal/_fields.py:127: UserWarning: Field "model_alias" has conflict with protected namespace "model_".
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | You may be able to resolve this warning by setting model_config['protected_namespaces'] = ('settings_',).
llama-gpt-llama-gpt-api-7b-1 |   warnings.warn(
llama-gpt-llama-gpt-api-7b-1 |
llama-gpt-llama-gpt-api-7b-1 | Traceback (most recent call last):
llama-gpt-llama-gpt-api-7b-1 |   File "<frozen runpy>", line 198, in _run_module_as_main
llama-gpt-llama-gpt-api-7b-1 |   File "<frozen runpy>", line 88, in _run_code
llama-gpt-llama-gpt-api-7b-1 |   File "/app/llama_cpp/server/__main__.py", line 46, in <module>
llama-gpt-llama-gpt-api-7b-1 |     app = create_app(settings=settings)
llama-gpt-llama-gpt-api-7b-1 |           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama-gpt-llama-gpt-api-7b-1 |   File "/app/llama_cpp/server/app.py", line 317, in create_app
llama-gpt-llama-gpt-api-7b-1 |     llama = llama_cpp.Llama(
llama-gpt-llama-gpt-api-7b-1 |             ^^^^^^^^^^^^^^^^
llama-gpt-llama-gpt-api-7b-1 |   File "/app/llama_cpp/llama.py", line 317, in __init__
llama-gpt-llama-gpt-api-7b-1 |     raise ValueError(f"Model path does not exist: {model_path}")
llama-gpt-llama-gpt-api-7b-1 | ValueError: Model path does not exist: /models/llama-2-7b-chat.bin
llama-gpt-llama-gpt-api-7b-1 exited with code 1
llama-gpt-llama-gpt-ui-1 | [INFO wait] Host [llama-gpt-api-7b:8000] not yet available...
llama-gpt-llama-gpt-ui-1 | [INFO wait] Host [llama-gpt-api-7b:8000] not yet available...
llama-gpt-llama-gpt-ui-1 | [INFO wait] Host [llama-gpt-api-7b:8000] not yet available...
llama-gpt-llama-gpt-ui-1 | [INFO wait] Host [llama-gpt-api-7b:8000] not yet available...
llama-gpt-llama-gpt-ui-1 | [INFO wait] Host [llama-gpt-api-7b:8000] not yet available...
llama-gpt-llama-gpt-ui-1 | [INFO wait] Host [llama-gpt-api-7b:8000] not yet available...
llama-gpt-llama-gpt-ui-1 | [INFO wait] Host [llama-gpt-api-7b:8000] not yet available...
llama-gpt-llama-gpt-ui-1 | [INFO wait] Host [llama-gpt-api-7b:8000] not yet available...
llama-gpt-llama-gpt-ui-1 | [INFO wait] Host [llama-gpt-api-7b:8000] not yet available...
llama-gpt-llama-gpt-ui-1 | [INFO wait] Host [llama-gpt-api-7b:8000] not yet available...
llama-gpt-llama-gpt-ui-1 | [INFO wait] Host [llama-gpt-api-7b:8000] not yet available...

This has been looping for the past 4 hours. I tried installing both the smallest and the largest models (7B and 70B). System: Windows 10 Pro, i7-8700K CPU @ 3.70GHz, 64.0 GB RAM, GTX 1080.

D2567 commented 1 year ago

Well, I found a solution, at least for the 7B model: just delete the files and try installing it again. I deleted the files twice, and the second time it worked. I'm about to try installing the 70B model now.
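
For anyone else hitting this: if you launched via docker compose, the delete-and-retry workaround looks roughly like this (a sketch; the model filename is the one from the log above, and the models/ path assumes the repo checkout, so adjust both to your setup):

    # stop the containers
    docker compose down
    # remove the partially downloaded model so the next start re-downloads it
    rm ./models/llama-2-7b-chat.bin
    # bring the stack back up and let the download retry
    docker compose up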

lityrdef commented 1 year ago

The error message you're encountering is due to permission issues on the /models directory. The script is trying to download a model and save it in the /models directory, but it doesn't have the necessary permissions to do so.
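
One quick way to confirm this (just a sketch; the container name is taken from the log above and the second command only works while that container is running):

    # check who owns the host side of the bind-mounted models directory
    ls -ld ./models
    # or inspect it from inside the API container
    docker exec -it llama-gpt-llama-gpt-api-7b-1 ls -ld /models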

Here are some possible solutions:

  1. Change the permissions of the /models directory: You can give write access to the user running the script with the chmod command. For example, chmod 777 /models will give all permissions to all users for this directory (see the sketch after this list). Be careful with this command, as it can be a security risk.

  2. Run the script as a superuser: If you trust the script, you can run it as a superuser which will give it all necessary permissions. You can do this by using the sudo command, e.g. sudo ./run-mac.sh --model 13b.
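
A minimal sketch of option 1, assuming you started llama-gpt with docker compose and the API container bind-mounts a models/ directory from the repo checkout (adjust the path if your compose file differs):

    # on the host, from the llama-gpt checkout (the host side of the /models bind mount)
    sudo chmod -R 777 ./models
    # restart the stack so the download is retried
    docker compose up

chmod 777 is the blunt fix; if you know which UID the container runs as, chown -R <uid> ./models is the tighter option.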

Also, the setuptools warning suggests avoiding setup.py and easy_install for package management, since they are deprecated. Instead, you should use more modern tools like pip or conda. This is only a warning and shouldn't prevent the script from running, but it's something to consider if you're maintaining this script.
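
For reference, the modern equivalents of the python3 setup.py develop step shown in the log would be along these lines (this happens inside the container image, so it's only relevant if you're maintaining the build, not if you're just installing the app):

    # editable install instead of setup.py develop
    pip install -e .
    # or install a released version directly
    pip install llama-cpp-python==0.1.78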

By GPT-4

jeffreyjackson commented 1 year ago

I was able to get the 70B model to work; you just need to wait it out. I'm using Docker and needed to bump up the memory quite a bit. You can watch Activity Monitor and it will eventually load. It's just loading into memory.

However, I did need to update the permissions on the models/ dir, and after that everything seemed to work as intended.
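
Roughly what that amounts to (a sketch; the models/ path assumes the llama-gpt checkout, and the Docker memory limit is raised in Docker Desktop under Settings -> Resources -> Memory):

    # make the models directory writable by the container, then restart
    sudo chmod -R a+rwX ./models
    docker compose up -d
    # watch memory usage while the model loads; it can take a while
    docker stats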