Closed samp1203 closed 4 months ago
Assuming you're going for a GPU installation, ensure your CUDA drivers are at the version required to run Concierge. That said, the errors from your terminal would be more helpful for diagnosing your issue. Assuming the issue is with your CUDA drivers, as it was with mine, try the following:
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-driver-510
sudo reboot
Hello, thank you for the reply. Here are the errors:
Loading Language Model: 3.83GB [11:56, 5.74MB/s]

MilvusException: <MilvusException: (code=2, message=Fail connecting to server on 127.0.0.1:19530. Timeout)>

Traceback:
File "/home/sam/Downloads/concierge-main/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 584, in _run_script
    exec(code, module.__dict__)
File "/home/sam/Downloads/concierge-main/pages/Loader.py", line 21, in
Is there a way to convert to a CPU install?
For a CPU install, I assume your best bet will be to choose the default CPU option when running install.py.
Again, thank you for the quick reply.
Any way you can explain what's causing the above errors?
Since you're interested in leveraging AI tools, I recommend showing your errors to your favorite AI tool for an explanation. Here's what ChatGPT 4 says about it:
The error you encountered, MilvusException: (code=2, message=Fail connecting to server on 127.0.0.1:19530. Timeout), suggests that the Python script is unable to establish a connection to the Milvus server, which is expected to be running on the local machine at port 19530. This could be due to several reasons:
- Milvus Server Not Running: Ensure the Milvus server is up and running on your local machine. You may need to start the server if it isn't running.
- Incorrect Port: Double-check that the Milvus server is configured to listen on port 19530. This configuration can typically be found in the Milvus server's settings or configuration files.
- Firewall or Network Issue: Verify that your firewall settings or network configuration are not blocking connections to this port.
- Server Overload or Timeout Settings: If the server is running but overloaded, it might not respond quickly enough, leading to a timeout. You may consider increasing the timeout setting in your script if possible, or ensure the server has sufficient resources and isn't overloaded.

To troubleshoot and resolve this issue, you could take the following steps:
- Check Server Status: Use tools like systemctl status (for Linux) or relevant commands in other operating systems to check if the Milvus server is actively running.
- Review Logs: Check the logs of the Milvus server for any errors or messages that could give more insight into why the connection is failing.
- Increase Timeout: Modify the timeout setting in the connection parameters to see if giving more time for the connection helps. This is typically an argument you can pass to the connect() function in the Milvus client.
- Try Manual Connection: Try to manually connect to the server using the same address and port with a simple script or tool to verify that the server is accessible.
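The "manual connection" check above doesn't even need the Milvus client: a plain TCP probe from the Python standard library tells you whether anything is listening at all. This is just a sketch; the host and port are taken from the error message (127.0.0.1:19530).

```python
import socket


def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Host/port taken from the MilvusException above.
    if port_open("127.0.0.1", 19530):
        print("Milvus port is reachable; the problem is likely inside the server.")
    else:
        print("Cannot reach 127.0.0.1:19530 -- is the milvus-standalone container up?")
```

If the probe fails, the container isn't running or the port mapping is wrong; if it succeeds but the client still times out, look at the Milvus server logs instead.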
What's the command to remove/uninstall so I can reinstall it?
Running install.py will allow you to remove previously installed components.
The 0.3.0 release switches from Milvus to an OpenSearch database backend; please create a new issue if you're still having trouble with 0.3.0.
Has anyone encountered these two errors?
Ollama server not found, please ensure the ollama Docker container is running! If so you may have to take down the docker compose and put it up again
Milvus database not found, please ensure the milvus-standalone, etcd and minio Docker containers are running! If so you may have to take down the docker compose and put it up again
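Before taking the docker compose down and up again, it can help to see which of those containers is actually reachable. The sketch below probes the default published ports of the stock Ollama/Milvus Docker setups (Ollama 11434, Milvus 19530, etcd 2379, MinIO 9000) -- these port numbers are assumptions, so adjust them to match your docker-compose file.

```python
import socket

# Assumed default published ports; change these if your compose file
# maps the services differently.
SERVICES = {
    "ollama": 11434,
    "milvus-standalone": 19530,
    "etcd": 2379,
    "minio": 9000,
}


def check_services(host: str = "127.0.0.1", timeout: float = 3.0) -> dict:
    """Map each service name to True/False based on TCP reachability."""
    status = {}
    for name, port in SERVICES.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                status[name] = True
        except OSError:
            status[name] = False
    return status


if __name__ == "__main__":
    for name, up in check_services().items():
        print(f"{name}: {'up' if up else 'DOWN'}")
```

Any service reported DOWN is the one to investigate with docker compose logs before restarting everything.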