Ragavendra-Vigneshwaran-R opened this issue 3 months ago
I was having some bug loops with pyenv and ruff. Check in pyproject.toml that ruff is 0.2.2 (you can just type it in) and then start hunting with make run. Run poetry setup/scripts from a terminal in the project root; bash might just mix it all together otherwise. Also, after you have all the dependencies... if it keeps uninstalling things, look for extra1, extra2, etc. =) pyproject.toml example:
```toml
[tool.poetry]
name = "private-gpt"
version = "0.4.0"
description = "Private GPT"
authors = ["Zylon <hi@zylon.ai>"]

[tool.poetry.dependencies]
python = ">=3.10,<3.11"
fastapi = {extras = ["all"], version = "^0.110.0"}
python-multipart = "^0.0.9"
injector = "^0.21.0"
pyyaml = "^6.0.1"
watchdog = "^4.0.0"
transformers = "^4.38.2"
llama-index-core = "^0.10.14"
llama-index-readers-file = "^0.1.6"
llama-index-llms-llama-cpp = {version = "^0.1.3", optional = true}
llama-index-llms-openai = {version = "^0.1.6", optional = true}
llama-index-llms-openai-like = {version = "^0.1.3", optional = true}
llama-index-llms-ollama = {version = "^0.1.2", optional = true}
llama-index-llms-azure-openai = {version = "^0.1.5", optional = true}
llama-index-embeddings-ollama = {version = "^0.1.2", optional = true}
llama-index-embeddings-huggingface = {version = "^0.1.4", optional = true}
llama-index-embeddings-openai = {version = "^0.1.6", optional = true}
llama-index-embeddings-azure-openai = {version = "^0.1.6", optional = true}
llama-index-vector-stores-qdrant = {version = "^0.1.3", optional = true}
llama-index-vector-stores-chroma = {version = "^0.1.4", optional = true}
llama-index-vector-stores-postgres = {version = "^0.1.2", optional = true}
llama-index-storage-docstore-postgres = {version = "^0.1.2", optional = true}
llama-index-storage-index-store-postgres = {version = "^0.1.2", optional = true}
psycopg2-binary = {version = "^2.9.9", optional = true}
asyncpg = {version = "^0.29.0", optional = true}
boto3 = {version = "^1.34.51", optional = true}
gradio = {version = "^4.19.2", optional = true}
pydantic = "^2.6.4"
torch = "^2.2.1"
torchvision = "^0.17.1"
tensorflow = "^2.16.1"

[tool.poetry.extras]
ui = ["gradio"]
llms-llama-cpp = ["llama-index-llms-llama-cpp"]
llms-openai = ["llama-index-llms-openai"]
llms-openai-like = ["llama-index-llms-openai-like"]
llms-ollama = ["llama-index-llms-ollama"]
llms-sagemaker = ["boto3"]
llms-azopenai = ["llama-index-llms-azure-openai"]
embeddings-ollama = ["llama-index-embeddings-ollama"]
embeddings-huggingface = ["llama-index-embeddings-huggingface"]
embeddings-openai = ["llama-index-embeddings-openai"]
embeddings-sagemaker = ["boto3"]
embeddings-azopenai = ["llama-index-embeddings-azure-openai"]
vector-stores-qdrant = ["llama-index-vector-stores-qdrant"]
vector-stores-chroma = ["llama-index-vector-stores-chroma"]
vector-stores-postgres = ["llama-index-vector-stores-postgres"]
storage-nodestore-postgres = ["llama-index-storage-docstore-postgres", "llama-index-storage-index-store-postgres", "psycopg2-binary", "asyncpg"]

[tool.poetry.dev-dependencies]
black = "^22.3"
mypy = "^0.910"
pre-commit = "^2.15.0"
pytest = "^6.2.5"
pytest-cov = "^3.0.0"
ruff = "^0.2.2"
pytest-asyncio = "^0.15.1"
types-pyyaml = "^6.0.1"

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
```
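If Poetry keeps resolving a different ruff than the pin above, re-locking and then checking what actually landed in the environment usually settles it. A minimal sketch, assuming a Poetry 1.x CLI (where `--no-update` keeps the other pins in place):

```bash
# Re-resolve the lock file after editing pyproject.toml
poetry lock --no-update

# Reinstall, then confirm the resolved ruff version
poetry install
poetry show ruff   # should report a 0.2.x release matching the ^0.2.2 pin
```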
@invisiblepancake Tried your configuration in my setup, but the issue is still there. Please let me know if there is any other solution :)
```bash
poetry install --extras "ui llms-llama-cpp embeddings-huggingface vector-stores-qdrant"
poetry run python scripts/setup
PGPT_PROFILES=local make run
```
? =)
edit: from the privateGPT root dir, of course (run bash from/to the right directory... heheh, that got me so many times), and you need to get your Pythons talking to each other; see the sketch below. You'll figure it out by then =)
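Concretely, "getting the Pythons to talk to each other" means making pyenv and Poetry agree on one interpreter that satisfies the `python = ">=3.10,<3.11"` pin above. A rough sketch, assuming pyenv is installed (the 3.10.13 patch release and the extras list are placeholders; swap in your own):

```bash
cd ~/privateGPT              # run everything from the project root

# Make a 3.10 interpreter the local default for this directory
pyenv install -s 3.10.13     # -s skips the build if it is already present
pyenv local 3.10.13

# Point Poetry's virtualenv at that interpreter, then reinstall
poetry env use "$(pyenv which python)"
poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"
```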
When I `make run`:

```
inky@pinky:~$ cd privateGPT/
winky@dinky:~/privateGPT$ PGPT_PROFILES=ollama make run
```
poetry run python -m private_gpt
21:12:00.501 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'ollama']
21:12:07.568 [INFO ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=ollama
21:12:08.854 [INFO ] private_gpt.components.embedding.embedding_component - Initializing the embedding model in mode=ollama
21:12:08.859 [INFO ] llama_index.core.indices.loading - Loading all indices.
21:12:09.128 [INFO ] private_gpt.ui.ui - Mounting the gradio UI, at path=/
21:12:09.205 [INFO ] uvicorn.error - Started server process [464000]
21:12:09.205 [INFO ] uvicorn.error - Waiting for application startup.
21:12:09.205 [INFO ] uvicorn.error - Application startup complete.
21:12:09.206 [INFO ] uvicorn.error - Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)
^C21:12:15.523 [INFO ] uvicorn.error - Shutting down
21:12:15.623 [INFO ] uvicorn.error - Waiting for application shutdown.
21:12:15.624 [INFO ] uvicorn.error - Application shutdown complete.
21:12:15.624 [INFO ] uvicorn.error - Finished server process [464000]
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/pinky/privateGPT/private_gpt/main.py", line 11, in <module>
```
It's so nice to have for your OS =)
I have installed privateGPT on an AWS EC2 instance of type g5.2xlarge (8 vCPU, 32 GiB memory). I am using llama-cpp as the LLM backend with TheBloke/Mistral-7B-Instruct-v0.2-GGUF as the model in my privateGPT setup. In the UI, I have uploaded files which contain instructions to train models. After that, I asked some questions related to them, but it keeps restarting and throwing errors like segmentation fault (core dumped), connection timed out, etc. Please help me stabilize privateGPT in my setup. (Sometimes, after reloading the browser, the UI can answer the question, but even that takes about 40 seconds or more.) Errors:
Is this issue related to insufficient memory? Or do I need to increase the memory of my EC2 instance to stabilize privateGPT?
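A segmentation fault under load on a 32 GiB machine running a 7B GGUF model is often memory pressure rather than a privateGPT bug. Before resizing the instance, it may be worth confirming whether the kernel's OOM killer is involved; a minimal check, assuming a standard Ubuntu EC2 image (the model path is a guess, adjust it to your layout):

```bash
# Watch memory headroom while a query runs
free -h

# See whether the kernel killed the process for running out of memory
sudo dmesg -T | grep -iE "out of memory|oom|killed process" | tail

# llama.cpp can also segfault on a truncated download; sanity-check the file size
ls -lh models/*.gguf
```

If the OOM killer shows up in dmesg, adding swap, moving to a larger instance, or offloading layers to the g5's GPU are the usual ways out.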