iulix21 opened 5 months ago
Could you give us more details: LLM backend, if it's running on Docker, etc? Please check https://github.com/zylon-ai/private-gpt/issues/1955 if you are using LlamaCPP
I am using llama-cpp (though I have switched to Ollama for now) with Docker on a Windows 11 machine. Model: mistral-7b-v0.2
Could you share your stack trace? I just ran it using Docker+Ollama, and everything is working as expected (using macOS; I don't have access to a Windows machine).
I have a problem: when I send a POST request twice, from 2 devices at the same time, the server fails and my Docker container restarts. The logs only mention a segmentation fault, with no further information. Request:
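For reference, here is a minimal sketch of how to reproduce the two-concurrent-requests scenario from one machine. It assumes PrivateGPT's OpenAI-compatible chat endpoint is exposed at `http://localhost:8001/v1/chat/completions`; adjust the URL to match your deployment. Errors are caught so the script reports failures instead of crashing:

```python
import json
from concurrent.futures import ThreadPoolExecutor
from urllib import request, error

# Assumption: PrivateGPT's OpenAI-compatible API; change host/port for your setup.
URL = "http://localhost:8001/v1/chat/completions"

def build_payload(prompt: str) -> bytes:
    """Build a minimal chat-completion request body."""
    return json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()

def post_prompt(prompt: str) -> str:
    """POST one prompt; return the raw response body, or the error text."""
    req = request.Request(
        URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    try:
        with request.urlopen(req, timeout=30) as resp:
            return resp.read().decode()
    except (error.URLError, OSError) as exc:
        return f"request failed: {exc}"

if __name__ == "__main__":
    # Fire two requests concurrently, as the two devices would.
    with ThreadPoolExecutor(max_workers=2) as pool:
        prompts = ["Hello from device 1", "Hello from device 2"]
        for result in pool.map(post_prompt, prompts):
            print(result[:200])
```

If the container segfaults only when the two requests overlap, that points at a concurrency issue in the LLM backend rather than at the request payload itself.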
My PC specs: CPU: Intel i9 (14th gen), GPU: NVIDIA RTX 4090, RAM: 64GB DDR5