Open QuantumNtangled opened 3 weeks ago
Instead of `settings.yaml`, try putting those in `settings-ollama.yaml`:
```yaml
vectorstore:
  database: qdrant

nodestore:
  database: postgres

qdrant:
  url: "myinstance1.us-east4-0.gcp.cloud.qdrant.io:6333"
  api_key: "yB5H0osupersecret"
```
I am getting further by moving it there. Now I'm hitting a max optimization threads issue with the Qdrant client:

```
qdrant_client\http\api_client.py", line 96, in send
    raise ResponseHandlingException(e)
qdrant_client.http.exceptions.ResponseHandlingException: 1 validation error for ParsingModel[InlineResponse2005] (for parse_as_type)
obj.result.config.optimizer_config.max_optimization_threads
  Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.5/v/int_type
make: *** [Makefile:36: run] Error 1
```
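For context, this is a pydantic validation failure: the Qdrant cloud API returns `null` for `max_optimization_threads`, while the response model in the installed `qdrant-client` declares the field as a required integer (newer client releases relaxed this, so upgrading `qdrant-client` may be worth trying). A minimal stdlib sketch of the mismatch, illustrative only and not the client's actual code:

```python
import json

def parse_optimizer_config(payload: str) -> int:
    """Toy stand-in for the client's response model: requires an int."""
    cfg = json.loads(payload)["result"]["config"]["optimizer_config"]
    value = cfg["max_optimization_threads"]
    if not isinstance(value, int):
        # Mirrors pydantic's int_type error from the traceback above
        raise TypeError(f"Input should be a valid integer, got {value!r}")
    return value

# Qdrant cloud sends null here, which a strict int field rejects
cloud_response = '{"result": {"config": {"optimizer_config": {"max_optimization_threads": null}}}}'
try:
    parse_optimizer_config(cloud_response)
except TypeError as exc:
    print(exc)  # Input should be a valid integer, got None
```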
I am going to experiment with different configurations. I set the thread checker to false in `settings.py` with no resolution; I've flipped it back and am trying a few more things.
I've just pulled the current code to build a fresh server, and it breaks the Qdrant external server integration.
```
qdrant_client.http.exceptions.ResponseHandlingException: 1 validation error for ParsingModel[InlineResponse2005] (for parse_as_type)
obj.result.config.optimizer_config.max_optimization_threads
  Input should be a valid integer [type=int_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.5/v/int_type
make: *** [Makefile:36: run] Error 1
```
Any updates on this? Many thanks in advance!
When running PrivateGPT with the Ollama profile set up for Qdrant cloud, it cannot resolve the cloud REST address.
settings.yaml:

```yaml
vectorstore:
  database: qdrant

nodestore:
  database: postgres

qdrant:
  url: "myinstance1.us-east4-0.gcp.cloud.qdrant.io:6333"
  api_key: "yB5H0osupersecret"
  collection_name: "make_this_parameterizable_per_api_call"  # added because I was getting a parameter error without it
```
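One thing worth double-checking (an assumption on my part, since the resolution error itself isn't shown): `qdrant-client` accepts a full URL including the scheme, so spelling out `https://` explicitly can rule out scheme-guessing problems when connecting to Qdrant cloud:

```yaml
qdrant:
  url: "https://myinstance1.us-east4-0.gcp.cloud.qdrant.io:6333"
  api_key: "yB5H0osupersecret"
  collection_name: "make_this_parameterizable_per_api_call"
```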
I get the following error:
```
20:38:20.924 [INFO    ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'ollama']
20:38:28.380 [INFO    ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=ollama
Traceback (most recent call last):
  File "C:\Romanzo\env1\Lib\site-packages\injector\__init__.py", line 798, in get
    return self._context[key]
```
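To isolate whether this is a PrivateGPT wiring problem or a genuine DNS failure, a quick stdlib check can confirm whether the Qdrant cloud hostname resolves at all from the machine running the server (the hostname below is the placeholder from this thread; substitute your real instance):

```python
import socket

def resolves(host: str, port: int = 6333) -> bool:
    """Return True if the host resolves via DNS, False on a lookup failure."""
    try:
        socket.getaddrinfo(host, port)
        return True
    except socket.gaierror:
        return False

# Placeholder hostname from the thread; a False here would confirm the
# "cannot resolve" symptom is DNS-level rather than a client-library bug.
print(resolves("myinstance1.us-east4-0.gcp.cloud.qdrant.io"))
```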