Critical: OpenAI API Conflict Blocking Local LLM Functionality
Summary
Our local LLM setup is non-functional due to an unexpected OpenAI API dependency. This issue is blocking multiple critical features, including PDF embedding and RAG QA processes. Resolving this conflict is crucial for enabling local AI capabilities across our platform.
Description
When attempting to use our local LLM for various tasks (e.g., PDF embedding, RAG QA), the system erroneously tries to connect to OpenAI's API, resulting in authentication or connection errors. This prevents the use of our local AI infrastructure and poses potential privacy and cost concerns.
Impact
PDF embedding fails, breaking document processing pipelines
RAG QA functionality is non-operational
Potential exposure of sensitive data to external APIs
Increased costs due to unnecessary API calls
Inability to leverage local AI capabilities for offline or secure environments
Error Examples
PDF Embedding:
RetryError[<Future at 0x7a1b55f39000 state=finished raised AuthenticationError>]
RAG QA:
tenacity.RetryError: RetryError[<Future at 0x767accd53370 state=finished raised APIConnectionError>]
Root Cause
The system is configured to use OpenAI's API for embeddings instead of the local LLM. This misconfiguration is likely present in multiple components of our AI pipeline.
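As a hedged illustration only (the function and config key names are hypothetical, not our actual code), the suspected failure mode is an embedding client that silently falls back to OpenAI's public endpoint whenever no local base URL is configured:

```python
import os

# Public endpoint the OpenAI-style clients default to when nothing else is set.
DEFAULT_OPENAI_URL = "https://api.openai.com/v1"

def resolve_embedding_base_url(config: dict) -> str:
    """Return the base URL the embedding client would actually use.

    Sketch of the suspected resolution order: explicit app config wins,
    then an environment override, then a silent external fallback that
    produces the AuthenticationError/APIConnectionError seen above.
    """
    return (
        config.get("embedding_base_url")       # explicit local config
        or os.environ.get("OPENAI_BASE_URL")   # stray env override
        or DEFAULT_OPENAI_URL                  # silent external fallback
    )
```

If no local URL is set anywhere in that chain, every embedding call goes out to api.openai.com, which matches the errors above.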
Steps to Reproduce
Attempt any operation requiring local LLM (e.g., PDF embedding, RAG QA)
Observe the operation fail with OpenAI-related errors
Suggested Fixes
Review and update all embedding configurations to use local LLM
Remove or comment out any hardcoded references to OpenAI API
Check for and remove any conflicting OpenAI environment variables
Verify local LLM service is properly configured and accessible
Update middleware and pipeline configurations to ensure consistent use of local LLM
Implement a check to prevent accidental use of external APIs in local mode
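The last suggested fix (a guard against accidental external API use in local mode) could look roughly like this; the function name and the loopback/private-address policy are assumptions for illustration, not an existing API:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def assert_local_endpoint(base_url: str) -> None:
    """Raise if the configured endpoint is not a loopback or private address.

    Intended to run once at startup in local mode, before any embedding
    or QA request is issued, so a stray OpenAI URL fails fast and loudly.
    """
    host = urlparse(base_url).hostname or ""
    try:
        ip = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError) as exc:
        raise ValueError(f"Cannot resolve endpoint host: {host!r}") from exc
    if not (ip.is_loopback or ip.is_private):
        raise ValueError(f"Refusing external endpoint in local mode: {base_url}")
```

Wiring this into pipeline startup would have turned the retry storms above into a single, clearly-worded configuration error.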
Priority and Next Steps
This issue should be treated as high priority. Resolving it will unblock multiple AI use cases and ensure the proper functioning of our fully local RAG QA pipeline.
Additional Notes
Fixing this issue is critical for our transition to local AI infrastructure. It will enhance data privacy, reduce operational costs, and enable AI capabilities in offline or secure environments. This resolution will benefit multiple teams and projects relying on our local AI capabilities.