Summary
I implemented a 5-second timeout for fetching AI provider model lists to improve reliability and performance. This change affects the OllamaClient and ModelService, preventing model fetching operations from hanging indefinitely.
The UI currently waits for all model fetching to finish before loading the main chat, showing only a blank screen in the meantime; improving that UX should be considered as a follow-up.
Added a 5-second timeout to the Axios GET request in OllamaClient when fetching model tags.
Implemented a 5-second timeout for the fetchModels function in ModelService.
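The two timeouts above can be sketched roughly as follows. This is an illustration only: the function names, the request-building shape, and the `Promise.race` wrapper are assumptions, not the project's actual code (Ollama does expose a GET `/api/tags` endpoint for listing local models).

```javascript
// Sketch of the 5-second timeout pattern; names are assumptions.
const MODEL_FETCH_TIMEOUT_MS = 5000;

// OllamaClient side: axios aborts the request via its `timeout` option.
function buildModelTagsRequest(baseURL) {
  return {
    method: 'get',
    url: `${baseURL}/api/tags`,
    timeout: MODEL_FETCH_TIMEOUT_MS, // axios rejects after 5s with a timeout error
  };
}

// ModelService side: wrap an arbitrary fetch promise so it cannot hang forever.
function withTimeout(promise, ms = MODEL_FETCH_TIMEOUT_MS) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`fetchModels timed out after ${ms}ms`)),
      ms
    );
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

Using the axios `timeout` option keeps cancellation inside the HTTP layer, while the `Promise.race` wrapper also bounds work that happens outside the request itself.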
Other Changes
Refactored the logAxiosError utility into a new axios.js file, exporting it as part of an axiosHelpers object.
Updated import statements and module exports to reflect the new file structure.
Adjusted the fetchOpenAIModels function to pass baseURL as the 'name' property to fetchModels, so it is easier to identify which endpoint is being fetched from.
Made minor adjustments to comments and type definitions for clarity.
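The extracted helper module might look roughly like this. It is a sketch: the exact log wording and the argument order of logAxiosError are assumptions, though the branching follows axios's standard error shape (response / request / setup).

```javascript
// axios.js -- sketch of the extracted helper module (names assumed).
function logAxiosError(context, error) {
  if (error.response) {
    // server replied with a non-2xx status
    console.error(`${context}: HTTP ${error.response.status}`);
  } else if (error.request) {
    // request was sent but no response arrived (e.g. a timeout)
    console.error(`${context}: no response received`);
  } else {
    // request never left (config or setup problem)
    console.error(`${context}: ${error.message}`);
  }
}

const axiosHelpers = { logAxiosError };

// CommonJS export, guarded so the sketch also runs in ESM contexts.
if (typeof module !== 'undefined') module.exports = { axiosHelpers };
```

Callers would then import it along the lines of `const { axiosHelpers } = require('./axios')` and invoke `axiosHelpers.logAxiosError(...)`.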
Testing
To test these changes:
Ensure that all AI provider endpoints (OpenAI, Ollama, etc.) are properly configured.
Attempt to fetch model lists from various providers.
Verify that requests time out after 5 seconds if no response is received.
Check that error handling correctly captures and logs timeout errors.
Test Configuration:
Ensure you have a stable internet connection.
Configure multiple AI providers in your environment.
If possible, simulate slow network conditions to trigger the timeout.
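When checking the error handling above, note that axios reports a client-side timeout with a `timeout of 5000ms exceeded` message and a `code` of `'ECONNABORTED'` (pre-1.x) or `'ETIMEDOUT'` (axios 1.x). A small predicate for the manual test, as a sketch (the project's actual error handling may differ):

```javascript
// Sketch: recognizes an axios request-timeout error by its code,
// covering both the pre-1.x and the 1.x conventions.
function isTimeoutError(error) {
  return error.code === 'ECONNABORTED' || error.code === 'ETIMEDOUT';
}
```

To trigger it deliberately, point a client at an address that drops packets (e.g. a firewalled host) and confirm the logged error matches this shape.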
Checklist
[x] My code adheres to this project's style guidelines
[x] I have performed a self-review of my own code
[x] I have commented complex areas of my code
[x] My changes do not introduce new warnings
[x] I have tested the timeout functionality with various providers