-
### Do you need to file an issue?
- [ ] I have searched the existing issues and this feature is not already filed.
- [ ] My model is hosted on OpenAI or Azure. If not, please look at the "model provi…
-
### Is there an existing issue for the same bug?
- [X] I have checked the existing issues.
### Branch name
dev
### Commit ID
152072f
### Other environment information
```markdown
ASUS ROG Strix…
-
### Do you need to file an issue?
- [ ] I have searched the existing issues and this bug is not already filed.
- [ ] My model is hosted on OpenAI or Azure. If not, please look at the "model providers…
-
### Is there an existing issue for this?
- [x] I have searched the existing issues
- [x] I have checked [#657](https://github.com/microsoft/graphrag/issues/657) to validate if my issue is covered by …
-
Sys - WIN10
---------
Traceback (most recent call last):
File "C:\Users\LLM\fast_api\installer_files\env\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_glo…
-
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 187, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/li…
-
**Describe the bug**
Despite setting `tokens_per_minute` and `max_retries` in `pipeline-settings.yaml`, the pipeline keeps issuing API calls every second, even after the LLM endpoint starts rate-limiting requests.
**…
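For reference, GraphRAG's throttling options normally live under the `llm` block of the settings file. The sketch below is based on the documented defaults; exact key names, values, and placement may differ between versions, and if the pipeline reads a differently named file (e.g. `pipeline-settings.yaml` here), the keys may need to sit under whatever section that file uses for LLM configuration:

```yaml
llm:
  type: openai_chat            # or azure_openai_chat
  model: gpt-4-turbo-preview
  tokens_per_minute: 150000    # throttle token usage per minute
  requests_per_minute: 10000   # throttle request count per minute
  max_retries: 10              # retry budget for rate-limit errors
  max_retry_wait: 10.0         # seconds to back off between retries
```

If these keys are set but calls still fire every second, it is worth checking that the file is actually the one the pipeline loads (note the `Reading settings from …` line in the startup log) and that the keys are nested under the `llm` block rather than at the top level.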
-
### Do you need to file an issue?
- [X] I have searched the existing issues and this bug is not already filed.
- [x] My model is hosted on OpenAI or Azure. If not, please look at the "model providers…
-
I'd like to use LLMLingua with .NET and Java.
I'd also like to use GraphRAG,
and LlamaIndex as well.
What if we enabled an easy way to containerize those SDKs or frameworks on the…
-
```
🚀 Reading settings from ragtest\settings.yaml
Traceback (most recent call last):
File "f:\GraphRAG-Ollama-UI\Miniconda3\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, m…