-
### Do you need to file an issue?
- [x] I have searched the existing issues and this bug is not already filed.
- [x] My model is hosted on OpenAI or Azure. If not, please look at the "model provid…
-
```console
❯ ./convert.rb
Skipping (unsupported file extension): Caching
Skipping (unsupported file extension): Tech Projects
Skipping (unsupported file extension): Docker
Skipping (unsupported f…
-
# Template
## Thoughts
## Summary
## Things I didn't know
## Things I explored further
## Questions
-
#### Expected Behavior
Able to download options data with IQFeed
#### Actual Behavior
Runtime exception looking up IQFeed Symbols
```
20240719 00:41:18.959 ERROR:: IQFeedFileHistoryProvider.P…
-
settings.yaml
Configure the LLM as llama3 on Groq, or any other model compatible with the OpenAI API.
```yaml
llm:
api_key: ${GRAPHRAG_API_KEY}
type: openai_chat # or azure_openai_chat
model: llama…
-
When I use the documented burst setting in resty.limit.req, a higher number of requests than expected are rejected (503). For example, when I set the burst to 10 like this:
```
worker_processes 1;
er…
-
I'm running into issues pushing/pulling large files (over 10MB) to and from traditional GitHub repositories (our `news` and `courses` scrapers are the culprits here). Rather than dealing with the [Git Database A…
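As a stopgap on our side (a sketch only; the 10MB threshold comes from the issue above, but the function name and directory layout are assumptions, not the repo's actual tooling), a pre-push check could flag files over the size limit before they ever hit the remote:

```python
import os

LIMIT = 10 * 1024 * 1024  # 10 MB, matching the threshold mentioned above

def oversized_files(root="."):
    """Yield (path, size) for files under `root` larger than LIMIT."""
    for dirpath, _dirnames, filenames in os.walk(root):
        # Skip Git's own object store; packfiles can legitimately be large.
        if ".git" in dirpath.split(os.sep):
            continue
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > LIMIT:
                yield path, size
```

Running this before a push would at least tell us which scraper outputs are about to trip the limit.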
-
With the increasing number of channels, the synchronous calls to all methods cause a larger and larger delay. Before that delay exceeds a humanly tolerable span, we should consider an asynchronous message queue.
The …
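A minimal sketch of that direction, assuming Python's asyncio (the `worker` loop and the `processed` list are hypothetical stand-ins, not existing code): producers enqueue per-channel messages without waiting, and a single consumer drains the queue.

```python
import asyncio

processed = []  # stands in for the real per-channel dispatch

async def worker(queue: asyncio.Queue) -> None:
    # Drain messages one at a time so producers never block on a slow channel.
    while True:
        msg = await queue.get()
        processed.append(msg)  # real code would dispatch to the channel here
        queue.task_done()

async def main() -> None:
    queue = asyncio.Queue()
    consumer = asyncio.create_task(worker(queue))
    for i in range(100):             # stand-in for notifications to 100 channels
        queue.put_nowait(f"msg-{i}")  # enqueue is immediate, no per-channel wait
    await queue.join()               # block only once, until all are handled
    consumer.cancel()

asyncio.run(main())
```

The key property is that enqueueing is O(1) regardless of channel count; only the single `join` waits, so total latency no longer grows with each synchronous call.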