-
### Describe the issue
I ran into an error while using the official OpenAI models with an API key and a proxy.
### Steps to reproduce
Access the official OpenAI models through a proxy, using a personal API key, with all other settings left at their defaults.
### GraphRAG Config Used
```
encoding_model: cl100k_base
skip_workfl…
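If it helps to isolate whether the proxy itself is the problem: the OpenAI Python SDK that GraphRAG uses honors the standard proxy environment variables, so exporting them before indexing routes API traffic through the proxy. A minimal sketch — the proxy address and key below are placeholders for your own setup, not values from this report:

```shell
# Placeholder local proxy address; replace with your actual proxy.
export HTTPS_PROXY=http://127.0.0.1:7890
export HTTP_PROXY=http://127.0.0.1:7890
# Placeholder personal key.
export GRAPHRAG_API_KEY=sk-your-personal-key
# Then run the indexer as usual, e.g.:
# python -m graphrag.index --root ./ragtest
```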
-
## 1. Title
AWS::CloudFormation::Type
## 2. Scope of request
AWS::CloudFormation::Type - can create resource via API, but not via CloudFormation
## 3. Expected behavior
In Create, the Regis…
-
Is there any way to schedule currently over-limit requests?
e.g. I send a request that I want to happen eventually, but the rate limit has currently been reached.
The service could add the request…
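One way such deferred scheduling could look client-side — a minimal illustration of the idea, not an existing service feature, and every name here is hypothetical:

```python
import time
from collections import deque

class DeferredLimiter:
    """Sketch of a rate limiter that queues over-limit requests.

    Allows up to `capacity` sends per `window` seconds; anything beyond
    that is held in `pending` and retried on the next call to `drain()`.
    """

    def __init__(self, capacity, window):
        self.capacity = capacity
        self.window = window
        self.sent = deque()      # timestamps of recent sends
        self.pending = deque()   # deferred request callables

    def submit(self, request):
        """Send now if under the limit, otherwise queue for later."""
        if self._try_send(request):
            return True   # sent immediately
        self.pending.append(request)
        return False      # queued

    def drain(self):
        """Send as many queued requests as the limit currently allows."""
        while self.pending and self._try_send(self.pending[0]):
            self.pending.popleft()

    def _try_send(self, request):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] > self.window:
            self.sent.popleft()
        if len(self.sent) >= self.capacity:
            return False
        self.sent.append(now)
        request()
        return True
```

With capacity 2 per window, submitting three requests sends the first two immediately and leaves the third queued until `drain()` finds room.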
-
### Describe the issue
Just tried to load in a simple txt file.
In the terminal it goes until:
⠼ GraphRAG Indexer
├── Loading Input (text) - 1 files loaded (0 filtered) ━━━━━━━━━━━━━━━━━━━━━━━━…
-
This is my `settings.yaml`:
```
llm:
api_key: EMPT
api_base: http://0.0.0.0:9997/v1
type: openai_chat # or azure_openai_chat
model: chatglm3-6b # gpt-4-turbo-preview
…
-
### Describe the issue
Used vLLM to launch a local large model with an OpenAI-style API, but it doesn't work.
### Steps to reproduce
Step 1: `python -m vllm.entrypoints.openai.api_server --max-model-len 614…`
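For reference, a sketch of the matching `llm` block in `settings.yaml` for a local vLLM endpoint — the model name and port here are placeholders and must match what vLLM actually serves:

```yaml
llm:
  api_key: none                          # vLLM ignores the key, but the field is required
  type: openai_chat
  api_base: http://127.0.0.1:8000/v1     # placeholder port; match the vLLM server
  model: my-local-model                  # placeholder; must match the served model name
```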
-
# Context
* Version of iperf3: 3.9 (rhel), 3.14 (ubuntu), 3.8.1 (darwin)
* Hardware: various
* Operating system (and distribution, if any): various
* Other relevant information: from package man…
-
See #59 and #113. In short, things like this:
``` js
var client = knox.createClient({
key: 'ABCGARBAGE',
secret: 'UTTERLYSHOULDNEVERWORK',
bucket: 'mybucket'
});
var putReq = client.putF…
-
### Describe the bug
{"type": "error", "data": "Error executing verb \"cluster_graph\" in create_base_entity_graph: Columns must be same length as key", "stack": "Traceback (most recent call last):\n…
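The "Columns must be same length as key" message is a pandas `ValueError`, raised when a multi-column assignment receives a different number of data columns than the key names. A minimal standalone reproduction — illustrative only, not GraphRAG's actual data:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})
try:
    # Assigning 3 columns of data to a 2-column key raises the error above.
    df[["x", "y"]] = pd.DataFrame([[1, 2, 3], [4, 5, 6]])
    msg = None
except ValueError as e:
    msg = str(e)
print(msg)
```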
-
The configuration of `tokens_per_minutes` in `settings.yaml` seems not to be respected by the indexing engine. I've tried setting it to both `50000` and `50_000` (as per the commented example) but I see…
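One thing worth double-checking is the exact key name: if I recall GraphRAG's settings schema correctly, it spells the key in the singular, and an unrecognized key is silently ignored rather than rejected. A sketch, assuming the default schema:

```yaml
llm:
  # note the singular "minute"; a misspelled key is silently ignored
  tokens_per_minute: 50000
  requests_per_minute: 100
```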