-
We currently use different embedding models and tokenizers for our builtin source storages:
- Chroma:
https://github.com/Quansight/ragna/blob/e85129752682e38dc7f2ef9622446f3ba5a168e9/ragna/sour…
-
Trying to use the new gpt-3.5-turbo model causes an error:
```
llm = OpenAI(model_name="gpt-3.5-turbo")
# ...
llm.run(...)
InvalidRequestError: Invalid URL (POST /v1/completions)
```
I also…
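The `InvalidRequestError: Invalid URL (POST /v1/completions)` arises because `gpt-3.5-turbo` is a chat model served from `/v1/chat/completions`, while the completion-style `OpenAI` wrapper posts to the legacy `/v1/completions` endpoint. A minimal sketch of routing a model name to the right endpoint (the `endpoint_for` helper and the model-prefix list are illustrative, not part of any library):

```python
# Sketch: route a model name to the correct OpenAI API path.
# Chat-era models (gpt-3.5-turbo, gpt-4, ...) must use /v1/chat/completions;
# legacy completion models (text-davinci-003, ...) use /v1/completions.
# The prefix list below is illustrative, not exhaustive.
CHAT_MODEL_PREFIXES = ("gpt-3.5-turbo", "gpt-4")

def endpoint_for(model_name: str) -> str:
    """Return the API path a given model name should be POSTed to."""
    if model_name.startswith(CHAT_MODEL_PREFIXES):
        return "/v1/chat/completions"
    return "/v1/completions"
```

In LangChain specifically, the usual fix is to construct the chat wrapper (`ChatOpenAI`) rather than the completion wrapper (`OpenAI`) for chat models.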
-
**Describe the bug**
I'm using the API; it successfully shows the models and uses 3.5 Turbo most of the time.
When I select GPT-4 above the comment box it does not use the GPT-4 model.
I confirmed thi…
m0rg5 updated 10 months ago
-
Hello, how do I configure an overseas proxy for this?
-
I use the command:
alpaca_eval --model_outputs '/home/zhoudong/repos/alpaca_eval/alpaca_data/Infini-Megrez-7b-20231114-v2.json' --annotators_config 'text_davinci_003' --reference_outputs '/home/zho…
-
Hi!
Asked this on Discord as well, but was redirected here. I tried to find a solution but did not find anything matching. I'm trying to set up the GitHub Action version of PR-Agent to give review or desc…
-
When I ask the first question, the chat responds. But when I ask a second question, the chat avatar and the text box appear, yet the box remains blank; there is no response.
-
I have an idle server but don't know how to deploy this. Could an expert post a tutorial when you have time?
-
### Describe the bug
Hello,
While using function calling, we hit a bug that occurs occasionally, even with GPT-4.
Instead of returning JSON with the key "function_call", as it should, GPT ret…
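Since the model can occasionally answer with plain `content` instead of a `function_call` field, callers should not assume the key is present. A minimal defensive sketch (the field names follow the OpenAI chat response shape; the fallback behavior and return format are assumptions for illustration):

```python
import json

def handle_message(message: dict):
    """Dispatch on an assistant message that may or may not contain a function call."""
    fc = message.get("function_call")
    if fc is not None:
        # Arguments arrive as a JSON string, and that string can itself
        # be malformed, so parse defensively too.
        try:
            args = json.loads(fc.get("arguments", "{}"))
        except json.JSONDecodeError:
            return ("error", "model returned unparseable function arguments")
        return ("call", fc.get("name"), args)
    # Fallback: the model answered in plain text instead of calling the function.
    return ("text", message.get("content", ""))
```

Retrying the request (or re-prompting with the function schema restated) when the fallback branch fires is one common mitigation for this intermittent behavior.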
-
Hello! I'm using docker on macOS to spin up the app. I changed the model on line 26 of config.py to `self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt3.5-turbo")`. I restarted the docker contai…
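One thing worth noting in the snippet above: the default passed to `os.getenv` is `"gpt3.5-turbo"` (no hyphen after "gpt"), which the OpenAI API would reject as an unknown model. Validating the configured name at startup surfaces such typos immediately instead of failing on the first request. A sketch, assuming a hypothetical `load_smart_llm_model` helper and an illustrative allow-list:

```python
import os

# Illustrative allow-list; extend with whatever models your deployment supports.
KNOWN_MODELS = {"gpt-3.5-turbo", "gpt-4"}

def load_smart_llm_model(default: str = "gpt-3.5-turbo") -> str:
    """Read SMART_LLM_MODEL from the environment and fail fast on typos."""
    model = os.getenv("SMART_LLM_MODEL", default)
    if model not in KNOWN_MODELS:
        raise ValueError(
            f"Unknown model name {model!r}; expected one of {sorted(KNOWN_MODELS)}"
        )
    return model
```

Also remember that a config change baked into the image only takes effect after the container is rebuilt, not merely restarted.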