-
### What happened?
I'm using the litellm proxy, and when calling `o1-mini` it returns an error.
```python
import litellm
response = litellm.completion(
    api_key="sk-xxx",
    base_url="https:…
-
### Self Checks
- [X] This is only for bug report, if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I have s…
-
### Validations
- [X] I believe this is a way to improve. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](https://githu…
-
**Bug description**
Using the latest model o1-mini, the API returns an error:
API Error: Status Code 400, {"error":{"message":"Unsupported value: 'messages[0].role' does not support 'system' with this model. (request id: 20240913101345686…
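The 400 above means the o1-mini endpoint rejects messages with the `system` role. A common workaround is to resend system instructions as a `user` message before calling the API; a minimal sketch (the helper name `adapt_messages_for_o1` is mine, not part of any library):

```python
def adapt_messages_for_o1(messages):
    """Rewrite chat messages so models that reject the 'system' role
    (e.g. o1-mini) still receive the instructions as user content."""
    adapted = []
    for m in messages:
        if m.get("role") == "system":
            # Fold the system prompt into a plain user message.
            adapted.append({"role": "user", "content": m["content"]})
        else:
            adapted.append(m)
    return adapted

messages = [
    {"role": "system", "content": "Answer briefly."},
    {"role": "user", "content": "What is 2+2?"},
]
print(adapt_messages_for_o1(messages))
# First message now carries role "user", so o1-mini accepts it.
```

The adapted list can then be passed to the completion call in place of the original `messages`.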
-
**Describe the bug**
I tried setting up my evaluations as [instructed here](https://www.promptfoo.dev/docs/guides/gpt-vs-o1/) and have been getting no output from `o1-preview`. My `config.yaml` look…
-
Returns an error when using the o1-mini model, no such error with 4o-mini
https://pastebin.com/VgZyjjF6
-
If build_stubdom=true is passed to makepkg, this occurs
in directory .../src/xen/stubdom/newlib-x86_64/x86_64-xen-elf/newlib/libc/reent
```
gcc -isystem /home/sam/Code/saur/xen/src/xen/stubdom…
-
Hello,
When I look at [LiveCodeBench](https://livecodebench.github.io/leaderboard.html), another independent benchmark of LLMs for code, I see that o1-mini is significantly ahead of Claude …
-
### Issue
https://openai.com/index/introducing-openai-o1-preview/
### Version and model info
_No response_