-
### ⚠️ Search for existing issues first ⚠️
- [X] I have searched the existing issues, and there is no existing issue for my problem
### Which Operating System are you using?
Linux
### Whic…
-
**Describe the bug**
Using the latest version, for some reason, I cannot use my LocalAI endpoints at all.
Having first carried over a configuration from an older version and then completely reset se…
-
I tried #26, but the gguf model type wasn't picked up by llm until I registered a model with `llm llama-cpp add-model`. I'm not sure if this is working as intended - I expected that gguf would appear …
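For reference, the registration step that made the model show up looked roughly like this. This is a hedged sketch: `./mistral-7b.Q4_K_M.gguf` and the `mistral` alias are hypothetical placeholders, and it assumes the llm-llama-cpp plugin is installed.

```shell
# Install the plugin that provides the llama-cpp subcommands (assumption:
# not already installed).
llm install llm-llama-cpp

# Register a local gguf file; the path and alias below are placeholders.
llm llama-cpp add-model ./mistral-7b.Q4_K_M.gguf --alias mistral

# The registered gguf model should now appear in the model list.
llm models

# And can be invoked by its alias.
llm -m mistral "Say hello"
```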
-
Hello, I've tried to use more recent models, but they don't seem to work.
Any hints?
```
data = {
    "model": "gpt-4-1106",
    "messages": messages,
    "max_tokens": 300,
    …
```
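One likely cause is the model name: `gpt-4-1106` is not a valid OpenAI model id, while the dated preview model is `gpt-4-1106-preview` (assumption: that is the model intended here). A minimal sketch of the corrected request, which only performs the network call when an API key is configured:

```python
import json
import os
import urllib.request

# Hypothetical conversation; replace with your own messages.
messages = [{"role": "user", "content": "Hello!"}]

data = {
    "model": "gpt-4-1106-preview",  # note the "-preview" suffix
    "messages": messages,
    "max_tokens": 300,
}

api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    # Send the payload to the Chat Completions endpoint.
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(data).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
else:
    # No key configured: just show the payload that would be sent.
    print(json.dumps(data, indent=2))
```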
-
### Bug Description
I recently conducted a few experiments using RaptorPack and everything looks fine. The only drawback is that the token counter is not working for the RaptorPack, so I cannot ge…
-
I'm getting the error `Error: 404 The model 'gpt-4-1106-preview' does not exist or you do not have access to it. Learn more: https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4.` after addi…
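A 404 on a dated model usually means the account simply doesn't have access to it yet. One way to check before calling it is to compare against the ids returned by the `GET /v1/models` endpoint. A minimal sketch; `has_model` is a hypothetical helper name, and the example id list stands in for a real API response:

```python
def has_model(model_ids, wanted):
    """Return True if `wanted` appears in the account's model id list."""
    return wanted in set(model_ids)

# With the official client the id list would come from (not run here):
#   from openai import OpenAI
#   ids = [m.id for m in OpenAI().models.list()]
ids = ["gpt-3.5-turbo", "gpt-4"]  # example account without preview access
print(has_model(ids, "gpt-4-1106-preview"))  # → False: a call would 404
```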
-
**Bug description**
The website is down. I am getting a 404 when trying to visit the root of the site.
**Bug solved method**
**Environment information**
- LLM type and model name:
- System ve…
-
Some of the new Llama 3 models have extended RoPE max-token limits, which have to be sent with the original HTTP request in order to be adjusted.
![image](https://github.com/geekan/MetaGPT/assets/6347922/f3190…
-
I installed llm with no problem, assigned my OpenAI key, and am able to talk to GPT-4 without issue; see the output of my `llm models` command:
OpenAI Chat: gpt-3.5-turbo (aliases: 3.5, chatgpt)
OpenA…
-
- A flaw in the cleaning process?
- → Nishihara-san
- Insufficient token count, or a performance difference between gpt-4-1106-preview and gpt-4?
- → Nishihara-san