-
### ⚠️ Check for existing issues before proceeding. ⚠️
- [X] I have searched the existing issues, and there is no existing issue for my problem
### Where are you using SuperAGI?
Linux
### …
-
### When running the llama-2-7b-chat-hf model with the OpenAI API for gsm8k (Mathematical Ability Test), the temperature needs to be set to 0.0
But I get an unexpected error like
> lm_eval --model local-chat-com…
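For reference, what is being asked for amounts to pinning `temperature=0.0` on every chat-completions request so that gsm8k scoring is deterministic. The sketch below only shows that request shape against a local OpenAI-compatible server; the `base_url`, API key, and question are placeholders, not values from this report, and this is not the reporter's actual `lm_eval` invocation.

```python
# Minimal sketch, assuming a local OpenAI-compatible server: one chat-completions
# call with the temperature pinned to 0.0. base_url, api_key, model name, and the
# prompt are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="llama-2-7b-chat-hf",
    messages=[{"role": "user", "content": "If 3 pens cost 6 dollars, how much do 7 pens cost?"}],
    temperature=0.0,  # greedy decoding so gsm8k answers are reproducible
    max_tokens=256,
)
print(response.choices[0].message.content)
```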
-
I followed the step-by-step instructions, but upon running `make run` to check that everything worked, I get the following error message:
File "/home/ubuntu/.cache/pypoetry/virtualenvs/private-gpt-Wtvj2B-w-…
-
Since we've worked on typia, I'll take this one.
-
`llama.onnx` is primarily for understanding LLMs and converting them to ONNX for NPU deployment.
If you are looking for inference on NVIDIA GPUs, we have released lmdeploy at https://github.com/InternLM/lmdeploy.
…
-
# TL;DR
Currently org-ai uses the OpenAI API. In some cases someone might want to use a different API or even a local LLM.
# Context
There are many interesting open-source models that can be deployed on cons…
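To make the request concrete, here is a rough illustration of the kind of backend it has in mind: a locally hosted model served by Ollama and reached over its REST API instead of api.openai.com. org-ai itself is Emacs Lisp, so this Python sketch only shows the shape of the exchange; the model name and prompt are placeholders, not part of any proposed org-ai design.

```python
# Hypothetical sketch of talking to a local LLM via Ollama's /api/chat endpoint.
# Assumes Ollama is running locally and a model has been pulled (e.g. `ollama pull llama3`).
import requests

payload = {
    "model": "llama3",  # placeholder local model name
    "messages": [{"role": "user", "content": "Summarize this org heading for me."}],
    "stream": False,    # ask for a single JSON response instead of a stream
}
resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```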
-
### Pre-check
- [X] I have searched the existing issues and none cover this bug.
### Description
When running the Docker instance of privateGPT with Ollama, I get an error saying: TypeError: missin…
-
## Problem
Many organizations have a wealth of internal knowledge contained within their own documents. However, accessing this knowledge can often be a cumbersome and time-consuming process. Users m…
-
### Feature request
We would like to implement fine-tuning.
This task involves considering the tradeoffs between various approaches to improving action completions and outcome evaluation via fin…
-
Argilla integration, dataset integration, etc.
Details to follow.