-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [X] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
When I try to import ChatMistralAI I get the following error:
```
/Users/Desktop/xamax/server/node_modules/@langchain/mistralai/dist/chat_models.cjs:8
const mistralai_1 = __importDefault(require(…
```
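For context, this is roughly how I am trying to use the import that triggers the error. It is only a minimal sketch, assuming `@langchain/mistralai` and `@langchain/core` are installed and that a `MISTRAL_API_KEY` environment variable is set; the model id is a placeholder, not something from the error above.
```ts
// Minimal sketch of importing and calling ChatMistralAI.
// Assumes MISTRAL_API_KEY is set; the model id below is a placeholder.
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  apiKey: process.env.MISTRAL_API_KEY, // assumed env var name
  model: "mistral-small-latest",       // placeholder model id
});

async function main() {
  // A single invocation is enough to confirm the module loads at all.
  const reply = await model.invoke("Say hello");
  console.log(reply.content);
}

main().catch(console.error);
```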
-
### Your current environment
```text
root@0fca177ad2d4:/workspace# python3 collect_env.py
Collecting environment information...
PyTorch version: 2.1.2
Is debug build: False
CUDA used to build…
```
-
My system has both an integrated and a dedicated GPU (an AMD Radeon 7900XTX). I see that ollama ignores the integrated card and detects the 7900XTX, but then it goes ahead and uses the CPU (a Ryzen 7900).
I'm…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as o…
-
I'm using the Vercel AI SDK with the Hugging Face Inference API and I really like it!
There's one major issue I have, though: occasionally the requests will either 'freeze' or result in a 500 error. So far…
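As a stopgap I have been wrapping the call in a client-side timeout and retry loop. This is only a sketch under the assumption that a hung request can simply be aborted and retried; it calls the public Inference API endpoint directly with `fetch` rather than going through the SDK, and the model id and `HF_TOKEN` env var name are my own placeholders.
```ts
// Hedged sketch: bound a frozen or 500-ing request with a timeout and retries.
// The URL, model id, and env var name are assumptions, not SDK APIs.
const HF_URL =
  "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.2";

async function queryWithRetry(prompt: string, retries = 3): Promise<unknown> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const res = await fetch(HF_URL, {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.HF_TOKEN}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ inputs: prompt }),
        signal: AbortSignal.timeout(30_000), // abort a frozen request after 30 s
      });
      if (res.ok) return res.json();
      lastError = new Error(`HF request failed with status ${res.status}`);
    } catch (err) {
      lastError = err; // timeouts and network errors land here
    }
    await new Promise((r) => setTimeout(r, 1_000 * attempt)); // simple backoff
  }
  throw lastError;
}
```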
-
Hi,
I'm trying to find the class `zcl_oai_01_dotenv`, which is referenced in the source code. I also cannot find it on GitHub: https://github.com/search?q=zcl_oai_01_dotenv&type=code
Are there any addit…
-
Hello,
First of all, thank you for your work on llamafile; it seems like a great idea to simplify model usage.
It seems from the readme that at this stage llamafile does not support AMD GPUs.
The…
-
Presently it is very hard to get a Docker container to build with the ROCm backend; some elements seem to fail independently during the build process.
There are other related projects with functiona…
-
Team, thank you so much for this wonderful toolkit! We are trying to test the vLLM setting with the mistralai/Mistral-7B-Instruct-v0.2 model with zero2.
![image](https://github.com/OpenLLMAI/OpenRLHF/a…