-
transformers version: `pip install transformers==4.34.0`
ctransformers version: `pip install ctransformers==0.2.27`
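A quick way to confirm which versions are actually active in the environment (the package names come from the pins above; anything else here is just a generic sketch):

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(pkg: str):
    """Return the installed version of `pkg`, or None if it is absent."""
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None

# Compare against the pinned versions above
for pkg in ("transformers", "ctransformers"):
    print(pkg, installed_version(pkg) or "not installed")
```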
I encounter the following error:
```
File ".venv\lib\site-packages\ctransforme…
-
We currently use `models2.json`: https://github.com/simonw/llm-gpt4all/blob/67079c00fa64cba4f163c4579c2c4aab2c91f45a/llm_gpt4all.py#L44-L49
Looks like they introduced `models3.json` two months ago:…
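A minimal sketch of what switching the fetch to `models3.json` could look like, assuming the new file keeps the same shape as `models2.json` (a JSON array of model records with a `filename` field) and lives at the same host — both assumptions should be verified against the real file:

```python
import json
from urllib.request import urlopen

# assumption: models3.json is served alongside models2.json at gpt4all.io
MODELS3_URL = "https://gpt4all.io/models/models3.json"

def fetch_models(url: str = MODELS3_URL):
    """Download and parse the model index (assumed: a JSON array of records)."""
    with urlopen(url) as resp:
        return json.load(resp)

# offline illustration of the assumed record shape
sample = '[{"name": "Example 7B", "filename": "example-7b.gguf"}]'
print([m["filename"] for m in json.loads(sample)])
```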
-
I installed llm with no problem, assigned my OpenAI key, and am able to talk to GPT-4 without issue. See the output of my `llm models` command:
OpenAI Chat: gpt-3.5-turbo (aliases: 3.5, chatgpt)
OpenA…
-
I don't understand how to use it, so I'll try to describe exactly what doesn't work.
I downloaded an Ollama model (I tried several different ones, including llama3), installed everything, everythin…
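For reference, a minimal sketch of calling a locally running Ollama server's `/api/generate` endpoint (this assumes Ollama's default port 11434; only the payload-building part runs without a live server):

```python
import json
from urllib.request import Request, urlopen

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> bytes:
    # stream=False asks for a single JSON response instead of a stream
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    req = Request(OLLAMA_URL, data=build_payload(model, prompt),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)["response"]
```

If `generate("llama3", "hello")` fails here, the problem is reachable from plain HTTP and isn't specific to any one client library.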
-
### System Info
GPT4ALL v2.6.2
Windows 11 Pro build 22631
Python 3.11.0
Any time I attempt to use a model with GPU enabled, the entire program crashes; it refuses to use my GPU. I've tried vario…
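Not a fix for the crash itself, but one way to structure loading so a failed GPU check degrades to CPU instead of taking the whole program down — a sketch, where `gpu_probe` is a hypothetical callable standing in for whatever GPU check your bindings expose:

```python
def choose_device(want_gpu: bool, gpu_probe) -> str:
    """Pick a device string, falling back to CPU when the GPU probe fails.

    `gpu_probe` is a hypothetical callable; any exception it raises is
    treated as "no usable GPU" rather than crashing the program.
    """
    if want_gpu:
        try:
            if gpu_probe():
                return "gpu"
        except Exception:
            pass
    return "cpu"

print(choose_device(True, lambda: False))  # → cpu
```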
-
A new set of 7B foundation models that claim to beat all 13B Llama 2 models on benchmarks.
https://huggingface.co/mistralai/Mistral-7B-v0.1
https://huggingface.co/mistralai/Mistral-7B-Instruct-v…
-
Hi, I read readme.md and found it a bit confusing. Could you give detailed instructions on how to install llama for Python?
-
As part of my automated scale test, I observe that the InferenceService sometimes reports as `Loaded`, but the call to GRPC endpoint returns with errors.
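One way to harden the test against that race is to poll until a real call succeeds rather than trusting the `Loaded` status alone. A sketch, with a generic `check` callable standing in for a cheap gRPC health probe:

```python
import time

def wait_until_ready(check, attempts: int = 10, delay: float = 0.5) -> bool:
    """Poll `check()` until it returns True or attempts run out.

    `check` stands in for a real readiness probe, e.g. a trivial gRPC call
    that must succeed before the scale test proceeds (the `Loaded` status
    alone is evidently not enough).
    """
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False
```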
Examples:
```
set -o pipefail;
i=0;…
-
When `genai-perf` is installed using `pip` from GitHub (as documented), on first run it tries to download several files from Hugging Face, like this:
```
$ docker run --rm -it --name test -u 0 gpu-tr…
-
I got https://huggingface.co/TheBloke/Llama-2-13B-GPTQ to work, but using exactly the same strategy for https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GPTQ, I get the following error:
````
…