-
Hi!
I'm not super familiar with using .gguf files in a local environment (or with running LLMs locally at all, for that matter), but I think I've done the setup correctly:
1. I downloaded wizardLM-7B.Q5_K_S.g…
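One quick sanity check on a downloaded .gguf is to read its fixed-size header (magic, version, tensor count, metadata KV count, per the GGUF spec) to confirm the file isn't truncated or mislabeled. A minimal sketch — the demo file and its header values below are synthetic, not from the actual WizardLM download:

```python
import struct

def read_gguf_header(path):
    """Read the fixed GGUF header: 4-byte magic, uint32 version,
    uint64 tensor count, uint64 metadata key/value count (little-endian)."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        version, tensor_count, kv_count = struct.unpack("<IQQ", f.read(20))
    return {"version": version, "tensor_count": tensor_count, "kv_count": kv_count}

# Demo with a synthetic header (hypothetical counts, not a real model):
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<IQQ", 3, 291, 24))

print(read_gguf_header("demo.gguf"))  # {'version': 3, 'tensor_count': 291, 'kv_count': 24}
```

If the magic check fails, the download is likely corrupt or the file is an older GGML-format model rather than GGUF.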
-
### System Info
macOS 12.6.2
MacBook Pro (16-inch, 2021)
Chip: Apple M1 Max
Memory: 32 GB
I have tried gpt4all versions 1.0.7 and 0.3.6
Python version 3.11
### Information
- [X] The official…
-
### Question
I tried to use the StarCoder model by bundling it with your ONNX script, but it failed with an exception.
Model: https://huggingface.co/HuggingFaceH4/starchat-beta
or
https://h…
-
Hello,
Can you please post an example of .env.local for:
WizardLM/WizardCoder-15B-V1.0
-
I want to use WizardLM with AutoGPT. Does anyone have an idea how to do it?
-
The base LLaMA 30B is around 61 GB, but the WizardLM delta is 122 GB. Any thoughts on this?
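One plausible explanation for the 2× size is the dtype: ~30B parameters at 2 bytes each (fp16) is ~60 GB, matching the base, while 4 bytes each (fp32) is ~120 GB, matching the delta — consistent with the delta having been exported in fp32. Applying a delta is just elementwise addition per tensor; a toy sketch with hypothetical tensor names (real checkpoints would use torch state dicts, not Python lists):

```python
def apply_delta(base, delta):
    """Reconstruct full weights: merged[name] = base[name] + delta[name]."""
    assert base.keys() == delta.keys()
    return {name: [b + d for b, d in zip(base[name], delta[name])]
            for name in base}

# Back-of-envelope size check (30B params; 2 bytes for fp16, 4 for fp32):
n_params = 30e9
print(n_params * 2 / 1e9, "GB at fp16")   # ~60 GB, close to the 61 GB base
print(n_params * 4 / 1e9, "GB at fp32")   # ~120 GB, close to the 122 GB delta

# Toy tensors (hypothetical name "w"):
base = {"w": [1.0, 2.0]}
delta = {"w": [0.5, -0.5]}
print(apply_delta(base, delta))  # {'w': [1.5, 1.5]}
```

Casting the delta to fp16 before (or after) merging would halve the disk footprint, at a small precision cost.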
-
What is the reason for this error when I run this file?
python /home/xxx/Project/WizardLM/WizardCoder/src/humaneval_gen.py
-
# Feature Description
Run [EAGLE](https://github.com/SafeAILab/EAGLE) models.
- blog: https://sites.google.com/view/eagle-llm
# Motivation
Users running with CPU could see impressive speed…
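For context, EAGLE is a speculative-decoding method: a small draft model proposes several tokens cheaply, and the full target model verifies them, so accepted tokens cost roughly one target step per batch of proposals. The sketch below is not EAGLE's actual algorithm — it is a generic greedy speculative-decoding toy, with callables standing in for the two models, and it verifies sequentially where a real implementation would verify all proposals in one batched forward pass:

```python
def speculative_decode(target, draft, prompt, k=4, rounds=8):
    """Greedy speculative decoding sketch: each round, the draft proposes k
    tokens; the target checks them one by one, keeping the agreeing prefix
    and substituting its own token at the first mismatch."""
    seq = list(prompt)
    for _ in range(rounds):
        # Draft phase: propose k tokens autoregressively.
        ctx = list(seq)
        proposal = []
        for _ in range(k):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # Verify phase: accept while the target agrees, else correct and stop.
        for t in proposal:
            expected = target(seq)
            if expected == t:
                seq.append(t)
            else:
                seq.append(expected)
                break
    return seq

# Toy "models" predicting the next digit: when draft == target, every
# proposal is accepted, so each round yields k tokens.
nxt = lambda ctx: (ctx[-1] + 1) % 10
print(speculative_decode(nxt, nxt, [0], k=2, rounds=2))  # [0, 1, 2, 3, 4]
```

When the draft disagrees, each round still makes progress (one corrected token), which is why speculative decoding never produces worse output than the target alone — only the speedup varies with the acceptance rate.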
-
On Ubuntu Focal:
```
Current binding: c_transformers
Current model: None
Menu:
1 - WizardLM-Uncensored-Falcon-40b.reference
2 - Install model
3 - Change binding
4 - Back
Enter your choice:…
-
### Feature request
Can you please update the GPT4All chat JSON file to support the new Hermes and Wizard models built on Llama 2?
### Motivation
Using GPT4ALL
### Your contribution
Awareness. …