-
### Feature request
Can you please update the GPT4All chat JSON file to support the new Hermes and Wizard models built on Llama 2?
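For context, a request like this usually amounts to adding an entry to the chat client's model-list JSON. A hypothetical sketch of one entry is below; every field name and value here is an assumption for illustration, so the real schema should be copied from an existing entry in the shipped JSON file:

```json
{
  "name": "Nous Hermes Llama 2 13B",
  "filename": "nous-hermes-llama2-13b.ggmlv3.q4_0.bin",
  "filesize": "<size-in-bytes>",
  "md5sum": "<md5-of-the-bin-file>",
  "description": "Hermes fine-tune of Llama 2 13B"
}
```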
### Motivation
Using GPT4All.
### Your contribution
Awareness. …
-
Hi, I ran a fine-tune based on WizardLM/WizardCoder-15B-V1.0 on a machine with 8×V100 32G GPUs, trained for 22 hours, and then tested with checkpoint 1600.
But the results are very unsat…
-
This is a small, initial list (feel free to suggest additions in the comments) of models that should at least be present, IMHO:
- [x] stable-diffusion
- [ ] whisper
- [ ] wizardLM (no links, only configura…
-
Hi, I get weird output when I try to invoke model.generate using the inference script, but the same prompt gives the expected output when the chat demo is used. It also takes too long to infer with a single GPU…
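A common cause of this symptom is that the chat demo applies a conversation template before generation, while a raw call to model.generate skips it. A minimal sketch, assuming the checkpoint was trained on a Vicuna-style template (the exact wording below is a guess; check the training/inference script of your specific checkpoint):

```python
# Assumed Vicuna-style system preamble used by many WizardLM-family models.
# This is an illustration, not the repo's actual template.
SYSTEM_PREAMBLE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def build_prompt(user_message: str) -> str:
    """Wrap a raw user message in the assumed chat template."""
    return f"{SYSTEM_PREAMBLE} USER: {user_message} ASSISTANT:"
```

The wrapped string is what would be tokenized and passed to model.generate, rather than the bare user message.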
-
### Your current environment
```text
PyTorch version: 2.2.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC vers…
-
When the foundation model can be changed to Llama 2, would you please update it? The license would then be friendlier for others.
-
@nlpxucan, could you upload the current Gradio source scripts for the current WizardLM 13B trained on the 250k dataset?
It would also be cool if you could make a chat UI.
-
This is more a question than a feature request, but since there is no question type I'm just putting it here.
Can this work locally with a local model like WizardLM or GPT4-x-Vicuna and with the GPU?
…
-
Hello!
Thank you for this.
Is there any chance of getting a GUI that doesn't require installing Python, like koboldcpp, in the future, for portability?
Also, can we use this with other models like w…
-
Unexpected behaviour: for the models `ggml-mpt-7b-base`, `ggml-mpt-7b-instruct`, and `ggml-gpt4all-j-v1`, after the first prompt the model is not downloaded; instead an error occurs: `Error: Model filename not…
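A quick way to triage this kind of error is to check whether the configured model name actually matches a file on disk before launching the chat. A minimal sketch, assuming the client looks for `<model_name>.bin` in a model directory (the naming convention is an assumption; adjust to match your install):

```python
from pathlib import Path
from typing import List, Optional

def find_model_file(model_dir: str, model_name: str) -> Optional[Path]:
    """Return the path to `<model_name>.bin` if present, else None."""
    candidate = Path(model_dir) / f"{model_name}.bin"
    return candidate if candidate.is_file() else None

def available_models(model_dir: str) -> List[str]:
    """List model stems actually present, to compare against the config."""
    return sorted(p.stem for p in Path(model_dir).glob("*.bin"))
```

Comparing `available_models(...)` against the name in the error message quickly shows whether the problem is a missing download or a filename mismatch.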