-
I'm trying to have ctransformers use the GPU, but it won't work.
My chatdocs.yml:
```yml
ctransformers:
  model: TheBloke/Wizard-Vicuna-7B-Uncensored-GGML
  model_file: Wizard-Vicuna-7B-Uncensored…
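```

The snippet above is truncated, so only a sketch is possible: in chatdocs, extra ctransformers settings usually go under a nested `config:` key, and GPU offload is controlled by `gpu_layers`, which only takes effect if a CUDA-enabled ctransformers build is installed (e.g. `pip install ctransformers[cuda]`). A hedged example, with a placeholder for the elided file name:

```yml
ctransformers:
  model: TheBloke/Wizard-Vicuna-7B-Uncensored-GGML
  model_file: <your-ggml-file>.bin  # fill in the actual file name
  config:
    gpu_layers: 50  # number of layers to offload to the GPU (assumption: CUDA build installed)
```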
-
RuntimeError: The size of tensor a (32000) must match the size of tensor b (32001) at non-singleton dimension 0 after I execute this command: `python -m fastchat.model.apply_delta --base llama-7b-hf/ -…`
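For context, this mismatch usually means the delta checkpoint's vocabulary (32001: LLaMA's 32000 tokens plus one added pad token) is one row larger than the base model's embedding table (32000). A minimal sketch in plain PyTorch of what growing an embedding by one row looks like (toy hidden size; the actual fix is to use base weights and a tokenizer whose sizes match what `apply_delta` expects):

```python
import torch

# Toy illustration of the 32000-vs-32001 mismatch: an embedding table with
# LLaMA's 32000-token vocab, grown by one row for an added pad token.
# (Toy hidden size of 8; LLaMA-7B actually uses 4096.)
base_embed = torch.nn.Embedding(32000, 8)
pad_row = torch.zeros(1, 8)  # initialization for the new pad token
grown = torch.cat([base_embed.weight.data, pad_row], dim=0)
print(grown.shape)  # torch.Size([32001, 8])
```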
-
Hi,
I'm trying to reproduce the results reported in "InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning". However, I'm having difficulty reproducing the InstructBLIP …
-
I tried to add the DoctorGPT model: I modified the model in /public/lib/vicuna-7b and also modified config.json to point cacheUrl to the local model at http://localhost:3000/lib/WebLLM/vicuna-7…
-
1. Wombat-7B and ChatGPT comparison on the Vicuna test set, scored by GPT-4 evaluation.
```
Wombat-7B: total 599.0, average score 7.5
ChatGPT:   total 710.5, average score 8.9
wombat-7b / gpt35 = 84.31%
…
-
First, thanks very much for creating this cool technology.
On one A100 GPU with 80 GB of VRAM, I tried benchmarking `sq-vicuna-7b-v1.3-w3-s0` and its base model. It is a bit strange that the running median time h…
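As a side note on methodology, median wall-clock latency is typically collected with warm-up runs followed by repeated timed calls. A generic sketch (not the repo's benchmark harness; the workload below is a stand-in for a model forward pass):

```python
import statistics
import time

def bench_median(fn, n_warmup=3, n_runs=10):
    """Return the median wall-clock latency of fn over n_runs timed calls."""
    for _ in range(n_warmup):  # warm-up iterations absorb cache/JIT effects
        fn()
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return statistics.median(times)

# Toy workload standing in for a model forward pass.
median_s = bench_median(lambda: sum(i * i for i in range(10_000)))
print(median_s > 0)  # True
```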
-
Hi @huangb23,
Thanks for sharing the code for this great work!
Can you please share the inference code to generate the Stage 3 Dataset from ActivityNet/DiDeMo? Specifically, the inference configuration…
-
### Question
I downloaded '_llava-1.5-7b_' as '_model_base_', and downloaded the LoRA weights '_llava-v1.5-7b-lora_' as '_model_path_'.
I ran the vqav2.sh provided by the author, trying to reproduc…
-
After I downloaded the ggml-vicuna-7b-1.1-q4_0.bin model from https://huggingface.co/eachadea/ggml-vicuna-7b-1.1/tree/main, I was able to add the Chat Source successfully.
Howe…
-
## Actions taken:
Ran the command `python run_localGPT.py --device_type cpu`.
`ingest.py --device_type cpu` was run before this with no issues.
## Expected result:
For the "> Enter a query:" …