-
Are you planning a WizardLM 65B model?
Just asking ;)
Nowadays running such big models is quite easy with a cheap RTX 3090 and llama.cpp, for instance; I'm getting 2 tokens/s.
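For a quick back-of-envelope of what 2 tokens/s means in practice (the token count below is purely illustrative):

```python
def generation_time_s(num_tokens: int, tokens_per_s: float = 2.0) -> float:
    """Wall-clock time to stream num_tokens at a fixed decode rate."""
    return num_tokens / tokens_per_s

# A 512-token reply at 2 tokens/s takes a little over four minutes.
print(generation_time_s(512))  # → 256.0
```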
-
-
Hi authors, thank you for releasing code and data for this project. I am confused about the following part in the paper.
> For fair comparison, we replace Alpaca’s original Davinci-003 response wit…
-
I could use some feedback on debugging with ctransformers; I have a strange case where things are generally working, but occasionally I don't get output... using /models/WizardLM-Uncensored-Falcon-40…
-
WizardLM is amazing! Now that Llama-2 is out and available for commercial use, will we get a version of WizardLM that allows commercial use too?
-
Considering I have metered internet and not-so-great resources, I followed your guide and the notebook.
I used this yaml:
```
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.…
```
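For reference, a complete mergekit `slices` config has roughly the shape below; the model ids and layer ranges here are hypothetical placeholders, not the ones from my run:

```
slices:
  - sources:
      - model: org/base-model     # hypothetical model id
        layer_range: [0, 24]
  - sources:
      - model: org/donor-model    # hypothetical model id
        layer_range: [24, 32]
merge_method: passthrough
dtype: float16
```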
-
Hi, I noticed that for the V1.0 release, the 7B and 13B models use different conversation prompts. I am wondering what prompt we should use this time for the WizardLM-13B-V1.2 model. Should we j…
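In case it helps anyone with the same question: WizardLM-13B-V1.2 is commonly reported to use the Vicuna-style prompt, but please verify against the model card before relying on this sketch:

```python
# Hedged sketch: WizardLM-13B-V1.2 is commonly reported to use the
# Vicuna-style conversation prompt below; double-check the model card.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def build_prompt(user_message: str) -> str:
    """Assemble a single-turn Vicuna-style prompt string."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("Hello!"))
```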
-
I need to connect a database to the WizardLM model.
I have created a chain which contains the model and the database, but when I try to generate a query I get an error.
Below is the error infor…
-
I'm trying to use the following as the model id and base name
MODEL_ID = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
MODEL_BASENAME = "wizardLM-7B-GPTQ-4bit.compat.no-act-order.safetensors"
But when…
-
Hello there!
I already managed to launch it and make it work, but now I am facing a problem where LangChain is not reading the PDF file.
I am using these LLMs:
all_datasets_v4_MiniLM-L6
vicuna-…