-
When I try to load WizardLM from CMD, it says "Loading wizardLM-7B-GPTQ-4bit-128g... Can't determine model type from model name. Please specify it manually using --model_type argument" and then clos…
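The error itself names the fix: pass the model type explicitly when launching text-generation-webui. A minimal sketch, assuming the older GPTQ-for-LLaMa flags (`--model_type`, `--wbits`, `--groupsize`) that this generation of server.py accepted; run it from the webui directory:

```python
import subprocess

# Launch text-generation-webui with the model type given explicitly,
# since the loader cannot infer it from the folder name. --wbits/--groupsize
# match the "4bit-128g" quantization in the model name.
subprocess.run([
    "python", "server.py",
    "--model", "wizardLM-7B-GPTQ-4bit-128g",
    "--model_type", "llama",
    "--wbits", "4",
    "--groupsize", "128",
], check=True)
```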
-
https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0
-
When I run my code that uses GPTQ, I get this warning, and it simply crashes when I load the WizardLM-13B model!
WARNING:accelerate.utils.modeling:The safetensors archive passed at C:\Users\aloui\Do…
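For comparison, a minimal AutoGPTQ loading sketch: the accelerate metadata warning on a .safetensors archive is usually benign, so a crash right after it more often points at the load itself (wrong path or basename, or not enough VRAM for a 13B model). Assumes the `auto-gptq` package and a local GPTQ checkpoint directory; the path is hypothetical:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Hypothetical local directory; replace with the actual WizardLM-13B GPTQ checkpoint.
model_dir = "WizardLM-13B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
# use_safetensors=True reads the .safetensors archive that triggers the
# metadata warning; device pins the whole quantized model to one GPU.
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    use_safetensors=True,
    device="cuda:0",
)

inputs = tokenizer("Hello,", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```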
-
### Model introduction
The model was finetuned by AIGCode based on DeepSeek-Coder-6.7B-base using open-source and private datasets.
### Model URL
https://huggingface.co/aigcode/AIGCodeGeek-DS…
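Since the finetune's repo id is truncated above, here is a minimal transformers loading sketch using the stated base model instead; swap in the full aigcode repo id to load the finetune. Assumes the standard AutoClasses work for DeepSeek-Coder checkpoints:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model named in the introduction; replace with the full
# aigcode/AIGCodeGeek-DS… repo id (truncated in the post) to load the finetune.
model_id = "deepseek-ai/deepseek-coder-6.7b-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "# Write a function that reverses a string\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```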
-
The Llamax code knows how to handle Alpaca-formatted QA data, but I didn't see anything in there to handle ShareGPT-format data.
How do I finetune with the new format? Your finetune guide (https://…
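A common workaround while a trainer only understands Alpaca-style records is to flatten ShareGPT conversations into instruction/output pairs first. A minimal sketch, assuming the usual ShareGPT layout (a `conversations` list of `{"from": "human"|"gpt", "value": ...}` turns); this is a generic conversion, not Llamax's own loader:

```python
import json

def sharegpt_to_alpaca(path_in: str, path_out: str) -> None:
    """Flatten ShareGPT conversations into Alpaca-style instruction/output pairs."""
    with open(path_in) as f:
        records = json.load(f)

    pairs = []
    for rec in records:
        turns = rec.get("conversations", [])
        # Pair each human turn with the gpt turn that follows it;
        # note this drops multi-turn context.
        for cur, nxt in zip(turns, turns[1:]):
            if cur.get("from") == "human" and nxt.get("from") == "gpt":
                pairs.append({
                    "instruction": cur["value"],
                    "input": "",
                    "output": nxt["value"],
                })

    with open(path_out, "w") as f:
        json.dump(pairs, f, indent=2)

sharegpt_to_alpaca("sharegpt_data.json", "alpaca_data.json")
```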
-
I'm getting the following error when trying to run the [WizardLM-13B Q8](https://huggingface.co/localmodels/WizardLM-13B-v1.1-ggml/resolve/main/wizardlm-13b-v1.1.ggmlv3.q8_0.bin) model. I'm running th…
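Hard to say without the full error, but one common cause: GGMLv3 files only load in pre-GGUF builds, e.g. llama-cpp-python releases up to 0.1.78 (later versions expect GGUF). A minimal sketch under that assumption:

```python
# Assumes an older llama-cpp-python that still reads GGMLv3,
# e.g.: pip install "llama-cpp-python==0.1.78"
from llama_cpp import Llama

llm = Llama(model_path="wizardlm-13b-v1.1.ggmlv3.q8_0.bin", n_ctx=2048)
out = llm("Q: What is 2 + 2? A:", max_tokens=16)
print(out["choices"][0]["text"])
```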
-
Related to https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard, but with more meaningful scores.
https://github.com/h2oai/h2ogpt/blob/ba6cad3207f8319b5c5f4b1e9099d7b909fdb661/generate.py#L132…
-
https://huggingface.co/eugenepentland/Minotaur-13b-Landmark
https://huggingface.co/eugenepentland/WizardLM-7B-Landmark
-
Hi! First of all, great job on this! Running the 13B model with your repo and LangChain works great for me!
There's now a 30B GPTQ model, and I can start it in Oobabooga by just **not** specifying …
-
**Problem**
Jan only supports loading a single GGUF model file at a time
**Success Criteria**
We can help users merge split GGUF files into one and load the merged model for them
**Additional context**
Approach
http…
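Whatever the truncated link above proposes, one concrete way to do the merge is llama.cpp's split tool, which reassembles a sharded GGUF when pointed at the first shard. A sketch assuming the `llama-gguf-split` binary from a recent llama.cpp build is on PATH (older builds name it `gguf-split`):

```python
import subprocess

def merge_gguf(first_shard: str, output: str) -> None:
    """Merge a sharded GGUF model (model-00001-of-0000N.gguf, ...) into one file."""
    # llama-gguf-split finds the remaining shards next to the first one.
    subprocess.run(
        ["llama-gguf-split", "--merge", first_shard, output],
        check=True,
    )

merge_gguf("model-00001-of-00003.gguf", "model.gguf")
```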