-
```
python qlora.py \
    --model_name_or_path /models/guanaco-33b-merged \
    --output_dir ./output \
    --dataset alpaca \
    --do_train True \
    --do_eval True \
    --do_mmlu_eval True \
    …
```
-
**Details:**
**Here is your result:**
I used the following commands to reproduce the results of the LLaMA 7B model on the Guanaco (OASST1) dataset:
`CUDA_VISIBLE_DEVICES=2 sh scripts/…`
-
[Ten open-source Java hands-on projects](https://blog.csdn.net/Hyl_Aa/article/details/131563216)
[Deep learning frameworks](https://answer.baidu.com/answer/land?params=TgVyUf7ep%2FSI4QytXTh1yvHotApaf4%2FWslLZ9gYRzzlLAm56bJOxKCGo7cKs779Q7cjZB…
-
I am on macOS (M2, 24 GB RAM) and loaded mixtral_7bx2_moe.Q8_0.gguf or guanaco-13b-uncensored.Q4_K_M.gguf.
If I set the context length to 4096, it crashes when opening the chat window. Context length…
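A crash at a larger context size is consistent with running out of memory: on top of the multi-gigabyte weight file, the attention KV cache grows linearly with context length. A rough pure-Python estimate, assuming an f16 cache and Mistral-7B-like shapes (32 layers, 8 KV heads, head dimension 128 — assumptions for illustration, not the exact specs of these GGUF files):

```python
def kv_cache_bytes(n_layers, n_ctx, n_kv_heads, head_dim, bytes_per_elem=2):
    """Size of the attention KV cache: one K and one V tensor per layer."""
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem

# Assumed Mistral-7B-like shapes; the real GGUF header carries the exact values.
cache = kv_cache_bytes(n_layers=32, n_ctx=4096, n_kv_heads=8, head_dim=128)
print(f"KV cache at 4096 ctx: {cache / 2**30:.2f} GiB")  # prints 0.50 GiB
```

Halving the context length halves this cache, which is one reason a smaller context setting can make the difference between fitting in 24 GB and crashing.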
-
At some point it would be very nice if there were a script one could run inside cocalc-docker that would download a local LLM, which could then be used to provide the same features as ChatGPT.
…
-
When I load the model as follows, it throws the error: "Cannot merge LORA layers when the model is loaded in 8-bit mode".
How can I load the model in 4-bit for inference?
```
model_path = 'decapoda-resea…
```
-
Hi, thanks for this awesome work! I am trying to reproduce the results in your paper for the transfer attack. I ran the default `bash run_gcg_transfer.sh vicuna_guanaco 4` but the result is not good. …
-
How can I use the generated embeddings with the generateCompletion() function?
I tried passing them as an option:
```
$embeddings = $ollamaClient->generateEmbeddings($documents, 'nomic-embed-text');
…
```
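Completion endpoints generally do not accept embeddings as an option; the usual pattern is retrieval: embed the documents once, embed the query, pick the closest document by cosine similarity, and splice its text into the completion prompt. A minimal pure-Python sketch of that flow — the toy `embed()` (a character-frequency vector) stands in for a real embedding call such as generateEmbeddings():

```python
import math

def embed(text):
    # Toy stand-in for a real embedding model: 26-dim letter-frequency vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents):
    # Return the document whose embedding is closest to the query's.
    q = embed(query)
    return max(documents, key=lambda d: cosine(q, embed(d)))

documents = ["llamas live in the Andes", "paris is the capital of france"]
context = retrieve("where do llamas live?", documents)
prompt = f"Answer using this context:\n{context}\n\nQuestion: where do llamas live?"
```

The resulting `prompt` string is what gets sent to the completion call as plain text; the embeddings themselves never leave the retrieval step.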
-
https://colab.research.google.com/drive/17XEqL1JcmVWjHkT-WczdYkJlNINacwG7
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
![colab err](https://github.…
-
https://guanaco-model.github.io/
https://huggingface.co/datasets/JosephusCheung/GuanacoDataset