-
![image](https://user-images.githubusercontent.com/66482679/227425313-db1f0742-0492-4723-b992-bd959e658764.png)
I'm running alpaca on my Intel Xeon Silver 4214(4) @ 2.194GHz and I'm having a lot of s…
-
Currently only the original LoRA is supported, as a non-fused adapter. I hope to add support for QLoRA/QA-LoRA adapters as well, without fusing them into the base model.
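The distinction between a fused (merged) and non-fused adapter can be sketched numerically: a non-fused adapter keeps the frozen base weight `W` and the low-rank factors `A`, `B` separate at inference time, while fusing folds the update into `W` once. The names, shapes, and scaling below are hypothetical toy values, not an actual checkpoint layout; both paths should produce identical outputs.

```python
import numpy as np

# Toy illustration of non-fused vs. fused LoRA application.
# Shapes, names, and the scale factor here are hypothetical.
rng = np.random.default_rng(0)

d_in, d_out, r = 8, 8, 2               # r = LoRA rank
scale = 2.0                            # alpha / r in real configs

W = rng.standard_normal((d_out, d_in)) # frozen base weight
A = rng.standard_normal((r, d_in))     # LoRA "down" projection
B = rng.standard_normal((d_out, r))    # LoRA "up" projection
x = rng.standard_normal((3, d_in))     # small batch of inputs

# Non-fused: base matmul plus a separate low-rank side path.
# W stays untouched, so the adapter can be swapped or disabled.
y_adapter = x @ W.T + (x @ A.T) @ B.T * scale

# Fused: fold the update into the base weight once.
W_merged = W + scale * (B @ A)
y_merged = x @ W_merged.T

assert np.allclose(y_adapter, y_merged)
```

Keeping the adapter non-fused is what makes it possible to serve several adapters on top of one shared base model; fusing trades that flexibility for a single plain matmul.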
-
Hi,
@j14159, during your talk at EEF you mentioned that not all guards are implemented yet, and that it's a good opportunity for a newcomer. If it's not being tackled by anyone yet, I'd like to imple…
-
File "/home/AI/alpaca-lora/export_state_dict_checkpoint.py", line 68, in unpermute
w.view(n_heads, 2, dim // n_heads // 2, dim).transpose(1, 2).reshape(dim, dim)
RuntimeError: shape '[32, 2, 6…
-
Some of us (me) like text-based YAML. Other folks might not. Editors like these might help generate the UI: http://stackoverflow.com/a/999124
https://github.com/gitana/alpaca looks neat. Demo: http:/…
-
We tried to implement 4-bit QLoRA. Thanks to an optimized kernel implementation of back-propagation, the fine-tuning speed is currently similar to 8-bit LoRA. Feedback and issues are welcome: https://github.c…
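The core QLoRA idea can be sketched without any GPU kernels: the frozen base weight is stored quantized to 4 bits while the LoRA factors stay in full precision. The snippet below uses a simple per-row absmax int4 scheme as a stand-in (the paper uses the NF4 format, and real implementations use custom kernels, not a dequantize-then-matmul); all shapes and values are hypothetical.

```python
import numpy as np

# Hypothetical sketch of the QLoRA storage scheme: 4-bit base
# weight, full-precision LoRA update. Absmax int4 is used here
# as a stand-in for NF4.
rng = np.random.default_rng(1)
W = rng.standard_normal((16, 16))      # frozen base weight

# Per-row absmax quantization to the signed 4-bit range [-7, 7].
scales = np.abs(W).max(axis=1, keepdims=True) / 7.0
W_q = np.clip(np.round(W / scales), -7, 7).astype(np.int8)
W_deq = W_q.astype(np.float64) * scales

# LoRA factors are trained in float; only W is quantized.
# B starts at zero, as in standard LoRA initialization.
A = rng.standard_normal((2, 16)) * 0.01
B = np.zeros((16, 2))

x = rng.standard_normal((4, 16))
y = x @ W_deq.T + (x @ A.T) @ B.T      # forward with quantized base

# Rounding error is at most half a quantization step per row.
assert np.max(np.abs(W - W_deq)) <= scales.max() / 2 + 1e-12
```

Because gradients flow only into `A` and `B`, the quantized base never needs to be updated or de/re-quantized during training, which is what keeps the memory footprint low.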
-
$ python generate.py --load_8bit --base_model 'decapoda-research/llama-7b-hf' --lora_weights 'tloen/alpaca-lora-7b'
===================================BUG REPORT===================================
…
-
I followed the instructions, but I get the error below when running algo.py:
> pravinraj_vincent@alpaca-uirobots:~$ python3 algo.py
> Traceback (most recent call last):
> File "algo.py", lin…
-
As listed on https://alpaca.point.space/, by hosting the model myself? For example, if I would like to create a character locally, should I further fine-tune the model with data from, say, Genshin Impact…
-
Traceback (most recent call last):
File "/root/llm/Alpaca-CoT-main/app.py", line 15, in
from model_chatglm import ChatGLMForConditionalGeneration, ChatGLMTokenizer
ModuleNotFoundError: No mo…