vihangd / alpaca-qlora
Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA
Apache License 2.0 · 80 stars · 11 forks
Issues
#8: ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`. (opened 5 months ago by jkloveg, 0 comments)
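The error in #8 is raised by Hugging Face tokenizers when batching requires padding but no `pad_token` is set; LLaMA-family tokenizers ship without one. A minimal pure-Python sketch of the check and the two fixes the message itself suggests (the `Tokenizer` class here is an illustrative stand-in, not the `transformers` implementation):

```python
class Tokenizer:
    """Toy stand-in that mirrors the padding check in HF tokenizers."""

    def __init__(self, eos_token="</s>", pad_token=None):
        self.eos_token = eos_token
        self.pad_token = pad_token  # LLaMA tokenizers start with None here

    def add_special_tokens(self, mapping):
        # e.g. tokenizer.add_special_tokens({'pad_token': '[PAD]'})
        if "pad_token" in mapping:
            self.pad_token = mapping["pad_token"]

    def pad(self, batch):
        if self.pad_token is None:
            raise ValueError(
                "Asking to pad but the tokenizer does not have a padding token."
            )
        width = max(len(seq) for seq in batch)
        return [seq + [self.pad_token] * (width - len(seq)) for seq in batch]

# Fix 1: reuse the EOS token as the pad token (common in LoRA finetune scripts).
tok = Tokenizer()
tok.pad_token = tok.eos_token
padded = tok.pad([[1, 2], [1, 2, 3]])  # [[1, 2, "</s>"], [1, 2, 3]]

# Fix 2: register a dedicated [PAD] token instead.
tok2 = Tokenizer()
tok2.add_special_tokens({"pad_token": "[PAD]"})
```

Note that fix 2 grows the vocabulary, so a real script must also resize the model's embedding matrix; fix 1 avoids that, which is why it is the more common choice.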
#7: newbie question about 4bit quantization (opened 1 year ago by andreapago, 0 comments)
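For context on #7: QLoRA actually stores base weights in NF4 (a 4-bit data type matched to normally distributed weights) with double quantization, but the core idea is the same as the simpler absmax scheme sketched below, which is illustrative only and not the bitsandbytes implementation: keep a low-bit integer per weight plus a shared scale.

```python
def quantize_absmax_4bit(xs):
    """Symmetric absmax quantization to the signed 4-bit range [-7, 7]."""
    scale = max(abs(x) for x in xs) / 7 or 1.0  # guard against all-zero input
    return [round(x / scale) for x in xs], scale

def dequantize(q, scale):
    """Recover approximate floats from 4-bit codes and the stored scale."""
    return [v * scale for v in q]

q, scale = quantize_absmax_4bit([1.0, -2.0, 3.5])  # q = [2, -4, 7], scale = 0.5
```

Real 4-bit schemes apply this per small block of weights rather than per tensor, so one outlier does not destroy the precision of everything else.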
#6: 8-bit Adam vs 32-bit Adam ?? (opened 1 year ago by apachemycat, 0 comments)
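On the question in #6: the difference is optimizer-state memory. Adam keeps two moment tensors per trainable parameter, so 32-bit Adam costs 8 bytes per parameter while 8-bit Adam (as in bitsandbytes) costs 2. A quick back-of-the-envelope calculation, assuming a 7B-parameter model for illustration:

```python
def adam_state_bytes(n_params, bits):
    # Adam tracks two moment tensors (m and v) per parameter,
    # each stored at the given bit width.
    return 2 * n_params * bits // 8

GIB = 1024 ** 3
adam32 = adam_state_bytes(7_000_000_000, 32) / GIB  # ~52 GiB of optimizer state
adam8 = adam_state_bytes(7_000_000_000, 8) / GIB    # ~13 GiB, a 4x reduction
```

With QLoRA only the small LoRA adapter weights are trainable, so the optimizer state is tiny either way; the 8-bit variant matters most when tuning many or all parameters.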
#5: Using alpaca format for the Databricks Dolly 15k dataset (closed 1 year ago by ritabratamaiti, 2 comments)
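Regarding #5: Dolly 15k records carry `instruction`, `context`, `response` (and `category`) fields, while the alpaca format expects `instruction`, `input`, `output`. A sketch of the commonly used field mapping (not taken from this repo or the issue thread, which was not reproduced here):

```python
def dolly_to_alpaca(rec):
    """Map a Dolly 15k record to the alpaca instruction format.

    Dolly fields:  instruction, context, response (category is dropped).
    Alpaca fields: instruction, input, output.
    """
    return {
        "instruction": rec["instruction"],
        "input": rec.get("context", ""),  # alpaca uses "" when there is no input
        "output": rec["response"],
    }

example = dolly_to_alpaca({
    "instruction": "Summarize the passage.",
    "context": "QLoRA fine-tunes 4-bit base models with LoRA adapters.",
    "response": "QLoRA combines 4-bit quantization with LoRA.",
    "category": "summarization",
})
```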
#4: Question on differences with artidoro/qlora (opened 1 year ago by gptzerozero, 1 comment)
#3: ValueError: test_size=2000 should be either positive and smaller than the number of samples 2 or a float in the (0, 1) range (opened 1 year ago by quantumalchemy, 1 comment)
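The error in #3 means a validation split of 2000 examples was requested from a dataset that only had 2 samples loaded, so the split was rejected. A pure-Python sketch of the validation rule behind the message (the real check lives in `datasets`/`sklearn`'s `train_test_split`; this only mirrors its logic):

```python
def check_test_size(test_size, n_samples):
    """Accept an int split that is positive and smaller than n_samples,
    or a float split in the open interval (0, 1); reject everything else."""
    if isinstance(test_size, int) and 0 < test_size < n_samples:
        return test_size
    if isinstance(test_size, float) and 0.0 < test_size < 1.0:
        return int(round(test_size * n_samples))
    raise ValueError(
        f"test_size={test_size} should be either positive and smaller than "
        f"the number of samples {n_samples} or a float in the (0, 1) range"
    )

# A fractional split sidesteps the problem regardless of dataset size.
check_test_size(0.1, 2000)  # returns 200
```

In practice the fix is either to lower the validation-set size below the dataset size or to confirm the full dataset loaded (2 samples suggests a truncated or wrong data file).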
#2: Lora (opened 1 year ago by ghost, 1 comment)
#1: Can you please share the results you get with the trained models? (closed 1 year ago by KKcorps, 10 comments)