johnsmith0031 / alpaca_lora_4bit · MIT License · 534 stars · 84 forks
Issues · sorted: Newest
All issues below were opened or closed about 1 year ago; the final column is the comment count.

| # | Title | Author | State | Comments |
|----|-------|--------|-------|----------|
| 54 | ModuleNotFoundError: No module named 'autograd_4bit' | WUHU-G | closed | 4 |
| 53 | error in the tokenizer class after loading the model | YrKpFk | open | 2 |
| 52 | Added 4 bit backward using triton. | qwopqwop200 | open | 1 |
| 51 | AttributeError: module 'gptq_llama.quant_cuda' has no attribute 'vecquant4recons_v1' | WUHU-G | closed | 3 |
| 50 | How to finetune 30B and can it be done with dual RTX 3090s? | juanps90 | open | 15 |
| 49 | TypeError: cannot assign 'torch.cuda.HalfTensor' as parameter 'bias' (torch.nn.Parameter or None expected) | Flucc | open | 4 |
| 48 | Does the inference.py script actually load the generated Lora weights? | factoidforrest | open | 6 |
| 47 | fix gpt4all training to more closely match the released logic, other small fixes and optimizations | winglian | closed | 1 |
| 46 | Runtime Error: Expected to mark a variable ready only once | Tameflame | closed | 1 |
| 45 | fix missing paren | winglian | closed | 0 |
| 44 | peft.tuners.lora missing Linear4bitLt? | SpaceCowboy850 | closed | 2 |
| 43 | CPU finetuning | maxxk | open | 1 |
| 42 | better multi-gpu support, support gpt4all training data | winglian | closed | 1 |
| 41 | Success | ehartford | closed | 5 |
| 40 | lora shape mismatch for 13B llama | winglian | closed | 1 |
| 39 | properly include the eos token so inference doesn't blabber on | winglian | closed | 1 |
| 38 | RuntimeError: expected scalar type Float but found Half | ehartford | open | 2 |
| 37 | convert tensor type to match for torch.matmul | winglian | closed | 1 |
| 36 | add environment.yml? | ehartford | closed | 0 |
| 35 | Is the training data prepared correctly? | turboderp | open | 2 |
| 34 | fixes for most recent update | winglian | closed | 1 |
| 33 | ModuleNotFoundError: No module named 'quant_cuda' | winglian | closed | 2 |
| 32 | AttributeError: module 'quant_cuda' has no attribute 'vecquant4recons' | ehartford | closed | 4 |
| 31 | Can't compile GPTQ fork | brandonj60 | closed | 6 |
| 30 | backwards support for pre-py3.10, add datasets requirement used in train | winglian | closed | 1 |
| 29 | lora of 65B-4bit | dpyneo | open | 11 |
| 28 | Stop generating... | johnrobinsn | closed | 3 |
| 27 | Can 30b model be trained from 0 on a single 4090 GPU? | OlegJakushkin | open | 5 |
| 26 | I ported inference to OPT | Ph0rk0z | closed | 2 |
| 25 | GPTQv2 model doesn't load | s4rduk4r | closed | 36 |
| 24 | ValueError: `checkpoint` should be the path .. when running server.py | sofq | closed | 1 |
| 23 | Get dependencies straight from pip! | sterlind | closed | 1 |
| 22 | AttributeError: module 'quant' has no attribute 'quant_cuda' | ehartford | closed | 6 |
| 21 | How to use the finetuned model? | LoopControl | closed | 1 |
| 20 | Enable model parallelism and distributed data parallelism for multi-gpu setups | kooshi | closed | 1 |
| 19 | 4-bit Alpaca weights in PyTorch format? | francis2tm | open | 1 |
| 18 | Refactor finetune.py | s4rduk4r | closed | 3 |
| 17 | Train lora with embed_tokens and lm_head | KohakuBlueleaf | open | 7 |
| 16 | AttributeError: module 'quant' has no attribute 'quant_cuda'. Also got 'CUDA extension not installed.' | lolxdmainkaisemaanlu | closed | 2 |
| 15 | Stop generation. | Ph0rk0z | closed | 1 |
| 14 | Fix cuda kernel for Pascal & Cuda 6/6.1 | Ph0rk0z | closed | 4 |
| 13 | Merging changes upstream? | wywywywy | open | 25 |
| 12 | Merge lora permanently? | Ph0rk0z | open | 6 |
| 11 | 4-bit quantized weights for 7b version | francis2tm | open | 3 |
| 10 | Monkey patch is bad. | Ph0rk0z | closed | 4 |
| 9 | The atomic add doesn't work on compute 6.1 | Ph0rk0z | closed | 5 |
| 8 | Share new re-quantized model | Curlypla | open | 9 |
| 7 | Unbelievably good perf.. | sterlind | open | 13 |
| 6 | Fix path to autograd_4bit.py in install.sh | sterlind | closed | 1 |
| 5 | Compile error? | nepeee | closed | 4 |