Lightning-AI / litgpt
Pretrain, finetune, and deploy 20+ LLMs on your own data. Uses state-of-the-art techniques: flash attention, FSDP, 4-bit quantization, LoRA, and more.
https://lightning.ai
Apache License 2.0 · 6.85k stars · 726 forks
Issues, sorted by most commented:
#275 Support QLoRA 4-bit finetuning with bitsandbytes (patrickhwood, closed 9 months ago, 44 comments)
#159 CUDA out of memory for the Falcon 7B model on A100 80GB GPU (k21993, closed 11 months ago, 33 comments)
#861 Checkpoint is not a checkpoint directory (Rishabh250, closed 4 months ago, 31 comments)
#456 Solution to loading Llama 2 70B on 8 GPUs? (lifengjin, closed 4 months ago, 24 comments)
#796 Hardcoded incorrect (and repeated) validation example (DavidGOrtega, opened 5 months ago, 20 comments)
#934 Adding DoRA (Weight-Decomposed Low-Rank Adaptation) to improve LoRA (rasbt, opened 2 months ago, 18 comments)
#477 OOM with bf16-true and quantization for long context lengths (KOVVURISATYANARAYANAREDDY, opened 8 months ago, 18 comments)
#472 Add CodeLlama configs (m0saan, closed 8 months ago, 18 comments)
#462 Add script to prepare dataset from CSV (Anindyadeep, closed 8 months ago, 18 comments)
#374 Converting models to original format after finetuning with LoRA and QLoRA (jxtngx, closed 8 months ago, 17 comments)
#652 Bug: finetuning on multi-GPU (FSDP) does not initialize with the foundation model (Jeronymous, closed 5 months ago, 16 comments)
#1325 Blockwise quantization only supports 16/32-bit floats, but got torch.uint8 (`bnb.nf4` quantization is not working) (Anindyadeep, opened 4 weeks ago, 15 comments)
#886 Adds batched inference with left-padding (FlimFlamm, opened 4 months ago, 15 comments)
#855 Loading a fine-tuned model (Chasapas, closed 1 month ago, 15 comments)
#207 Finetune Falcon-40B with adapter_v2.py using 8 A100 80GB GPUs (weilong-web, closed 9 months ago, 15 comments)
#940 Support Gemma (carmocca, closed 2 months ago, 14 comments)
#878 BiasMap: individual bias for each module (Andrei-Aksionov, closed 1 month ago, 14 comments)
#165 Falcon-40B out of memory (lynngao, closed 9 months ago, 14 comments)
#1192 Introduce OptimizerArgs and add support for GaLore (rasbt, opened 1 month ago, 13 comments)
#603 Pretraining on red-pajama running into StopIteration on multiple GPUs (rahul-sarvam, closed 6 months ago, 13 comments)
#386 Import errors for lm_eval_harness_lora.py (rasbt, closed 8 months ago, 13 comments)
#355 Models finetuned with adapter or adapter_v2 raise KeyError in convert_lit_checkpoint (jxtngx, closed 9 months ago, 13 comments)
#1300 Update download_model_weights.md (eltociear, closed 1 month ago, 12 comments)
#1282 Add H2O Danube2 checkpoint (Dev-Khant, closed 2 weeks ago, 12 comments)
#1272 CodeGemma-7b-it (Andrei-Aksionov, closed 1 month ago, 12 comments)
#1177 Add `litgpt evaluate` command (rasbt, closed 1 month ago, 12 comments)
#844 Convert a model (trained by lit-gpt) to an AutoModelForCausalLM model (pull-ups, closed 2 months ago, 12 comments)
#278 bitsandbytes no longer supported on Windows (gerwintmg, closed 10 months ago, 12 comments)
#242 Can I finetune Falcon-7B with 8 GB VRAM? (luussta, closed 9 months ago, 12 comments)
#953 Automatically convert checkpoint after downloading (awaelchli, closed 2 months ago, 11 comments)
#901 Hang on two-GPU training (TeddLi, closed 3 months ago, 11 comments)
#327 Error: cutlassF: no kernel found to launch (PaulCristina, closed 9 months ago, 11 comments)
#1117 Error in `_merge_no_wait`: "The config isn't consistent between chunks. This shouldn't have happened." (eljanmahammadli, opened 2 months ago, 10 comments)
#1004 GemmaMLP uses `tanh` approximation for GeLU activation (Andrei-Aksionov, closed 2 months ago, 10 comments)
#996 Add package CLI scripts (carmocca, closed 2 months ago, 10 comments)
#848 Difference between latest lm-eval-harness and lit-gpt eval (ajtejankar, opened 4 months ago, 10 comments)
#771 LoRA: support merging with quantized weights (Andrei-Aksionov, closed 5 months ago, 10 comments)
#503 Add Falcon-180B checkpoint support (rasbt, closed 8 months ago, 10 comments)
#498 BFloat16 is not supported on MPS (darebfh, closed 8 months ago, 10 comments)
#469 Enabling multi-GPU inference (babytdream, closed 4 months ago, 10 comments)
#412 Integrate HELM (aniketmaurya, closed 5 months ago, 10 comments)
#345 Generating batch outputs? (ron-vnai, opened 9 months ago, 10 comments)
#167 Problem installing dependencies (jeetendraabvv, closed 11 months ago, 10 comments)
#127 Set the config's block size as the max_seq_length in the data preparation and fine-tuning scripts (iskandr, closed 11 months ago, 10 comments)
#1181 Killed when saving LoRA weights (alistairwgillespie, closed 1 month ago, 9 comments)
#1013 Drop interleaved placement in QKV matrix (Andrei-Aksionov, opened 2 months ago, 9 comments)
#946 Relax bitsandbytes requirements (kashif, closed 2 months ago, 9 comments)
#923 Standardize checkpoints in lit-gpt (awaelchli, closed 2 months ago, 9 comments)
#825 generate/lora is tied to the Alpaca instruction style (DavidGOrtega, closed 2 months ago, 9 comments)
#553 OOM error on RTX 3090 24 GB with Llama-2-7B-hf (khizarhussain19, closed 6 months ago, 9 comments)