ludwig-ai / ludwig
Low-code framework for building custom LLMs, neural networks, and other AI models
http://ludwig.ai
Apache License 2.0 · 11.21k stars · 1.19k forks
Issues (sorted by newest)
#3985 · `RESPONSE` contains much longer text than expected based on the `output_features` and `max_sequence_length` · amankhandelia · closed 4 months ago · 2 comments
#3984 · Actually add support for RSLoRA and DoRA · arnavgarg1 · closed 7 months ago · 1 comment
#3983 · Fix to resume adapter training from existing adapter weights · amankhandelia · closed 1 month ago · 4 comments
#3981 · Support for freezing pretrained vision model layers with regex · ethanreidel · closed 5 months ago · 5 comments
#3980 · Dependency issue · robhheise · closed 4 months ago · 3 comments
#3979 · Token-level probability always 0.0 when fine-tuning Llama2-7b model on single GPU · MoOo2mini · closed 1 month ago · 1 comment
#3978 · Wandb on ludwigai/ludwig-ray-gpu:latest + ray throws AttributeError: module 'pydantic.fields' has no attribute 'ModelField' · rahulvramesh · opened 8 months ago · 0 comments
#3977 · Fix for 'upload_to_hf_hub()' path mismatch with 'save()' · sanjaydasgupta · closed 8 months ago · 4 comments
#3976 · Pin minimum transformers to 4.39 to reduce Llama/Gemma memory pressure · arnavgarg1 · closed 8 months ago · 1 comment
#3975 · Update latest version for development · alexsherstinsky · closed 8 months ago · 0 comments
#3974 · Update ludwig version to v0.10.2 · alexsherstinsky · closed 8 months ago · 0 comments
#3973 · <DO_NOT_MERGE>[TROUBLESHOOTING][BUGFIX] Empty test commit (in order to make pull request) so as to run tests.</DO_NOT_MERGE> · alexsherstinsky · opened 8 months ago · 2 comments
#3972 · GPU is not available · LittleStarrider · closed 4 months ago · 2 comments
#3971 · Allow image bytes type during preprocessing · vijayi1 · closed 8 months ago · 3 comments
#3970 · fix: change PROMPT constant to reduce collision · geoffreyangus · opened 8 months ago · 4 comments
#3969 · enh: enable loading model weights from training checkpoint · geoffreyangus · closed 8 months ago · 3 comments
#3968 · api.py: corrected save path in 'LudwigModel.save()' · sanjaydasgupta · closed 8 months ago · 0 comments
#3967 · [WIP] Gradual unfreezing to mitigate catastrophic forgetting · ethanreidel · closed 1 month ago · 2 comments
#3966 · Support for models stored in GCS bucket · kainspraveen · opened 8 months ago · 3 comments
#3965 · Save ludwig-config with model-weights in output directory · sanjaydasgupta · closed 8 months ago · 16 comments
#3964 · Save config with weights · sanjaydasgupta · closed 8 months ago · 1 comment
#3963 · Ray - protobuf issue · robhheise · opened 8 months ago · 5 comments
#3962 · Improve docker build times for `ludwig-ray` and `ludwig-ray-gpu` · arnavgarg1 · closed 1 month ago · 0 comments
#3961 · Add Ludwig config json to output directory containing model weights · sanjaydasgupta · closed 8 months ago · 3 comments
#3960 · Dependency issue · robhheise · closed 8 months ago · 8 comments
#3959 · [BUGFIX] Fixing integration test failures. · alexsherstinsky · closed 8 months ago · 1 comment
#3958 · enh: add batch size tuning memory limit option to hedge against OOMs · geoffreyangus · closed 8 months ago · 1 comment
#3957 · Add support for eval batch size tuning for LLMs on local backend · arnavgarg1 · closed 8 months ago · 2 comments
#3956 · [MAINTENANCE] Use latest version of psutil library. · alexsherstinsky · closed 8 months ago · 1 comment
#3955 · [MAINTENANCE] Comment out PyTorch nightly test · alexsherstinsky · closed 8 months ago · 1 comment
#3954 · Temporarily disable expensive text metrics · arnavgarg1 · closed 8 months ago · 0 comments
#3953 · Temporarily disable expensive text metrics · arnavgarg1 · closed 4 months ago · 1 comment
#3952 · Fix kube apt source · noyoshi · closed 8 months ago · 0 comments
#3951 · You are calling `save_pretrained` to a 4-bit converted model, but your `bitsandbytes` version doesn't support it. · shripadk · closed 1 month ago · 4 comments
#3950 · batch_size > 1 results in NaN loss value · K-Mistele · closed 4 months ago · 12 comments
#3948 · Add support for RSLoRA and DoRA · arnavgarg1 · closed 9 months ago · 1 comment
#3947 · [MAINTENANCE] Update Ludwig development version. · alexsherstinsky · closed 9 months ago · 1 comment
#3946 · Update ludwig version to v0.10.1 · alexsherstinsky · closed 9 months ago · 0 comments
#3945 · fix: use eos token in target tensor for instruction-tuning · geoffreyangus · closed 9 months ago · 1 comment
#3944 · fix: Update imdb_genre_prediction dataset yaml to match dataset · jeffreyftang · closed 9 months ago · 1 comment
#3942 · Update Ludwig development version. · alexsherstinsky · closed 9 months ago · 0 comments
#3941 · Ludwig release version change · alexsherstinsky · closed 9 months ago · 0 comments
#3940 · Pinning transformers to 4.38.1 or above in order to ensure support fo… · alexsherstinsky · closed 9 months ago · 1 comment
#3939 · Unable to fine-tune when not using quantization · simaotwx · closed 1 month ago · 4 comments
#3938 · Re-enable AdaptionPrompt when HuggingFace releases the PEFT fix. · alexsherstinsky · closed 1 month ago · 0 comments
#3937 · Remove default target modules for Gemma once it's updated in PEFT · arnavgarg1 · closed 4 months ago · 1 comment
#3936 · Add default LoRA target modules for Gemma · arnavgarg1 · closed 9 months ago · 1 comment
#3935 · Disabling AdaptionPrompt until PEFT is fixed. · alexsherstinsky · closed 9 months ago · 1 comment
#3934 · Update ludwig version to 0.9.4 · alexsherstinsky · closed 9 months ago · 2 comments
#3932 · Retrain previously fine-tuned adapter · raghavbj24 · opened 9 months ago · 4 comments