-
For instance, adding LoRA to the image encoder? Here is a [repository](https://github.com/25benjaminli/sam2lora) I made that attempts to apply LoRA to the attention in the image encoder, although I didn…
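For illustration only (this is not the linked repository's code), a minimal sketch of the idea using the `peft` library; the toy encoder and the module names `qkv`/`proj` are assumptions, since real encoders name their projections differently:

```python
# Minimal sketch, NOT the linked repository's code: wrapping an image
# encoder's attention projections with LoRA via the `peft` library.
# The toy encoder and the module names "qkv"/"proj" are assumptions.
import torch.nn as nn
from peft import LoraConfig, get_peft_model


class TinyAttentionEncoder(nn.Module):
    """Single-head attention block standing in for an image encoder."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)  # fused query/key/value projection
        self.proj = nn.Linear(dim, dim)     # output projection

    def forward(self, x):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5).softmax(dim=-1)
        return self.proj(attn @ v)


config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["qkv", "proj"])  # adapt attention only
encoder = get_peft_model(TinyAttentionEncoder(), config)
encoder.print_trainable_parameters()  # base weights stay frozen
```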
-
Thanks for your excellent work!
I encountered a bug when running `MODEL=facebook/opt-1.3b TASK=RTE EPOCH=5 MODE=random_masking LR=1e-2 MASKING_PROB=0.9999 LOCAL_HOST=0 SEED=0 bash run.sh`
```
Traceba…
```
-
### Feature request
This request aims to introduce functionality to delete specific adapter layers integrated with PEFT (Parameter-Efficient Fine-Tuning) within the Hugging Face Transformers librar…
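For context, PEFT already offers a way to remove an entire named adapter (`delete_adapter`), if I'm not mistaken; the sketch below is only a rough illustration of the per-layer variant this request asks for, and it relies on PEFT internals (`BaseTunerLayer.get_base_layer`), so treat the specifics as assumptions rather than the proposed API:

```python
# Hedged sketch of deleting LoRA from *specific* layers rather than the whole
# adapter. This pokes at PEFT internals and leaves the PeftModel's adapter
# bookkeeping untouched; it is an illustration of the requested behaviour,
# not an existing Transformers/PEFT feature.
import torch.nn as nn
from peft.tuners.tuners_utils import BaseTunerLayer


def strip_lora_from_layers(model: nn.Module, layer_names: list[str]) -> None:
    """Replace selected LoRA-wrapped modules with their frozen base layers."""
    for name in layer_names:
        parent_name, _, child_name = name.rpartition(".")
        parent = model.get_submodule(parent_name) if parent_name else model
        child = getattr(parent, child_name)
        if isinstance(child, BaseTunerLayer):
            setattr(parent, child_name, child.get_base_layer())


# Hypothetical usage on a LoRA-wrapped model:
# strip_lora_from_layers(peft_model,
#     ["base_model.model.encoder.layer.0.attention.self.query"])
```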
-
**Is your feature request related to a problem? Please describe.**
As the parameter counts of pre-trained models keep growing, it has become necessary to adopt parameter-efficient fine-tuning methods like…
-
We aim to implement a system that leverages distillation and quantization to create a "child" neural network by combining parameters from two "parent" neural networks. The child network should inherit…
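To make the intent concrete, here is a minimal sketch under simplifying assumptions (both parents share one architecture; the child starts as a weighted average of the parents' parameters and is then refined with a standard distillation loss; quantization would be applied afterwards). All names are illustrative, not part of the proposed system:

```python
# Minimal sketch, assuming both parents share one architecture. The child is
# initialised as a weighted average of the parents' parameters; a standard
# distillation loss against a parent's logits can then refine it, and
# quantization would be applied afterwards.
import torch
import torch.nn as nn
import torch.nn.functional as F


def init_child_from_parents(parent_a: nn.Module, parent_b: nn.Module,
                            child: nn.Module, alpha: float = 0.5) -> nn.Module:
    """Load `child` with alpha * parent_a + (1 - alpha) * parent_b."""
    sd_a, sd_b = parent_a.state_dict(), parent_b.state_dict()
    merged = {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k] for k in sd_a}
    child.load_state_dict(merged)
    return child


def distillation_loss(child_logits: torch.Tensor, parent_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened child and parent outputs."""
    t = temperature
    return F.kl_div(F.log_softmax(child_logits / t, dim=-1),
                    F.softmax(parent_logits / t, dim=-1),
                    reduction="batchmean") * (t * t)
```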
-
I would like to add the following publication:
RepairLLaMA: Efficient Representations and Fine-Tuned Adapters for Program Repair
Automated Program Repair (APR) has evolved significantly with the…
-
Hi,
I came across your paper "Parameter Efficient Fine-Tuning of Pre-trained Code Models for Just-in-Time Defect Prediction" and am trying to reproduce your results with CodeReviewer. Still, I came up…
-
https://github.com/Arnav0400/ViT-Slim/tree/master/GLoRA
_Since I don't have the technical knowledge to judge how meaningful or relevant this is for LoRA in SD, I'm posting it here to bring attention …
-
Original Repository: https://github.com/ml-explore/mlx-examples/
Listing out examples from there which would be nice to have. We don't expect the models to work out the moment they are translated to …
-
- [ ] [LoRA Land: Fine-Tuned Open-Source LLMs that Outperform GPT-4 - Predibase](https://predibase.com/blog/lora-land-fine-tuned-open-source-llms-that-outperform-gpt-4)
# LoRA Land: Fine…