-
Past review comments are sometimes very short and may lack enough context to serve as good examples for finetuning an LLM.
We could prompt GPT-4 with the patch and the provided comment, asking it to exp…
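A rough sketch of how the prompt for such an expansion step might be assembled (the function name and prompt wording here are illustrative assumptions, not part of the original note; the actual GPT-4 call is omitted):

```python
# Hypothetical helper: combine a code patch and a terse review comment
# into a single prompt asking the model to rewrite the comment so it is
# self-contained. Prompt wording is an assumption for illustration.

def build_expansion_prompt(patch: str, comment: str) -> str:
    """Build a prompt that asks an LLM to expand a short review comment
    using the patch as context, without changing the comment's intent."""
    return (
        "You are a senior code reviewer.\n"
        "Given the following patch and a terse review comment on it,\n"
        "rewrite the comment so it is self-contained and explains the\n"
        "reasoning, without changing its intent.\n\n"
        f"Patch:\n{patch}\n\n"
        f"Original comment:\n{comment}\n\n"
        "Expanded comment:"
    )

prompt = build_expansion_prompt(
    "- x = foo()\n+ x = foo() or default",
    "nit: handle None",
)
```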
-
# URL
- https://arxiv.org/abs/2402.17193
# Affiliations
- Biao Zhang, N/A
- Zhongtao Liu, N/A
- Colin Cherry, N/A
- Orhan Firat, N/A
# Abstract
- While large language models (LLMs) often ado…
-
Refers to #16
-
Thank you for your work on finetuning LLMs using LoRA, DoRA, etc. I'm wondering how I can get started finetuning my custom model with torchtune LoRA. Do you have any suggestions?
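For context, the usual torchtune LoRA workflow can be sketched like this (assumes `pip install torchtune`; the config name below is an example and would need to match or be adapted to your model):

```shell
# 1. List the recipes and configs that ship with torchtune
tune ls

# 2. Copy a bundled LoRA config as a starting point, then edit it
#    for your model checkpoint and dataset
tune cp llama3/8B_lora_single_device my_lora_config.yaml

# 3. Launch the single-device LoRA finetuning recipe with your config
tune run lora_finetune_single_device --config my_lora_config.yaml
```

For a truly custom architecture, the config alone is not enough: the model also needs a builder that torchtune can instantiate, so this sketch only covers the CLI side.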
-
I'd like to ask: the pruning, finetuning, and related code currently does not report the model's ASR after the defense is applied, correct?
Or is there a tool I can use for that?
![image](https://github.com/user-attachments/assets/8f551b7f-048a-403b-bf4b-9f04ea738f25)
-
Hi,
Thank you for providing a wonderful repository.
My system resources are quite limited, and I want to use a pre-trained action recognition model and fine-tune it on my own data.
Can you please…
-
May I ask whether it is possible to fine-tune the models on my own dataset? If so, how can I do that?
-
Hi
When we finetune the last layer starting from pre-trained weights, are only the weights of the last layer updated, or are all of the model's weights updated along with the last layer?
Thank…
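For reference, a minimal sketch in plain PyTorch (not the repo's actual code) of freezing everything except the last layer, so that only that layer's weights receive gradient updates:

```python
import torch.nn as nn

# Toy model standing in for a pre-trained network; the layer sizes
# are illustrative.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),  # the "last layer" we want to train
)

# Freeze all parameters, then unfreeze only the last layer.
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True

# Only the trainable parameters need to be handed to the optimizer;
# the frozen weights stay exactly as loaded.
trainable = [p for p in model.parameters() if p.requires_grad]
```

With this setup the optimizer never sees the frozen parameters, so the rest of the model is untouched during training.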
-
Hi @albanie
I would like to know if you have an example of how to perform finetuning starting from the COCO-trained model? Most importantly, could you point me to how to set up the dataset?
…
-
from the conversation with @biancazadrozny and Johannes Schmude:
- Enable more complex embedding networks (convolutions; optionally also pixel shuffle)
- Enable more complex heads (convolutions; opti…
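A rough sketch of what a convolutional head with optional pixel shuffle could look like (plain PyTorch; the module name, channel sizes, and design here are illustrative assumptions, not the project's actual implementation):

```python
import torch
import torch.nn as nn

class ConvPixelShuffleHead(nn.Module):
    """Illustrative head: a conv expands channels by upscale**2, then
    PixelShuffle rearranges them into an upscale-times larger grid."""

    def __init__(self, in_ch: int, out_ch: int,
                 upscale: int = 2, pixel_shuffle: bool = True):
        super().__init__()
        if pixel_shuffle:
            # Produce out_ch * upscale^2 channels for PixelShuffle to unpack.
            self.proj = nn.Conv2d(in_ch, out_ch * upscale * upscale,
                                  kernel_size=3, padding=1)
            self.shuffle = nn.PixelShuffle(upscale)
        else:
            # Plain convolutional head, no spatial upsampling.
            self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
            self.shuffle = nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.proj(x))

head = ConvPixelShuffleHead(in_ch=64, out_ch=3, upscale=2)
y = head(torch.randn(1, 64, 8, 8))  # -> shape (1, 3, 16, 16)
```

An analogous embedding network would go the other direction (e.g. strided convolutions or `nn.PixelUnshuffle` to trade spatial resolution for channels).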