-
How to fine-tune Inf-34B? We need examples or tutorials. Cannot wait to use this model for development 🥰
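Until an official tutorial lands, here is a minimal LoRA fine-tuning sketch, assuming Inf-34B loads through Hugging Face `transformers` with `peft`; the model id, dataset, and hyperparameters below are placeholders rather than recommended settings:

```python
# Not an official tutorial -- a hypothetical LoRA fine-tuning sketch.
# The model id, dataset, and hyperparameters are placeholders.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "infly/INF-34B-Base"  # placeholder -- check the model card for the real id
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto",
    trust_remote_code=True)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Any tokenized causal-LM dataset works; this uses a tiny public sample.
data = load_dataset("Abirate/english_quotes", split="train[:200]")
data = data.map(lambda x: tokenizer(x["quote"], truncation=True, max_length=512),
                batched=True, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="inf34b-lora", num_train_epochs=1,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           learning_rate=2e-4, bf16=True, logging_steps=10),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
model.save_pretrained("inf34b-lora")  # saves only the small adapter weights
```

Note that `save_pretrained` on a PEFT model stores only the adapter, which is later loaded on top of the base model.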
-
Hi, thanks for your wonderful work.
I am struggling to use my LoRA-tuned model.
I followed these steps:
1. Fine-tuning with LoRA
- Undi95/Meta-Llama-3-8B-Instruct-hf as the base model
- llama3 …
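For reference, a minimal sketch of how a LoRA adapter is typically loaded for inference with `peft`; the adapter path is a placeholder for wherever the fine-tuned weights were saved:

```python
# A minimal sketch of applying a LoRA adapter at inference time with peft.
# "my-lora-adapter" is a placeholder for your adapter directory.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Undi95/Meta-Llama-3-8B-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto")

# Load the adapter on top of the frozen base weights...
model = PeftModel.from_pretrained(base, "my-lora-adapter")
# ...and optionally merge it in, so the result behaves like a plain model.
model = model.merge_and_unload()

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```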
-
The README in the train folder says we can try starting with a small dataset (50 utterances). I prepared 50 minutes of data, and the result was bad.
The training lasted 2200 epochs, and as a …
-
Dear Authors,
Thank you so much for providing this amazing work. I was able to reproduce the results for lung tumor segmentation on the provided preprocessed dataset. However, I want to fine-tune the lun…
-
### Checks
- [X] This template is only for usage issues encountered.
- [X] I have thoroughly reviewed the project documentation but couldn't find information to solve my problem.
- [X] I have sea…
-
May I ask for the hyperparameters used for LLaMA fine-tuning? The learning rate, batch size, EWC coefficient (λ), and the LoRA rank and scaling coefficient would be helpful.
Thank you!
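For readers unfamiliar with those knobs, a generic sketch of where each one enters training; the values shown are placeholders, not the paper's settings:

```python
# Placeholder values, not the paper's settings: r and lora_alpha size the
# LoRA adapter, and lam is the λ weighting the EWC quadratic penalty.
import torch
from peft import LoraConfig

lora = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")

def ewc_penalty(model, fisher, ref_params, lam=1.0):
    """EWC adds lam/2 * sum_i F_i * (theta_i - theta_i*)^2 to the task loss,
    pulling parameters back toward the pre-fine-tuning values theta*."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - ref_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# total_loss = task_loss + ewc_penalty(model, fisher, ref_params, lam)
```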
-
It seems the input size of RealESRGAN_x4plus.pth is 1024; however, if my input images are that size, I get a CUDA out-of-memory error. How is this supposed to work using the sub-image crops?
I need to have sm…
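A common workaround, assuming the `realesrgan` Python package from the Real-ESRGAN repo, is tiled inference: the image is upscaled in overlapping crops and stitched back together, so the full-resolution input never goes through the network at once. Paths and tile size below are illustrative:

```python
# Tiled inference sketch with the realesrgan package; tune tile down
# further if you still hit OOM.
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                num_block=23, num_grow_ch=32, scale=4)  # x4plus architecture
upsampler = RealESRGANer(
    scale=4,
    model_path="RealESRGAN_x4plus.pth",
    model=model,
    tile=256,     # each 256x256 crop is upscaled separately
    tile_pad=10,  # overlap between crops to hide seams
    half=True)    # fp16 roughly halves GPU memory

img = cv2.imread("input.png", cv2.IMREAD_COLOR)  # BGR, any resolution
output, _ = upsampler.enhance(img, outscale=4)
cv2.imwrite("output.png", output)
```

The repo's bundled `inference_realesrgan.py` script exposes the same behavior through its `--tile` flag.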
-
Hi – thank you for open-sourcing this research project! Do you have plans to release code for fine-tuning these models?
-
In your paper, you mention fine-tuning models on the data generated in this repo. Would it be possible to make the model fine-tuning and evaluation code available?
-
Try to improve the baseline for Solidity generation.