Closed: toooooodo closed this issue 8 months ago.
Hi, thank you for sharing this interesting work. I'm wondering whether the method still requires a fine-tuning step. As I understand it, a unified model is obtained by mixing four different tasks in the pre-training stage, and inference can then be performed directly on the test set without fine-tuning. However, the Code Availability section of the paper states that the "model is available for inference, training and finetuning via the GT4SD library", so I am a little confused about the roles of training, fine-tuning, and inference on the test set.

Hi @toooooodo, thanks for your interest in the work. It's a multitask model, so if you want to use it for the tasks it was trained on, no fine-tuning is needed. If you have a different but related task, you will need to fine-tune.