Closed HelloWorldLTY closed 7 months ago
In our re-implementation, $\alpha$ will also affect the update of the task-specific parameters, following the official implementation of Aligned-MTL.
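For intuition, here is a minimal NumPy sketch of the idea behind Aligned-MTL's $\alpha$: rescale the components of the gradient matrix $G$ so they are well-conditioned, then combine the columns into per-task weights. All names are illustrative (this is not LibMTL's API), and the rescaling shown is a simplified reading of the paper's balance transformation, not the exact official implementation.

```python
import numpy as np

def aligned_mtl_alpha(G):
    """Simplified sketch of Aligned-MTL-style weights.

    G: (d, T) matrix whose columns are per-task gradients of the
    shared parameters. We equalize the singular values of G via its
    Gram matrix, so every principal component has the same scale.
    """
    M = G.T @ G                                # (T, T) Gram matrix
    vals, vecs = np.linalg.eigh(M)             # vals = squared singular values
    vals = np.clip(vals, 1e-12, None)          # guard against degenerate tasks
    sigma = np.sqrt(vals)
    # Balance transform: rescale each component to the smallest scale,
    # so G @ B has all singular values equal to sigma.min().
    B = vecs @ np.diag(sigma.min() / sigma) @ vecs.T
    # Uniform task preferences -> per-task weights alpha = B @ 1
    return B.sum(axis=1)

rng = np.random.default_rng(0)
G = rng.standard_normal((10, 3))               # toy d=10, T=3 example
alpha = aligned_mtl_alpha(G)
g_shared = G @ alpha                           # aggregated shared-parameter gradient
```

The point of the sketch is the final line: the shared parameters are updated with the single aggregated direction $G\alpha$, while (per the reply above) $\alpha$ also scales the task-specific updates.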
OK, thanks a lot. I also have a question about multi-task learning: if my model performs well on 2 out of 3 tasks but very badly on the third (say, Pearson correlation < 0), do you think gradient alignment can significantly improve performance in that case? Thanks.
I think you can try it; it may be useful.
Thanks. Furthermore, can we directly use a callback to set up early stopping during training? Thanks.
No, LibMTL does not support callbacks.
Closed as no further updates.
Hi, thanks for your great work. I have a quick question about the principle of this paper (Aligned-MTL):
https://openaccess.thecvf.com/content/CVPR2023/papers/Senushkin_Independent_Component_Alignment_for_Multi-Task_Learning_CVPR_2023_paper.pdf
It seems that your implementation of Algorithm 1 returns $\alpha$, and my understanding is that we can then directly replace the gradient matrix $G$ with $G\alpha$. Is that different from your diagram?
If my understanding is incorrect, how should I apply this component to my own dataset? It is a tabular one with just simple regression tasks.
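To make the question concrete, here is a hedged PyTorch sketch of the setup I have in mind for a tabular 3-task regression model: stack per-task gradients of the shared parameters into a matrix $G$, then aggregate them as $G\alpha$. The model and weights are placeholders (uniform $\alpha$ here), not LibMTL's actual API.

```python
import torch

torch.manual_seed(0)

# Toy tabular model: a shared trunk and 3 regression heads.
shared = torch.nn.Linear(8, 16)
heads = torch.nn.ModuleList([torch.nn.Linear(16, 1) for _ in range(3)])

x = torch.randn(32, 8)                          # 32 rows, 8 features
ys = [torch.randn(32, 1) for _ in range(3)]     # one target per task

# Columns of G are the per-task gradients of the shared parameters.
feats = shared(x)
losses = [torch.nn.functional.mse_loss(h(feats), y) for h, y in zip(heads, ys)]
cols = []
for loss in losses:
    grads = torch.autograd.grad(loss, shared.parameters(), retain_graph=True)
    cols.append(torch.cat([g.reshape(-1) for g in grads]))
G = torch.stack(cols, dim=1)                    # shape (d, T) = (144, 3)

alpha = torch.ones(3) / 3                       # placeholder; Aligned-MTL would compute this
g_shared = G @ alpha                            # aggregated update direction for the trunk
```

If this matches the intended usage, then for a real run one would replace the placeholder `alpha` with the output of Algorithm 1 and write `g_shared` back into the shared parameters' `.grad` fields before the optimizer step.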