
PyTorch-PCGrad

This repository provides a reimplementation of Gradient Surgery for Multi-Task Learning in PyTorch 1.6.0.

Setup

Install the required packages via:

pip install -r requirements.txt

Usage

import torch
import torch.nn as nn
import torch.optim as optim
from pcgrad import PCGrad

# wrap your favorite optimizer
optimizer = PCGrad(optim.Adam(net.parameters())) 
losses = [...] # a list of per-task losses
assert len(losses) == num_tasks
optimizer.pc_backward(losses) # compute per-task gradients and apply PCGrad's gradient surgery
optimizer.step()  # apply gradient step
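
For context, here is a hedged, self-contained sketch of how the snippet above might fit into a full training step. The two-head TwoTaskNet model and the random data are hypothetical placeholders, not part of this repository:

import torch
import torch.nn as nn
import torch.optim as optim
from pcgrad import PCGrad

# hypothetical network: a shared trunk with one output head per task
class TwoTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
        self.head1 = nn.Linear(32, 1)
        self.head2 = nn.Linear(32, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.head1(h), self.head2(h)

net = TwoTaskNet()
optimizer = PCGrad(optim.Adam(net.parameters()))
criterion = nn.MSELoss()

x = torch.randn(16, 10)   # dummy input batch
y1 = torch.randn(16, 1)   # dummy targets for task 1
y2 = torch.randn(16, 1)   # dummy targets for task 2

pred1, pred2 = net(x)
losses = [criterion(pred1, y1), criterion(pred2, y2)]  # one loss per task
optimizer.pc_backward(losses)  # backward pass with gradient surgery
optimizer.step()               # apply the projected gradients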

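Under the hood, pc_backward implements the paper's gradient surgery: when the gradients of two tasks conflict (their dot product is negative), each is projected onto the normal plane of the other before the per-task gradients are combined. Below is a simplified, standalone sketch of that projection rule, assuming gradients flattened to 1-D tensors; it is an illustration of the idea, not the repository's actual code:

import torch

def project_conflicting(g_i, g_j):
    # Project g_i onto the normal plane of g_j when the two conflict.
    # Simplified illustration of the PCGrad projection step; g_i and g_j
    # are assumed to be 1-D (flattened) gradient tensors.
    dot = torch.dot(g_i, g_j)
    if dot < 0:  # conflicting gradients
        g_i = g_i - (dot / g_j.norm() ** 2) * g_j
    return g_i
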
Citation

Please cite as:

@article{yu2020gradient,
  title={Gradient surgery for multi-task learning},
  author={Yu, Tianhe and Kumar, Saurabh and Gupta, Abhishek and Levine, Sergey and Hausman, Karol and Finn, Chelsea},
  journal={arXiv preprint arXiv:2001.06782},
  year={2020}
}

@misc{Pytorch-PCGrad,
  author = {Wei-Cheng Tseng},
  title = {WeiChengTseng/Pytorch-PCGrad},
  url = {https://github.com/WeiChengTseng/Pytorch-PCGrad.git},
  year = {2020}
}