FrankWork / fudan_mtl_reviews

TensorFlow implementation of the paper `Adversarial Multi-task Learning for Text Classification`

Adv loss is not clearly described in the paper? #3

Open xljhtq opened 6 years ago

xljhtq commented 6 years ago

Hi, I want to know how the adv loss is different from the domain loss. In other words, the adv loss in the paper "Adversarial Multi-task Learning for Text Classification" is not described clearly, so I want to know what the equation is.

FrankWork commented 6 years ago

`total_loss = task_loss + adv_loss + diff_loss + l2_loss`
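For reference, my reading of the paper is that the adversarial part is a min-max objective: the task discriminator $D$ is trained to predict which task an example came from using the shared features $E(x)$, while the shared encoder is trained to fool it (symbols follow the paper; $d_i^k$ is the task label of sentence $i$ from task $k$, $\theta_s$ are the shared-encoder parameters, $\theta_D$ the discriminator parameters):

$$
L_{adv} = \min_{\theta_s}\left(\lambda \max_{\theta_D}\left(\sum_{k=1}^{K}\sum_{i=1}^{N_k} d_i^k \log\big[D(E(x^k))\big]\right)\right)
$$

In the code this min-max is realized as a single minimization of total_loss, with the maximization handled by gradient reversal (see below).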

xljhtq commented 6 years ago

@FrankWork In your code, `total_loss = task_loss + adv_loss + diff_loss + l2_loss`, so minimizing the total_loss will also decrease the adv_loss. But in reality we should let the adv_loss increase in order to get task-invariant shared features. So should I maximize the adv_loss or minimize it?

FrankWork commented 6 years ago

There is a function flip_gradient that maximizes the adv_loss: it acts as the identity on the forward pass but negates the gradient on the backward pass, so minimizing total_loss still trains the discriminator normally while the shared encoder is pushed to fool it.
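For reference, a minimal TF1-style sketch of such a gradient-reversal op (this is the common DANN-style implementation; the exact code in this repo may differ):

```python
import tensorflow as tf  # TF1.x graph-mode API assumed

class FlipGradientBuilder(object):
    """Identity on the forward pass; negates (and scales) the gradient on the backward pass."""

    def __init__(self):
        self.num_calls = 0  # each call needs a uniquely named gradient override

    def __call__(self, x, l=1.0):
        grad_name = "FlipGradient%d" % self.num_calls

        @tf.RegisterGradient(grad_name)
        def _flip_gradients(op, grad):
            # Reverse (and optionally scale) the gradient flowing back to the shared encoder.
            return [tf.negative(grad) * l]

        g = tf.get_default_graph()
        with g.gradient_override_map({"Identity": grad_name}):
            y = tf.identity(x)  # forward: identity; backward: flipped gradient

        self.num_calls += 1
        return y

flip_gradient = FlipGradientBuilder()

# usage: pass the shared representation through flip_gradient before the discriminator,
# e.g. adv_logits = discriminator(flip_gradient(shared_repr, l=0.05))
```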

ammaarahmad1999 commented 3 years ago

Hi, do you know the equivalent of the flip_gradient function in PyTorch?
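A common PyTorch equivalent is a custom autograd.Function that negates the gradient on the backward pass; a minimal sketch (names like GradReverse and the lambd scale are illustrative, not from this repo):

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity on the forward pass; negates (and scales) the gradient on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # One gradient per forward input; lambd is a plain float, so return None for it.
        return grad_output.neg() * ctx.lambd, None

def flip_gradient(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# usage: adv_logits = discriminator(flip_gradient(shared_repr, lambd=0.05))
```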