Sachin19 / mucoco

Official Code for the papers: "Controlled Text Generation as Continuous Optimization with Multiple Constraints" and "Gradient-based Constrained Sampling from LMs"
MIT License

How to apply this to models without tie_embedding_weights? #6

Closed syncdoth closed 1 year ago

syncdoth commented 1 year ago

In the paper, it is noted

"Even if the embedding tables are not shared, this loss may be computed and optimized using vectors from the output embedding table as parameters without any significant loss in performance."

But I am not sure how this really works in practice. Does it mean that the embeddings being optimized are taken from the output embedding table only? If so, I'm not sure how you would construct the input to feed into the model (since the parameters are no longer input embeddings), or how the gradient from the current step's loss would reach the embeddings of previous steps, since they no longer depend on them directly.

Sachin19 commented 1 year ago

Hi,

You are right: if the embeddings are not tied, we would have to use the output embedding table, in which case the language model will not provide gradients to the input being generated. Normally, at any update, a token gets signal both from the past tokens (via the usual forward pass) and from the future tokens (via the gradients). If we use the output embeddings, the tokens won't receive gradients from the future tokens.
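Concretely, here is a rough sketch of what I mean (this is not the exact code in this repo; the Pythia checkpoint is just a convenient example of a model with untied tables, and the names are illustrative):

```python
# Rough sketch (not the mucoco implementation): optimizing vectors from the
# *output* embedding table when the input/output tables are untied.
# Pythia is used here only because its two tables are separate.
import torch
from transformers import AutoModelForCausalLM

lm = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")
E_in = lm.get_input_embeddings().weight.detach()    # (V, d) input table
E_out = lm.get_output_embeddings().weight.detach()  # (V, d) output table

seq_len, d = 20, E_out.size(1)
# The optimization variables live in output-embedding space, one vector per position.
e = torch.nn.Parameter(E_out[torch.randint(0, E_out.size(0), (seq_len,))].clone())

# To run the LM forward pass, map each optimized vector to its nearest row of the
# output table and feed the *input* embedding of that token. The argmax is not
# differentiable, so the LM loss at a position cannot send gradients back to the
# output-embedding parameters of earlier positions -- this is the missing
# "gradient from the future" described above.
token_ids = torch.cdist(e, E_out).argmin(dim=-1)    # (seq_len,)
inputs_embeds = E_in[token_ids].unsqueeze(0)        # (1, seq_len, d), detached from e

out = lm(inputs_embeds=inputs_embeds)               # fluency signal from past tokens only
```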

While this affects performance a little, the approach can still generate good, controlled samples, because every token still gets gradients from the control functions, which are defined over the entire sequence and can now be written as functions of the output embeddings. I ran some small-scale experiments in this setting and didn't observe a significant drop in performance (for toxicity avoidance). In general, I have observed that the gradients from the future tokens are not very meaningful, especially when the sequence is long.
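For example, continuing the sketch above, a control function defined directly on the output-embedding parameters still gives every position a gradient (the attribute scorer here is a toy stand-in, not one of the constraint models used in the paper):

```python
# Continuing the sketch above: a (toy) differentiable attribute scorer defined
# directly on the optimized output-embedding vectors. Its gradient reaches every
# position, even though the LM forward pass above is cut off by the argmax.
scorer = torch.nn.Linear(d, 1)  # stand-in for a real constraint model
control_loss = torch.nn.functional.softplus(scorer(e).mean())
control_loss.backward()

print(e.grad.shape)  # (seq_len, d): every position receives a control gradient
```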

Hope this answers your question.

Thanks,