Open miaodog opened 2 months ago
https://github.com/jxmorris12/vec2text/blob/master/vec2text/models/inversion_from_logits.py#L138
If we use `frozen_embeddings`, why not use the input variable `attention_mask` directly instead of creating a new attention mask? Is there any concern with reusing it?
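For context, here is a minimal sketch of the shape issue the question seems to be about. This is not the repo's actual code; the names (`proj`, `inputs_embeds`) and shapes are hypothetical. The point is that a frozen embedding vector gets projected into a pseudo-sequence whose length differs from the original token sequence, so the input `attention_mask` would no longer line up:

```python
import torch

# Hypothetical shapes for illustration: the input attention_mask covers the
# original token sequence, while the frozen embedding is projected to a
# pseudo-sequence of a different, fixed length.
batch_size, input_len, seq_len, hidden = 2, 7, 16, 32

attention_mask = torch.ones(batch_size, input_len)   # mask over input tokens
frozen_embeddings = torch.randn(batch_size, hidden)  # one vector per example

# Stand-in projection turning each embedding into a pseudo-sequence
# that the encoder attends over.
proj = torch.nn.Linear(hidden, seq_len * hidden)
inputs_embeds = proj(frozen_embeddings).reshape(batch_size, seq_len, hidden)

# A fresh all-ones mask matching the projected sequence length; the original
# attention_mask (length input_len) would not match inputs_embeds.
new_attention_mask = torch.ones(
    inputs_embeds.shape[0], inputs_embeds.shape[1]
)

assert new_attention_mask.shape == (batch_size, seq_len)
assert attention_mask.shape[1] != new_attention_mask.shape[1]
```

If that reading is right, reusing the input `attention_mask` would raise a shape mismatch (or silently mask the wrong positions), which may be the concern behind creating a new one.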