Thank you for your interest. Unfortunately, for the probing experiments we evaluated on the test data directly after each training run without saving the trained classifiers. The classifier is a two-layer MLP, shown below, which can easily be trained with PyTorch once the data is prepared.
```python
import torch

class Net(torch.nn.Module):
    def __init__(self, input_size: int = 64 * 64, num_classes: int = 10):
        super(Net, self).__init__()
        self.input_size = input_size
        # Hidden width scales with the input size but never drops below 2048.
        self.hidden_size = max(input_size // 8, 2048)
        self.mlp = torch.nn.Sequential(
            torch.nn.Flatten(),
            torch.nn.Dropout(0.1),
            torch.nn.Linear(in_features=self.input_size, out_features=self.hidden_size),
            torch.nn.Tanh(),
            torch.nn.Linear(in_features=self.hidden_size, out_features=num_classes),
        )

    def forward(self, x):
        return self.mlp(x)
```
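In case it helps, below is a minimal sketch of how such a probe could be trained and then evaluated on the test split right away, as described above. The feature/label tensors (`train_feats`, `train_labels`, `test_feats`, `test_labels`) and the hyperparameters are placeholders for illustration, not the exact setup used in the paper.

```python
import torch

def train_probe(train_feats, train_labels, test_feats, test_labels,
                num_classes=10, epochs=20, batch_size=256, lr=1e-3):
    # Placeholder training loop for the probing classifier defined above.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = Net(input_size=train_feats[0].numel(), num_classes=num_classes).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()

    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(train_feats, train_labels),
        batch_size=batch_size, shuffle=True)

    for _ in range(epochs):
        model.train()
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x.to(device)), y.to(device))
            loss.backward()
            optimizer.step()

    # Evaluate on the test data immediately instead of saving the classifier.
    model.eval()
    with torch.no_grad():
        preds = model(test_feats.to(device)).argmax(dim=1).cpu()
    return (preds == test_labels).float().mean().item()
```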
Hello,
thanks for the very interesting work 'Towards Understanding Cross and Self-Attention in Stable Diffusion for Text-Guided Image Editing'. Do you also plan to release the code, trained classifiers, etc. for performing the probing analysis?
Thanks in advance for your support.