cozeybozey opened this issue 3 years ago
I think the last point makes the most sense, and that they just misuse the term "discriminator" here. Directly predicting the angle seems like the most sensible approach.
Okay, so in that case what would our generator architecture look like? Should we just do some down-convolutions and then output a scalar?
Should we use something like a tanh output activation and then multiply it by 2pi?
That should work, although multiplying by pi is already enough to cover the full circle. You could also use the inverse of the regular tan function, but note that arctan only outputs between -pi/2 and pi/2; passing two outputs through atan2 would give you the full -pi to pi range.
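As a rough sketch of what I mean (the channel count and the assumption that the rotated features arrive as a [B, C, H, W] tensor are placeholders, not taken from the paper or our code):

```python
import torch
import torch.nn as nn

class AngleRegressor(nn.Module):
    """Small CNN that regresses the rotation angle from rotated features."""

    def __init__(self, in_channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Two outputs so atan2 can map them to the full (-pi, pi] range.
        self.head = nn.Linear(128, 2)

    def forward(self, x):
        out = self.head(self.body(x))
        # Interpret the two outputs as (sin-like, cos-like) components.
        return torch.atan2(out[:, 0], out[:, 1])
```

The tanh-times-pi variant would just be a single-unit head with `torch.tanh(out) * math.pi` instead of the atan2.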
Okay, so I implemented it, but the loss is not going down at all. And then I realised: how is the network supposed to predict something that is entirely random? We choose a random angle to encode our data with, so how is a network supposed to predict this angle based only on a comparison with the actual angle?
The point is that natural images have some orientation that can be recognised. So if you give it the encoded image, it's plausible that, at least in the beginning, the attacker can learn to recognise the rotation angle. What exactly are you passing as input to the model? The rotated features of the converged encoder? Or do you do this before training the encoder?
I guess that makes sense. I give it the rotated features of the converged encoder.
Then I guess it makes sense that it can't figure out the angle anymore. You could do a sanity check by passing it rotated features from a minimally trained version of the encoder.
I have improved the network and now it does actually learn the correct angles, which I guess is actually a bad thing for the privacy of the encoding, but at least the adversary network seems to work.
Hmm, that's interesting. Is there a difference between an untrained and a trained encoder? How well is it predicting the angles?
I haven't tried it with an untrained encoder yet; I will try that next. I think it is predicting the angles quite well, because I use MSELoss and the loss gets down to around 0.04.
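For reference, the training loop I mean is roughly this (just a sketch; `encoder` and `rotate` are placeholder names, not the exact ones in our code):

```python
import math
import torch
import torch.nn as nn

def train_attacker(attacker, encoder, rotate, dataloader, epochs=10, lr=1e-3, device="cpu"):
    """Train the angle regressor on features rotated by known random angles."""
    encoder.eval()                        # the encoder stays frozen
    attacker.to(device).train()
    opt = torch.optim.Adam(attacker.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for images, _ in dataloader:
            images = images.to(device)
            with torch.no_grad():
                feats = encoder(images)
            # One random angle per sample, uniform in (-pi, pi), as in the encoding.
            theta = (torch.rand(images.size(0), device=device) * 2 - 1) * math.pi
            rotated = rotate(feats, theta)        # placeholder for the scheme's rotation
            loss = mse(attacker(rotated), theta)  # note: ignores the 2*pi wrap-around
            opt.zero_grad()
            loss.backward()
            opt.step()
```

One caveat with plain MSE on raw angles is that -pi and pi count as maximally wrong even though they are the same rotation; regressing (sin theta, cos theta) instead would avoid that.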
@deZakelijke For inversion attack 1 they say that they use a discriminator to obtain the most likely angle. However, we do not understand how this discriminator works. We think it could be one of two things:
1. The discriminator is exactly the same as the discriminator that we use for training the network, meaning that we give it real and fake features and it has to score them. The real features are just the original a, and the fake features are x rotated back with k-1 random angles. For the attack itself we rotate x back with k-1 random angles, the discriminator gives a score to each resulting a_prime, and we pick the a_prime (and the corresponding angle) with the best score (see the sketch after this list). The problem with this is that the "most likely" angle is then found entirely by chance, because we only evaluate a fixed number of random angles, which seems strange.
2. The discriminator could instead be there to train an entirely new generator. This generator takes x and outputs the most likely theta; the discriminator then tries to distinguish between the original a and a_prime, obtained by rotating x back with that predicted angle. The problem with this is that they never mention a generator, so we don't know whether they actually used one, nor what its architecture would look like. It also raises the question of why you would use a discriminator at all when you could just train the angle-predicting network directly on the true angle, i.e. compute the loss from the difference between the predicted angle and the true angle. That approach seems a lot simpler.
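To make the first interpretation concrete, the attack would look roughly like this (just a sketch; `discriminator` and `rotate` stand in for whatever the implementation actually uses):

```python
import math
import torch

def recover_angle(discriminator, rotate, x, num_candidates=64):
    """Score candidate de-rotations of x and return the best-scoring angle."""
    # Candidates could be random (as in the k-1 description) or a uniform grid.
    thetas = torch.linspace(-math.pi, math.pi, num_candidates)
    scores = []
    with torch.no_grad():
        for theta in thetas:
            a_prime = rotate(x, -theta)                   # rotate the features back
            scores.append(discriminator(a_prime).mean())  # "how real does this look?"
    best = torch.stack(scores).argmax()
    return thetas[best]
```

This also makes the objection above visible: the recovered angle can only ever be as precise as the candidate set you happen to score.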