Hi Micha,

Right now we would like to clarify a few things because of time constraints:

Since we are not sure we will finish everything before the deadline, which is more advisable: to focus on one dataset and one architecture and reproduce those results as closely as possible, or to cover more architectures and datasets and reproduce as many results as possible? We ask because we have tried multiple versions of WGANs, but the one proposed by the authors does not seem to work. It is now unclear whether we should keep focusing on that one architecture and try to fix it, or move on.

If the authors' generator (encoder) architecture is not working well enough, is it reasonable to modify the encoder architecture to get better results on the obfuscation task? Wouldn't that go against what the paper does? Or is it advisable to leave it as it is and report the results as poorly reproducible?

Looking forward to your reply! @deZakelijke

I think it would be better to try multiple datasets if you're not able to reproduce the scores.

In principle, yes, you could modify their architecture somewhat to get the model to perform as reported. But if that would cost too much time, it would be better to leave it as-is and report that using exactly their architecture does not give the reported scores.
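On the WGAN that refuses to train: one common culprit in the original WGAN formulation is the weight-clipping step, which is very sensitive to the clipping constant; the gradient-penalty variant (WGAN-GP) is a frequent fix worth checking before abandoning the architecture. A minimal sketch of the two critic-side ingredients, with illustrative function names (none of this is taken from the paper):

```python
from statistics import mean

def critic_loss(f_real, f_fake):
    # Wasserstein critic objective: maximise mean(f(real)) - mean(f(fake)),
    # i.e. minimise the negated difference.
    return mean(f_fake) - mean(f_real)

def clip_weights(weights, c=0.01):
    # Original WGAN: enforce the Lipschitz constraint by clipping every
    # critic weight into [-c, c]. Training is notoriously sensitive to c.
    return [max(-c, min(c, w)) for w in weights]

def gradient_penalty(grad_norm, lam=10.0):
    # WGAN-GP alternative: penalise the critic's gradient norm at
    # interpolated samples toward 1 instead of clipping the weights;
    # this usually trains more stably than clipping.
    return lam * (grad_norm - 1.0) ** 2
```

If swapping clipping for the gradient penalty (or tuning `c`) still does not recover the reported scores, that itself is a reportable reproducibility finding.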