Open · adnansirajrakin opened this issue 6 years ago
You can replace the last line with `onehotattackimg = self.encoder.onehot(attackimg)` if you want to generate the one-hot encoding. However, if your model is trained on one-hot encoded images, I think you should also remove this line:
z0, z1, z2 = (torch.cumsum(z, dim=1) for z in [z0, z1, z2])
In the paper they report the accuracy of this encoding; however, thermometer encoding performs better than that.
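For reference, a minimal sketch of the relationship being described: the thermometer code is just the cumulative sum of the one-hot code along the level dimension, so dropping the `torch.cumsum` line leaves plain one-hot encoding. The function names, the number of levels, and the tensor shapes below are illustrative assumptions rather than the repository's actual API; only the `torch.cumsum(..., dim=1)` step mirrors the line quoted above.

```python
import torch
import torch.nn.functional as F

def one_hot_encode(x, levels=16):
    # x: float tensor in [0, 1] with shape (N, 1, H, W), e.g. MNIST images
    idx = torch.clamp((x * levels).long(), max=levels - 1)   # (N, 1, H, W) quantization level per pixel
    onehot = F.one_hot(idx.squeeze(1), num_classes=levels)   # (N, H, W, levels)
    return onehot.permute(0, 3, 1, 2).float()                # (N, levels, H, W)

def thermometer_encode(x, levels=16):
    # the thermometer code is the cumulative sum of the one-hot code
    # over the level dimension (dim=1 here), like the torch.cumsum line above
    return torch.cumsum(one_hot_encode(x, levels), dim=1)

x = torch.rand(2, 1, 28, 28)
print(one_hot_encode(x).shape)      # torch.Size([2, 16, 28, 28]) -- exactly one 1 per pixel
print(thermometer_encode(x).shape)  # same shape, but the 1s accumulate across levels
```

Seen this way, removing the cumsum step from the attack-generation code hands the attack the same one-hot representation a one-hot-trained model expects.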
Thank you, but one-hot encoding performs better for MNIST, as they reported. Right now I am trying it on MNIST.
If I delete the last line from the attack generation, does it become only one-hot encoding? Did they also report the accuracy of this (non-thermometer) encoding in the paper?