weihui1308 / HOTCOLDBlock

Official PyTorch implementation for our AAAI 2023 paper "HOTCOLD Block: Fooling Thermal Infrared Detectors with a Novel Wearable Design"
Apache License 2.0

About ASR in val_patch.py #10

Closed yyhyyh17 closed 10 months ago

yyhyyh17 commented 10 months ago

In line 291: print('ASR------->', 1-(tp[0]/545)). Does 545 refer to the total number of images or the total number of ground-truth boxes?

weihui1308 commented 10 months ago

Sorry for the confusion. 545 is the TP value of the detector when there is no attack. That is, tp[0] without attack. When you test on a new dataset, you need to replace it.
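
To make the computation explicit, the ASR in val_patch.py boils down to something like the sketch below (the function and variable names are illustrative, not the script's actual ones):

    # tp_clean: the detector's TP count on the test set without any attack (545 for this split)
    # tp_attack: the TP count with the adversarial patch applied
    def attack_success_rate(tp_attack, tp_clean):
        # fraction of originally detected targets that the attack suppresses
        return 1.0 - tp_attack / tp_clean

    print(attack_success_rate(tp_attack=120, tp_clean=545))  # example numbers only, ~0.78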

yyhyyh17 commented 10 months ago

> Sorry for the confusion. 545 is the TP value of the detector when there is no attack. That is, tp[0] without attack. When you test on a new dataset, you need to replace it.

Thank you for your reply, great work! I split the FLIR dataset with a 0.9:0.1 train-val ratio and get 204 labels (tp[0]). I think maybe you split it with a 0.75:0.25 or 0.8:0.2 ratio. Also, I want to ask how the manual-random block attack (MR) in your paper is designed. Is it similar to what is described in Figure 3?

weihui1308 commented 10 months ago

A1: We filter the original dataset to better fit the patch-based adversarial attack, with two conditions: (i) the images contain the "person" category, and (ii) the bodies of persons in the images have a height of more than 120 pixels. Finally, 1,255 images are available, of which 878 form the training set with 1,366 eligible "person" labels and 377 form the testing set with 598 eligible "person" labels.

A2: Compared to the random block attack (R), the manual-random block attack (MR) incorporates manual adjustments to address potential issues associated with complete randomness. The concern with fully random patches is the potential for overlap between them, which undoubtedly diminishes the effectiveness of the attack. Therefore, we deliberately adjust the randomized positions to prevent overlapping.
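
As a rough illustration of A2 only (not the repo's implementation), the "adjust random positions to prevent overlapping" idea behind the MR baseline can be sketched as follows, with each patch re-sampled inside the person box until it does not overlap an already placed one:

    import random

    def boxes_overlap(a, b):
        # a, b are (x1, y1, x2, y2) boxes in pixels
        return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

    def place_patches(person_box, patch_size, num_patches, max_tries=100):
        x1, y1, x2, y2 = person_box
        placed = []
        for _ in range(num_patches):
            for _ in range(max_tries):
                px = random.randint(x1, x2 - patch_size)
                py = random.randint(y1, y2 - patch_size)
                cand = (px, py, px + patch_size, py + patch_size)
                if not any(boxes_overlap(cand, p) for p in placed):
                    placed.append(cand)
                    break
        return placed

    # example: three non-overlapping 30x30 patches inside a person box
    print(place_patches(person_box=(50, 40, 170, 280), patch_size=30, num_patches=3))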

yyhyyh17 commented 10 months ago

Thank you for answering! Sorry, I have another question. In Table 1 of your paper, I think one value of "Num of Patches" can produce many different shapes. For example, in the case m = 2, there are 36 different shapes. In your code, I noticed that you just get a random shape, not a regular shape for a particular m value as in Figure 3. In Table 1, does each "Num of Patches" correspond to a particular shape, as in Figure 3, or do you generate many random shapes and record the best result among them? Thank you!

weihui1308 commented 10 months ago

The shape of the patch is first randomly initialised and then used as a parameter for the optimisation. The shape is fixed after the optimisation is complete.

In addition, "Num of Patches" is not the number of black blocks in the nine-square grid but the number of patches; see Figure 5e, where "Num of Patches" is 4.
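
For intuition only (this is an assumption about the setup, not the repo's actual code), a random initialisation of the patch shape over the nine-square grid could look like the sketch below, with the resulting binary mask then carried as an optimisation parameter and frozen once optimisation finishes:

    import torch

    def random_shape_mask(grid=3):
        # each cell of the nine-square grid is independently switched on or off
        return (torch.rand(grid, grid) > 0.5).float()

    shape = random_shape_mask()
    print(shape)  # e.g. a 3x3 binary mask such as [[1,0,1],[0,1,1],[1,0,0]]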

yyhyyh17 commented 10 months ago

Thank you for your answers! And in pso.py:

    for iteration in range(self.max_iterations):
        # --- Set PBest
        for particle in self.swarm:
            fitness_candidate = self.fitness_function.evaluate(particle.position)
            if particle.pbest_value > fitness_candidate:
                particle.pbest_value = fitness_candidate
                particle.pbest_position[0] = particle.position[0].clone()
                particle.pbest_position[1] = particle.position[1].clone()
        # --- Set GBest
        for particle in self.swarm:
            best_fitness_candidate = self.fitness_function.evaluate(particle.position)
            if self.gbest_value > best_fitness_candidate:
                self.gbest_value = best_fitness_candidate
                self.gbest_position[0] = particle.position[0].clone()
                self.gbest_position[1] = particle.position[1].clone()
                self.gbest_particle = copy.deepcopy(particle)

Why is GBest updated in a separate loop instead of in the same loop as PBest? Can I update them together in one loop? This could save a lot of time.

weihui1308 commented 10 months ago

This question involves the PSO optimisation algorithm. I recommend reading more about it.

yyhyyh17 commented 10 months ago

> This question involves the PSO optimisation algorithm. I recommend reading more about it.

I know the PSO optimization algorithm and have studied other PSO implementations. Maybe I didn't express my question clearly.

    for iteration in range(self.max_iterations):
        # --- Set PBest and GBest
        for particle in self.swarm:
            fitness_candidate = self.fitness_function.evaluate(particle.position)
            if particle.pbest_value > fitness_candidate:
                particle.pbest_value = fitness_candidate
                particle.pbest_position[0] = particle.position[0].clone()
                particle.pbest_position[1] = particle.position[1].clone()
                if self.gbest_value > fitness_candidate:
                    self.gbest_value = fitness_candidate
                    self.gbest_position[0] = particle.position[0].clone()
                    self.gbest_position[1] = particle.position[1].clone()
                    self.gbest_particle = copy.deepcopy(particle)

If I change the code as above, is it correct? I want to do this because the evaluate function costs a lot of time.

weihui1308 commented 10 months ago

I think it's OK. You can experiment with it; it should improve the efficiency of the algorithm.

By the way, if this modification is correct, please give feedback. Thanks.
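
For anyone who wants to try this, below is a minimal, self-contained sanity check (a toy Particle class and fitness function, not the repo's code) that compares the original two-loop update with the merged single-loop version on identical swarms. Positions are stored as a list of two tensors to mirror the snippets above:

    import copy
    import torch

    class Particle:
        def __init__(self):
            # position is a list of two tensors, as in the pso.py snippet
            self.position = [torch.rand(3), torch.rand(3)]
            self.pbest_position = [p.clone() for p in self.position]
            self.pbest_value = float('inf')

    def fitness(position):
        # toy objective: sum of squares over both position tensors
        return float((position[0] ** 2).sum() + (position[1] ** 2).sum())

    def two_loop_update(swarm, gbest_value):
        # original scheme: each particle is evaluated twice per iteration
        for particle in swarm:
            f = fitness(particle.position)
            if particle.pbest_value > f:
                particle.pbest_value = f
                particle.pbest_position = [p.clone() for p in particle.position]
        for particle in swarm:
            f = fitness(particle.position)
            if gbest_value > f:
                gbest_value = f
        return gbest_value

    def single_loop_update(swarm, gbest_value):
        # merged scheme: one evaluation per particle, GBest check nested in the PBest check
        for particle in swarm:
            f = fitness(particle.position)
            if particle.pbest_value > f:
                particle.pbest_value = f
                particle.pbest_position = [p.clone() for p in particle.position]
                if gbest_value > f:
                    gbest_value = f
        return gbest_value

    torch.manual_seed(0)
    swarm_a = [Particle() for _ in range(10)]
    swarm_b = copy.deepcopy(swarm_a)
    gbest_a = gbest_b = float('inf')
    for _ in range(5):
        # perturb both swarms identically, then run one update per scheme
        for pa, pb in zip(swarm_a, swarm_b):
            noise = [torch.randn(3) * 0.1, torch.randn(3) * 0.1]
            pa.position = [p + n for p, n in zip(pa.position, noise)]
            pb.position = [p + n for p, n in zip(pb.position, noise)]
        gbest_a = two_loop_update(swarm_a, gbest_a)
        gbest_b = single_loop_update(swarm_b, gbest_b)
    assert gbest_a == gbest_b
    print("both schemes give the same GBest:", gbest_a)

The nesting is safe because GBest is the minimum over all particles' PBest values, so a fitness that fails to improve a particle's PBest can never improve GBest; the merged loop simply halves the number of evaluate calls.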