VDIGPKU / T-SEA

[CVPR 2023] T-SEA: Transfer-based Self-Ensemble Attack on Object Detection

Evaluation issues after training custom datasets #6

Closed · zhixuanjiaxin closed this issue 1 year ago

zhixuanjiaxin commented 1 year ago

In order to evaluate the methods in the paper, the corresponding detection label files for each detector are required. The labels for the three pedestrian datasets used in the source code are all provided directly (as shown in the figure below). For a custom dataset, there are no corresponding detection labels for each detector, so the evaluation cannot be carried out.

I would like to ask how to generate the label files used to evaluate each detector in the experiments, and whether there is an object detection model library that can train the dataset from scratch, without pre-trained weights, to obtain the final label files. I have looked into the relevant object detection libraries that require pre-trained models, and not all of the detection models used in the experiments are included. So please advise on how to obtain label files for a custom dataset for each evaluated detector.

[screenshot: the provided per-detector label files]

ziyannchen commented 1 year ago

Hi, thanks for reaching out. To evaluate a custom dataset, please check this script and the README to generate label files in the correct format. The label-generating process does require a pre-trained target detection model; we have provided model APIs for the models shown in your figure.
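Conceptually, the label-generating pass looks like this minimal sketch (illustrative only: it uses torchvision's Faster R-CNN as a stand-in detector and assumes a per-image `class conf x1 y1 x2 y2` text format, which may differ from what gen_det_label.py actually emits):

```python
# Illustrative sketch only -- not the actual gen_det_label.py.
# Stand-in detector: torchvision's Faster R-CNN (pre-trained on COCO).
# Assumed output: one .txt per image, "class conf x1 y1 x2 y2" per detection.
import os
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

PERSON = 1  # COCO class index for 'person' in torchvision detectors
img_dir, label_dir = "data/custom/images", "data/custom/labels"  # hypothetical paths
os.makedirs(label_dir, exist_ok=True)

for name in os.listdir(img_dir):
    img = convert_image_dtype(read_image(os.path.join(img_dir, name)), torch.float)
    with torch.no_grad():
        pred = model([img])[0]  # dict with 'boxes', 'labels', 'scores'
    lines = []
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if label == PERSON and score > 0.5:  # keep confident person detections
            x1, y1, x2, y2 = box.tolist()
            lines.append(f"person {score:.4f} {x1:.1f} {y1:.1f} {x2:.1f} {y2:.1f}")
    with open(os.path.join(label_dir, os.path.splitext(name)[0] + ".txt"), "w") as f:
        f.write("\n".join(lines))
```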

zhixuanjiaxin commented 1 year ago

When generating detection labels for the different detectors with the gen_det_label.py file, can I still use the pre-trained weight files you provided? Will this affect the accuracy of my evaluation results?

Or do I still need to train each model myself from scratch on the custom dataset, replacing the pre-trained weights you provided?

I am still a bit unclear about the relationship between them. Could you please explain it to me? Thank you.

ziyannchen commented 1 year ago

For custom datasets, if the target object is already included in the class list of the target detector (e.g., the person class is already in the class list of a detector trained on COCO-80), then you can use the provided models directly. Otherwise, you should provide your own detection model. The key is to ensure you can get the bounding boxes of the attacked object from the detection model.

The gen_det_label.py script lets you obtain detection labels from any model you provide, but you need to write your own detector API script if you use a custom detector (like the api.py provided in our detection lib). If you use one of the provided models, you can run gen_det_label.py directly.
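A hypothetical sketch of such a wrapper is below; the names (`CustomDetectorAPI`, `detect`) and the TorchScript loading are illustrative assumptions, not the exact interface defined in api.py:

```python
# Hypothetical wrapper for a custom detector -- check api.py in the detection
# lib for the interface the repo actually expects; names here are illustrative.
import torch

class CustomDetectorAPI:
    def __init__(self, weights_path: str, conf_thresh: float = 0.5, device: str = "cuda"):
        self.conf_thresh = conf_thresh
        self.device = device
        # Assumption: the custom model was saved as a TorchScript module.
        self.model = torch.jit.load(weights_path, map_location=device).eval()

    @torch.no_grad()
    def detect(self, images: torch.Tensor):
        """images: (N, 3, H, W) float tensor in [0, 1].

        Assumes the model returns, per image, a (k, 6) tensor of
        [x1, y1, x2, y2, conf, cls]; filters detections by confidence."""
        preds = self.model(images.to(self.device))
        return [p[p[:, 4] > self.conf_thresh] for p in preds]
```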

Note that our repo currently only supports single-class attacks; a multi-class attack may be incompatible with our codebase.

zhixuanjiaxin commented 1 year ago

The category to be attacked in the custom dataset is also 'person', but the images are infrared rather than visible-light. Does this mean I need to retrain the detectors (such as yolov2 and faster-rcnn) to obtain the corresponding weight files and replace the ones you provided? If training from scratch is needed, my understanding is that api.py only supports testing and evaluation, not training.

ziyannchen commented 1 year ago

Can the person in an infrared image be detected normally, and does the detection accuracy meet your expectations? If yes, then you can directly use the provided pre-trained models and everything else we provide. If not, you have to replace the weights with your own custom pre-trained model weights. In that case, if you still use a detector architecture supported by our repo (like the yolov2 and faster-rcnn you mentioned), the provided scripts will still work for you.
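A minimal sketch of swapping in your own weights, assuming torchvision's Faster R-CNN as a stand-in for a supported architecture and an illustrative weights path:

```python
# Minimal sketch: loading your own infrared-trained weights into a supported
# architecture (torchvision's Faster R-CNN as a stand-in; path is illustrative).
import torch
import torchvision

# num_classes=2: background + person, assuming a single-class infrared dataset
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
state = torch.load("weights/faster_rcnn_infrared.pt", map_location="cpu")
model.load_state_dict(state)  # same architecture, so the state_dict keys match
model.eval()
```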

And unfortunately, our repo does not support detector training. You may refer to other detector repos if you need one.

zhixuanjiaxin commented 1 year ago

I understand; it should be necessary to retrain the networks to obtain the weights.

I would also like to ask how you obtained the pre-trained weights used in the experiments.

Retraining each detector is a significant workload. When reviewing the relevant materials, we did not find a model library that includes all of the detection models in the experiments and also supports training, so the subsequent work may be difficult to carry out.

ziyannchen commented 1 year ago

I'm sorry to tell you that in our work we simply used the models released in each original model repo, and it did take considerable effort for us to assemble our detection library from many different repos.

Re-training so many different detectors is indeed a lot of work, as you mentioned. In fact, obtaining many different target models is genuinely difficult, and it is one of the core problems attackers face in this field. That is exactly what T-SEA tries to solve: more efficient attack deployment when you can't actually obtain that many white-box models. :)

It may be easier for you to train a small number of these models, and then train an adversarial patch by ensembling those white-box models and applying the self-ensemble strategies from T-SEA. The model-ensemble and self-ensemble strategies help the patch achieve higher transferability to unknown black-box models.
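Conceptually, the model-ensemble step looks like the sketch below (not our actual training loop; `apply_patch` and `detection_loss` are illustrative placeholders for pasting the patch onto the images and computing a differentiable objective, e.g. the maximum person confidence):

```python
# Conceptual sketch of one model-ensemble optimization step for the patch
# (not the repo's actual loop). `apply_patch` pastes the patch onto the
# images; each model's `detection_loss` is assumed to be a differentiable
# objective, e.g. the maximum person confidence on the patched images.
import torch

def ensemble_patch_step(patch, images, models, apply_patch, optimizer):
    optimizer.zero_grad()
    patched = apply_patch(images, patch)
    # Average the detection loss over all white-box models in the ensemble.
    loss = sum(m.detection_loss(patched) for m in models) / len(models)
    loss.backward()          # gradients flow back into the patch pixels
    optimizer.step()
    patch.data.clamp_(0, 1)  # keep the patch a valid image
    return loss.item()
```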

However, to evaluate the attack performance on a specific model exactly, you still need access to that white/black-box model, or at least to its detection outputs.
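For example, a black-box style evaluation that needs only detection outputs could measure how many clean detections the patch suppresses; the metric below is an illustrative sketch, not the exact protocol from the paper:

```python
# Illustrative metric only, not the paper's protocol: the fraction of clean
# person detections that the patch suppresses, computed purely from the
# detection outputs of a (possibly black-box) model.
def attack_success_rate(clean_images, patched_images, detect):
    """`detect(img)` is assumed to return the person boxes above threshold."""
    suppressed, total = 0, 0
    for clean, patched in zip(clean_images, patched_images):
        n_clean, n_patched = len(detect(clean)), len(detect(patched))
        total += n_clean
        suppressed += max(n_clean - n_patched, 0)
    return suppressed / max(total, 1)
```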

zhixuanjiaxin commented 1 year ago

Thank you very much for your answer. I will keep experimenting with different detectors.

Thank you again for your answer!

ziyannchen commented 1 year ago

Good luckkk! :)