hongshuochen / DefakeHop

Official code for DefakeHop: A Light-Weight High-Performance Deepfake Detector
https://arxiv.org/abs/2103.06929

About Model Size / Saving Model #7

Open Neural-Sorcerer opened 2 years ago

Neural-Sorcerer commented 2 years ago

I wanted to ask about the model size you got after training on the Celeb-DF and FF++ datasets. I wanted to save the model and then use it for single predictions, and as I understand it, prediction requires saving both the "classifier" and "defakeHop" objects. However, the defakeHop object's size depends on the training data: I end up with a 10 GB defakeHop and a 760 KB classifier. Maybe I did something wrong? How would you save the model for future predictions? If you have some time, could you explain it to me?
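A minimal sketch of the save/load workflow being asked about, using `pickle` on both objects. The names and contents here are placeholders, not the repo's actual API; the real `defakeHop` and `classifier` objects would be pickled the same way after training.

```python
import pickle

# Hypothetical stand-ins for the two trained objects the question mentions.
# In the real code these would be the fitted DefakeHop and classifier instances.
trained = {
    "defakeHop": {"saab_params": [1, 2, 3]},   # placeholder transform parameters
    "classifier": {"weights": [0.5, -0.5]},    # placeholder classifier weights
}

# Save both objects together so a later prediction script can reload them.
with open("defakehop_model.pkl", "wb") as f:
    pickle.dump(trained, f)

# Later, in a separate prediction run: restore the objects from disk.
with open("defakehop_model.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored == trained)  # the round-trip preserves the objects
```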

hongshuochen commented 2 years ago

Hi! This is a good question! I think for some reason I save the features in the model. They should be reset to an empty dictionary after training; maybe I forgot to clean it up. That is why the model size depends on the data size. Let me double-check and get back to you! Thank you!

https://github.com/hongshuochen/DefakeHop/blob/941efb6a3d11b59bf0c4d56c95f75612a9f4da4e/defakeHop.py#L22 https://github.com/hongshuochen/DefakeHop/blob/941efb6a3d11b59bf0c4d56c95f75612a9f4da4e/multi_cwSaab.py#L15

Neural-Sorcerer commented 2 years ago

I found a solution to this problem.

In the `multi_cwSaab.py` file:

Line 17: `self.tmp = []`
Line 107: `self.tmp.append((saab_id, channel_id, self.max_pooling(output)))`

The append on line 107 must be removed (or `self.tmp` cleared after training), since it just accumulates features that are never used again, which is what bloats the saved model.
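The effect of that fix can be sketched with a toy stand-in for the cached-feature pattern: a model whose `tmp` list grows with the training data, and whose pickled size shrinks dramatically once the cache is cleared before saving. `CachedModel` is a hypothetical illustration, not the repo's class.

```python
import pickle

# Hypothetical stand-in for MultiChannelWiseSaab: the real class appends
# per-sample features to self.tmp during training (line 107 above),
# so the pickled model grows with the training set.
class CachedModel:
    def __init__(self):
        self.params = {"kernel": [0.1] * 10}  # learned parameters (must be kept)
        self.tmp = []                          # per-sample feature cache (bloat)

    def fit(self, samples):
        for s in samples:
            self.tmp.append(s)                 # mimics the line-107 append

model = CachedModel()
model.fit([[0.0] * 1000 for _ in range(100)])  # 100 "feature" vectors

size_with_cache = len(pickle.dumps(model))

model.tmp = []                                 # clear the cache before saving
size_cleared = len(pickle.dumps(model))

print(size_with_cache > size_cleared)          # cleared model is far smaller
```

Clearing (or never filling) `tmp` keeps the learned parameters intact while making the saved model size independent of the training data.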

hellogeraldblah commented 1 year ago

Hello! Are there any updates on saving the model and using it to predict on individual video files?