BloodAxe / Kaggle-2020-Alaska2

MIT License

Support in understanding certain aspects of this work #7

Open avishka40 opened 3 years ago

avishka40 commented 3 years ago

Hi all, I am doing research on image steganalysis and stumbled on this work while analysing the ALASKA competition. Really nice work! I have been having some trouble understanding parts of what you have done here. Is it possible to contact you to clarify these points?

Thanks in advance

BloodAxe commented 3 years ago

Feel free to post your questions here as an issue.


avishka40 commented 3 years ago

Sorry for the late reply. I would like to know how you used DCTR as a feature extractor here. I can see that you decompress the image to YCbCr format; is there anything more to it?

Also, during the "surgery" process, how was the forward-function logic generated? This question probably stems from my lack of understanding of some ML concepts, so feel free to correct me.

YassineYousfi commented 3 years ago

The DCTR (and JRM) feature extractors are used in abba/rich_models/. We extract the DCTR features for the entire dataset and train an FLD ensemble for each JPEG quality factor. We then use the ensemble in abba/predict/predict_folder_richmodels.py to generate the "votes" of the ensemble on the holdout sets. These votes are stacked with the outputs of the other models and used to train a second-level stacking model in abba/train/stacking/skopt_catboost.py.
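To illustrate the stacking step described above, here is a minimal, self-contained sketch of the general idea. Everything in it is a hypothetical stand-in: the model names and vote values are invented, and a tiny pure-Python logistic regression replaces the actual CatBoost second-level model from skopt_catboost.py.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical first-level "votes": one probability per model per holdout image.
votes = {
    "cnn_model": [0.9, 0.2, 0.8, 0.3],
    "dctr_fld_ensemble": [0.7, 0.4, 0.6, 0.1],
}
labels = [1, 0, 1, 0]  # toy labels: 1 = stego, 0 = cover

def stack_features(vote_dict):
    """Stack per-model votes column-wise: one row per image, one column per model."""
    models = sorted(vote_dict)
    n = len(next(iter(vote_dict.values())))
    return [[vote_dict[m][i] for m in models] for i in range(n)]

def train_stacker(X, y, lr=0.5, epochs=500):
    """Tiny logistic-regression stacker trained with SGD (stand-in for CatBoost)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

X = stack_features(votes)
w, b = train_stacker(X, labels)
preds = [int(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) > 0.5) for xi in X]
```

The point of the sketch is only the data flow: first-level outputs become the feature columns of the second-level learner, which is trained on holdout predictions so it never sees votes from models that overfit their own training data.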

The "surgeries" in abba/train/zoo/surgery.py were mainly used to replace the Swish activation function with the Mish activation, and optionally to use Inplace-ABN layers instead of plain BatchNorm. The other functions were not used during the competition; they were added later for our paper submission to the WIFS 2020 conference.
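A minimal sketch of what such a "surgery" pass does, using hypothetical stand-in classes rather than the repository's PyTorch code: walk the model tree recursively and swap every Swish instance for a Mish instance in place, leaving everything else untouched.

```python
import math

class Swish:
    """Swish activation: x * sigmoid(x)."""
    def __call__(self, x):
        return x / (1.0 + math.exp(-x))

class Mish:
    """Mish activation: x * tanh(softplus(x))."""
    def __call__(self, x):
        return x * math.tanh(math.log1p(math.exp(x)))

class Block:
    """Toy stand-in for a network module holding one activation and sub-blocks."""
    def __init__(self, act, children=()):
        self.act = act
        self.children = list(children)

def swish_to_mish(module):
    """Recursively replace every Swish activation with Mish, in place."""
    if isinstance(module.act, Swish):
        module.act = Mish()
    for child in module.children:
        swish_to_mish(child)
    return module

# Hypothetical nested model: Swish at several depths, one Mish already present.
model = Block(Swish(), [Block(Swish()), Block(Mish(), [Block(Swish())])])
swish_to_mish(model)
```

In the real repository the same idea is applied to trained torch modules, so the surgery changes the forward behaviour without retraining from scratch; the forward logic is not "generated" so much as inherited from the original network with one layer type substituted.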