Closed — bimo-adiparwa closed this issue 6 days ago
Hello @bimo-adiparwa!
In order to train on your data you will need to make some changes to the code:
Add a new dataset class. Let's assume that your data is organised as follows:

```
data/mydataset/images/{train,valid,test}
data/mydataset/masks/{train,valid,test}
```

Here the `images` folder contains the locally inpainted images and the `masks` folders contain the ground-truth locations of the inpainted regions (with 0 denoting inpainted regions and 255 denoting original content). The subfolders `train`, `valid`, `test` split the data into training, validation (development) and test splits, respectively.
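As a quick sanity check before training, you can verify that all six folders exist. The `check_layout` helper below is a hypothetical convenience script, not part of dolos:

```python
from pathlib import Path

def check_layout(root="data/mydataset"):
    """Return a list of expected subfolders that are missing (empty list = layout OK)."""
    root = Path(root)
    missing = []
    for kind in ("images", "masks"):
        for split in ("train", "valid", "test"):
            folder = root / kind / split
            if not folder.is_dir():
                missing.append(str(folder))
    return missing
```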
To use this dataset, you can add a new class in `data.py`, as follows:

```python
class MyDataset(WithMasksPathDataset):
    def __init__(self, split):
        path_base = Path("data/mydataset")
        super().__init__(
            path_images=path_base / "images",
            path_masks=path_base / "masks",
            split=split,
        )
```
Since this class inherits from `WithMasksPathDataset`, which in turn inherits from `PathDataset`, its instances will have methods defined to load the images and the masks.
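For intuition, a minimal stand-in for such a path-based dataset hierarchy might look like the sketch below. This is purely illustrative and is not the actual dolos implementation; only the constructor arguments (`path_images`, `path_masks`, `split`) are taken from the snippet above, and the real classes may differ:

```python
from pathlib import Path

class PathDatasetSketch:
    """Illustrative: keeps a sorted list of image paths for one split."""
    def __init__(self, path_images, split):
        self.paths_images = sorted((Path(path_images) / split).iterdir())

    def __len__(self):
        return len(self.paths_images)

class WithMasksPathDatasetSketch(PathDatasetSketch):
    """Illustrative: additionally keeps the matching mask paths."""
    def __init__(self, path_images, path_masks, split):
        super().__init__(path_images, split)
        self.paths_masks = sorted((Path(path_masks) / split).iterdir())
```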
Add a new configuration. Let's assume you want to use this dataset to perform fully supervised localisation with the Patch Forensics model. Then in the corresponding training script (in this case, `dolos/methods/patch_forensics/train_full_supervision.py`), you need to add a new entry in the `CONFIG` dictionary:
```python
CONFIG = {
    ...
    "mydataset": {
        "last-layer": "block2",
        "frontend": None,
        "dataset-class": MyDataset,
        "load-image": load_image,
        "max-epochs": 50,
    },
}
```
and import `MyDataset`:

```python
from dolos.data import (
    ...
    MyDataset,
)
```
Then you can finally train using the name of the configuration that you have just defined:

```bash
python dolos/methods/patch_forensics/train_full_supervision.py mydataset
```
Hope this helps!
How can I test with your provided trained weights? I got some errors when I ran this.
In order to run the provided model on your own data, you can do a similar thing: that is, update the prediction configuration dictionary `PREDICT_CONFIGS` to specify which datasets to run the script on. Concretely, in `patch_forensics/predict.py` you can add entries of the form:
```python
PREDICT_CONFIGS = {
    ...
    "mydataset-valid": {
        "dataset": MyDataset("valid"),
    },
    "mydataset-test": {
        "dataset": MyDataset("test"),
    },
}
```
Then you should be able to run the predict script, as follows:

```bash
python dolos/methods/patch_forensics/predict.py -s full -t setup-c -p mydataset-valid
python dolos/methods/patch_forensics/predict.py -s full -t setup-c -p mydataset-test
```
... as well as the evaluation script to get the IoU metric:

```bash
python dolos/methods/patch_forensics/evaluate.py -s full -t setup-c -p mydataset
```
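For reference, the IoU metric is, conceptually, the intersection-over-union between the predicted fake regions and the ground-truth masks. A minimal sketch of that computation (not the actual `evaluate.py` code, which may differ in details such as averaging):

```python
import numpy as np

def iou(pred, gt):
    """IoU of two boolean masks (True = inpainted / fake region)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Empty masks on both sides count as a perfect match.
    return float(inter) / float(union) if union else 1.0
```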
Thank you so much!
What score decides whether it is deepfake or real? Is 0.5 real or deepfake?
Larger scores (closer to 1) should correspond to fake regions. Conversely, smaller scores (closer to 0) should correspond to real regions.
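For example, thresholding the per-pixel scores turns them into a binary fake/real decision (0.5 is a common choice of threshold, though not necessarily what the evaluation script uses). Note that to match the mask convention of the dataset (0 = inpainted, 255 = original), predicted fake regions map to 0:

```python
import numpy as np

scores = np.array([[0.9, 0.2],
                   [0.6, 0.1]])

fake = scores > 0.5            # True where the model flags a fake region
mask = np.where(fake, 0, 255)  # 0 = inpainted/fake, 255 = original, as in the dataset masks
```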
OK thanks, that's clear.
Hi, thanks for the excellent work. I really appreciate it! But I have a problem running this. Please help me!