Hi, and thanks for your incredible work! I encountered an issue while attempting to replicate the training and testing steps of Gram. I used the following commands:
For training:
python train.py --name Gram-Net_test1 --dataroot /root/AIGCdata --detect_method Gram --blur_prob 0.1 --blur_sig 0.0,3.0 --jpg_prob 0.1 --jpg_method cv2,pil --jpg_qual 30,100
For evaluation:
python eval_all.py --model_path ./checkpoints/Gram-Net_test1/model_epoch_best.pth --detect_method Gram --noise_type blur --blur_sig 1.0 --no_resize --no_crop --batch_size 1
However, I ran into the following error at the beginning of testing:
"[ERROR] model.load_state_dict() error"
After reviewing eval_all.py, it turns out the script fails on the line model.load_state_dict(state_dict['netC'], strict=True). The 'netC' parameters are indeed present in the pretrained weights file ./weights/Gram.pth provided in your project. However, the model_epoch_best.pth auto-saved during Gram-Net's training phase only contains the top-level keys ['model', 'optimizer', 'total_steps'], so there is no 'netC' entry for the loader to find.
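For reference, this is a minimal sketch of the check I used to compare the key layouts of the two checkpoints (paths are the ones from the commands above):

import torch

# Compare the top-level keys of the provided pretrained weights and the
# checkpoint auto-saved by train.py.
pretrained = torch.load("./weights/Gram.pth", map_location="cpu")
trained = torch.load("./checkpoints/Gram-Net_test1/model_epoch_best.pth", map_location="cpu")

print(list(pretrained.keys()))  # contains 'netC'
print(list(trained.keys()))     # ['model', 'optimizer', 'total_steps'] -- no 'netC'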
Is there anything wrong with the commands I used, or is there any modification needed in my opts to resolve this discrepancy?
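In case it clarifies what I'm asking, below is the kind of rewrapping I imagine might bridge the two layouts. This is an untested sketch: it assumes the 'model' entry of the training checkpoint holds the same netC parameters under the names model.load_state_dict() expects, and the output filename is just a placeholder.

import torch

# Untested sketch: rewrap the training checkpoint so eval_all.py finds a
# 'netC' entry. Assumes ckpt['model'] holds the netC parameters under the
# expected names; the output path is a placeholder of my own.
ckpt = torch.load("./checkpoints/Gram-Net_test1/model_epoch_best.pth", map_location="cpu")
ckpt["netC"] = ckpt["model"]
torch.save(ckpt, "./checkpoints/Gram-Net_test1/model_epoch_best_netC.pth")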