yyk-wew / F3Net

PyTorch implementation of F3Net (ECCV 2020, "F3Net: Frequency in Face Forgery Network")

RuntimeError #16

Open ss880426 opened 2 years ago

ss880426 commented 2 years ago

Excuse me, I got this error while training on a 2070 Super. How can I solve it?

```
Augment True! Augment True! Augment True! Augment True! Augment True!
[2022-04-28 17:51:57,044][DEBUG] No 0
Traceback (most recent call last):
  File "train.py", line 94, in <module>
    loss = model.optimize_weight()
  File "C:\Users\VMLab\Desktop\F3Net\trainer.py", line 36, in optimize_weight
    stu_fea, stu_cla = self.model(self.input)
  File "C:\Users\VMLab\Anaconda3\envs\F3\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\VMLab\Anaconda3\envs\F3\lib\site-packages\torch\nn\parallel\data_parallel.py", line 165, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "C:\Users\VMLab\Anaconda3\envs\F3\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\VMLab\Desktop\F3Net\models.py", line 212, in forward
    fea_LFS = self.LFS_head(x)
  File "C:\Users\VMLab\Anaconda3\envs\F3\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\VMLab\Desktop\F3Net\models.py", line 116, in forward
    y = torch.log10(y + 1e-15)
RuntimeError: CUDA out of memory. Tried to allocate 34.00 MiB (GPU 0; 8.00 GiB total capacity; 1.24 GiB already allocated; 4.80 GiB free; 1.33 GiB reserved in total by PyTorch)
```
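The failure is an out-of-memory error raised inside the LFS head's `torch.log10` call, even though the log reports 4.80 GiB free, which usually points to an oversized batch (or allocator fragmentation) rather than a hard capacity limit on the 8 GiB card. Below is a minimal sketch of the usual mitigations; the dataset and batch size are hypothetical stand-ins, not F3Net's actual training configuration:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the face-forgery dataset, sized for an
# Xception-style 299x299 input.
dataset = TensorDataset(torch.randn(64, 3, 299, 299),
                        torch.zeros(64, dtype=torch.long))

# 1) Reduce the batch size -- the most reliable OOM fix. The value 4 is
#    an illustrative guess for an 8 GiB GPU, not a tuned setting.
loader = DataLoader(dataset, batch_size=4, shuffle=True)

# 2) Release cached-but-unused blocks if memory looks fragmented
#    (a 34 MiB allocation failing with 4.80 GiB "free" suggests
#    fragmentation inside PyTorch's caching allocator).
if torch.cuda.is_available():
    torch.cuda.empty_cache()
```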

ss880426 commented 2 years ago

My dataset layout is as follows (a small loading sketch appears after the tree):

```
|-- dataset
|   |-- train
|   |   |-- real
|   |   |   |-- c40
|   |   |   |   |-- 000_0.png
|   |   |   |   |-- 000_1.png
|   |   |   |   |-- ...
|   |   |-- fake
|   |   |   |-- Deepfakes
|   |   |   |   |-- c40
|   |   |   |   |   |-- 000_0.png
|   |   |   |   |   |-- 000_1.png
|   |   |   |   |   |-- ...
|   |   |   |-- Face2Face
|   |   |   |   |-- ...
|   |   |   |-- FaceSwap
|   |   |   |-- NeuralTextures
|   |-- valid
|   |   |-- real
|   |   |   |-- ...
|   |   |-- fake
|   |   |   |-- ...
|   |-- test
|   |   |-- ...
```
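For reference, a minimal sketch of indexing such a layout, assuming the real/fake folder level supplies the binary label; F3Net's own dataset class may walk the tree differently:

```python
from pathlib import Path

def collect_images(split_dir):
    """Yield (image_path, label) pairs: 0 for real, 1 for fake."""
    for label_name, label in (("real", 0), ("fake", 1)):
        # rglob descends through c40/ and the per-method subfolders
        # (Deepfakes, Face2Face, FaceSwap, NeuralTextures).
        for img in sorted(Path(split_dir, label_name).rglob("*.png")):
            yield img, label

train_pairs = list(collect_images("dataset/train"))
```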

ss880426 commented 2 years ago

Solved, thanks.