mattans / AgeProgression


'LossTracker' object has no attribute 'graphic' #13

Open jilner opened 6 years ago

jilner commented 6 years ago

Hello, when I run main.py I get an error in utils.py, line 220, in plot, at `if self.graphic:`

AttributeError: 'LossTracker' object has no attribute 'graphic'

What should I do? Thank you very much.
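For anyone who stays on master anyway, the crash only means that `plot()` reads `self.graphic` before anything assigns it. A hedged sketch of a local workaround, using a simplified, hypothetical `LossTracker` rather than the repo's actual utils.py:

```python
# Simplified stand-in for utils.LossTracker (hypothetical, not the repo's code).
class LossTracker:
    def __init__(self, graphic=False):
        # Assume 'graphic' is an optional "show plots" flag; defaulting it here
        # avoids the AttributeError seen at utils.py line 220.
        self.graphic = graphic

    def plot(self):
        # getattr with a default keeps this safe even if the attribute was never set.
        if getattr(self, "graphic", False):
            print("plotting losses...")  # placeholder for the real plotting code


LossTracker().plot()  # no AttributeError; nothing is plotted because graphic=False
```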

mattans commented 6 years ago

Hi, I think you are using branch master, which is outdated and unstable. Please try tag 1.0.0 and report back.

jilner commented 6 years ago

> Hi, I think you are using branch master, which is outdated and unstable. Please try tag 1.0.0 and report back.

Hi, thank you for the reply. I tried tag 1.0.0, but a new problem comes up:

File "", line 1, in runfile('C:/Users/lenovo/Desktop/cvpr2018/Age ProgressionRegression by Conditional Adversarial Autoencoder/AgeProgression-1.0.0-guanfang-pytorch_new/AgeProgression-1.0.0/main.py', wdir='C:/Users/lenovo/Desktop/cvpr2018/Age ProgressionRegression by Conditional Adversarial Autoencoder/AgeProgression-1.0.0-guanfang-pytorch_new/AgeProgression-1.0.0')

File "C:\Users\lenovo\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile execfile(filename, namespace)

File "C:\Users\lenovo\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile exec(compile(f.read(), filename, 'exec'), namespace)

File "C:/Users/lenovo/Desktop/cvpr2018/Age ProgressionRegression by Conditional Adversarial Autoencoder/AgeProgression-1.0.0-guanfang-pytorch_new/AgeProgression-1.0.0/main.py", line 129, in models_saving=args.models_saving

File "C:\Users\lenovo\Desktop\cvpr2018\Age ProgressionRegression by Conditional Adversarial Autoencoder\AgeProgression-1.0.0-guanfang-pytorch_new\AgeProgression-1.0.0\model.py", line 417, in teach d_z_prior = self.Dz(z_prior)

File "C:\Users\lenovo\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 477, in call result = self.forward(*input, **kwargs)

File "C:\Users\lenovo\Desktop\cvpr2018\Age ProgressionRegression by Conditional Adversarial Autoencoder\AgeProgression-1.0.0-guanfang-pytorch_new\AgeProgression-1.0.0\model.py", line 96, in forward out = layer(out)

File "C:\Users\lenovo\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 477, in call result = self.forward(*input, **kwargs)

File "C:\Users\lenovo\Anaconda3\lib\site-packages\torch\nn\modules\container.py", line 91, in forward input = module(input)

File "C:\Users\lenovo\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 477, in call result = self.forward(*input, **kwargs)

File "C:\Users\lenovo\Anaconda3\lib\site-packages\torch\nn\modules\batchnorm.py", line 66, in forward exponential_average_factor, self.eps)

File "C:\Users\lenovo\Anaconda3\lib\site-packages\torch\nn\functional.py", line 1251, in batch_norm raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))

ValueError: Expected more than 1 value per channel when training, got input size [1, 64]
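For context, this ValueError comes from PyTorch itself rather than from the AgeProgression code: BatchNorm1d in training mode needs more than one sample per channel to compute batch statistics, so a minibatch that ends up containing a single image fails. A minimal sketch that reproduces the error with plain PyTorch (nothing below is taken from this repo):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(64)

bn.train()
try:
    bn(torch.randn(1, 64))    # batch of 1 -> cannot compute batch statistics
except ValueError as e:
    print(e)                  # "Expected more than 1 value per channel when training, ..."

out = bn(torch.randn(2, 64))  # a batch of 2 or more works in training mode
bn.eval()
out = bn(torch.randn(1, 64))  # eval mode uses running statistics, so a single sample is fine
```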

ajaysimhasr commented 5 years ago

Hello,

I am running your code on Ubuntu 16.04.

I also hit the error above. Below is my Python run command for training:

$ python main.py --mode train --epochs 50

Error log (the error appeared after running for more than three hours):

Data folder is ./data/UTKFace
Results folder is ./trained_models/2018_12_19/16_56
./data/UTKFace

```
Traceback (most recent call last):
  File "main.py", line 129, in <module>
    models_saving=args.models_saving
  File "/home/codas/Documents/Ajay/MITA_2018_Capstone/AgeProgression/model.py", line 417, in teach
    d_z_prior = self.Dz(z_prior)
  File "/home/codas/anaconda3/envs/MITA_2018_Capstone/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/codas/Documents/Ajay/MITA_2018_Capstone/AgeProgression/model.py", line 96, in forward
    out = layer(out)
  File "/home/codas/anaconda3/envs/MITA_2018_Capstone/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/codas/anaconda3/envs/MITA_2018_Capstone/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/home/codas/anaconda3/envs/MITA_2018_Capstone/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/codas/anaconda3/envs/MITA_2018_Capstone/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 76, in forward
    exponential_average_factor, self.eps)
  File "/home/codas/anaconda3/envs/MITA_2018_Capstone/lib/python3.6/site-packages/torch/nn/functional.py", line 1619, in batch_norm
    raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 64])
```

Is there any solution for this?

mattans commented 5 years ago

@jilner @ajaysimhasr I think this is a PyTorch limitation: with the minibatch size you chose, the last training step may contain only one image, and batch norm fails on a single sample. I trained with minibatch sizes of 32, 64, and 128 and didn't encounter it, so try those as well.
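If changing the minibatch size alone doesn't help (for example when the dataset size still leaves a remainder of one image), a common alternative is to drop the incomplete final minibatch. This is only a sketch against plain PyTorch, not this repo's actual data pipeline, and the dataset and tensor shapes below are made up:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 101 fake "images": with batch_size=50 the last minibatch would contain 1 sample.
dataset = TensorDataset(torch.randn(101, 3, 128, 128))

# drop_last=True discards the trailing incomplete batch, so batch norm never sees
# a single-image minibatch during training.
loader = DataLoader(dataset, batch_size=50, shuffle=True, drop_last=True)

for (batch,) in loader:
    print(batch.shape)  # always torch.Size([50, 3, 128, 128])
```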