rdevon / cortex

A machine learning library for PyTorch
BSD 3-Clause "New" or "Revised" License

Cortex crashes with MNIST #190

Closed dmitriy-serdyuk closed 6 years ago

dmitriy-serdyuk commented 6 years ago

I tried a bunch of different models. It seems that the problem is with the data handler.

$ cortex VAE --d.source MNIST
/u/serdyuk/.conda/envs/mpy36/lib/python3.6/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead.
  warnings.warn(warning.format(ret))
[INFO:cortex]:Setting logging to INFO
EXPERIMENT---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
0
[INFO:cortex.exp]:Using CPU
INFO:tornado.access:200 POST /win_exists (127.0.0.1) 0.57ms
[INFO:cortex.exp]:Creating out path `/data/milatmp1/serdyuk/cortex_outs/VAE`
[INFO:cortex.exp]:Setting out path to `/data/milatmp1/serdyuk/cortex_outs/VAE`
[INFO:cortex.exp]:Logging to `/data/milatmp1/serdyuk/cortex_outs/VAE/out.log`
[INFO:cortex]:Saving logs to /data/milatmp1/serdyuk/cortex_outs/VAE/out.log
[INFO:cortex.init]:Ultimate data arguments:
{'batch_size': {'test': 640, 'train': 64},
 'copy_to_local': False,
 'data_args': {},
 'inputs': {'inputs': 'images'},
 'n_workers': 4,
 'shuffle': True,
 'skip_last_batch': False,
 'source': 'MNIST'}
[INFO:cortex.init]:Ultimate model arguments:
{'beta_kld': 1.0,
 'decoder_args': {'output_nonlinearity': 'tanh'},
 'decoder_crit': <function mse_loss at 0x7f54c54aa510>,
 'decoder_type': 'convnet',
 'dim_encoder_out': 1024,
 'dim_out': None,
 'dim_z': 64,
 'encoder_args': {'fully_connected_layers': 1024},
 'encoder_type': 'convnet',
 'vae_criterion': <function mse_loss at 0x7f54c54aa510>}
[INFO:cortex.init]:Ultimate optimizer arguments:
{'clipping': {},
 'learning_rate': 0.0001,
 'model_optimizer_options': {},
 'optimizer': 'Adam',
 'optimizer_options': {},
 'weight_decay': {}}
[INFO:cortex.init]:Ultimate train arguments:
{'archive_every': 10,
 'epochs': 500,
 'eval_during_train': True,
 'eval_only': False,
 'quit_on_bad_values': True,
 'save_on_best': 'losses.classifier',
 'save_on_highest': None,
 'save_on_lowest': 'losses.vae',
 'test_mode': 'test',
 'train_mode': 'train'}
DATA---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/u/serdyuk/.conda/envs/mpy36/bin/cortex", line 11, in <module>
    load_entry_point('cortex', 'console_scripts', 'cortex')()
  File "/data/milatmp1/serdyuk/projects/cortex/cortex/main.py", line 37, in run
    data.setup(**exp.ARGS['data'])
  File "/data/milatmp1/serdyuk/projects/cortex/cortex/_lib/data/__init__.py", line 56, in setup
    plugin.handle(source, copy_to_local=copy_to_local, **data_args)
  File "/data/milatmp1/serdyuk/projects/cortex/cortex/built_ins/datasets/torchvision_datasets.py", line 157, in handle
    dim_x, dim_y = train_set[0][0].size()
ValueError: too many values to unpack (expected 2)
rdevon commented 6 years ago

#191

This was due to torchvision's behavior changing with MNIST.
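For context on the crash itself: the line `dim_x, dim_y = train_set[0][0].size()` assumes each MNIST sample is a 2-D `(H, W)` tensor, but newer torchvision returns samples with a leading channel dimension, i.e. `(C, H, W)`, so the two-value unpack raises `ValueError: too many values to unpack`. A minimal sketch of a shape-agnostic unpack (the helper name is hypothetical, not cortex's actual fix in #191):

```python
import torch

def image_dims(sample: torch.Tensor):
    """Return (dim_x, dim_y) for either a (H, W) or (C, H, W) image tensor."""
    if sample.dim() == 3:
        # (C, H, W): drop the channel dimension before unpacking
        _, dim_x, dim_y = sample.size()
    else:
        # (H, W): the old two-value unpack still works
        dim_x, dim_y = sample.size()
    return dim_x, dim_y

old_style = torch.zeros(28, 28)      # pre-change style MNIST sample
new_style = torch.zeros(1, 28, 28)   # post-change style, with channel dim
print(image_dims(old_style))  # (28, 28)
print(image_dims(new_style))  # (28, 28)
```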

rdevon commented 6 years ago

Fixed in #191