CuriousAI / ladder

The Ladder network is a deep learning algorithm that combines supervised and unsupervised learning
MIT License

AttributeError: 'numpy.float32' object has no attribute 'owner' #17

Closed peymanr closed 8 years ago

peymanr commented 8 years ago

I am using Theano 0.9 and Python 2.7 under Linux. I get the following attribute error. Any help?

Thanks

THEANO_FLAGS='floatX=float32' python run.py train --encoder-layers 1000-500-250-250-250-10 --decoder-spec gauss --denoising-cost-x 2000,20,0.1,0.1,0.1,0.1,0.1 --labeled-samples 50 --unlabeled-samples 60000 --seed 1 -- mnist_50_full
/home/me/ladder/venv2/local/lib/python2.7/site-packages/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool module.")
INFO:main:Logging into results/mnist_50_full1/log.txt
INFO:main:== COMMAND LINE ==
INFO:main:run.py train --encoder-layers 1000-500-250-250-250-10 --decoder-spec gauss --denoising-cost-x 2000,20,0.1,0.1,0.1,0.1,0.1 --labeled-samples 50 --unlabeled-samples 60000 --seed 1 -- mnist_50_full
INFO:main:== PARAMETERS ==
INFO:main: zestbn : bugfix
INFO:main: dseed : 1
INFO:main: top_c : 1
INFO:main: super_noise_std : 0.3
INFO:main: batch_size : 100
INFO:main: dataset : mnist
INFO:main: valid_set_size : 10000
INFO:main: num_epochs : 150
INFO:main: whiten_zca : 0
INFO:main: unlabeled_samples : 60000
INFO:main: decoder_spec : ('gauss',)
INFO:main: valid_batch_size : 100
INFO:main: denoising_cost_x : (2000.0, 20.0, 0.1, 0.1, 0.1, 0.1, 0.1)
INFO:main: f_local_noise_std : 0.3
INFO:main: cmd : train
INFO:main: act : relu
INFO:main: lrate_decay : 0.67
INFO:main: seed : 1
INFO:main: lr : 0.002
INFO:main: save_to : mnist_50_full
INFO:main: save_dir : results/mnist_50_full1
INFO:main: commit : 78956cdfc59110b557a759621abf7d391a6f5796
INFO:main: contrast_norm : 0
INFO:main: encoder_layers : ('1000', '500', '250', '250', '250', '10')
INFO:main: labeled_samples : 50
INFO:main:Using 0 examples for validation
INFO:main.model:Encoder: clean, labeled
INFO:main.model: 0: noise 0
INFO:main.model: f1: fc, relu, BN, noise 0.00, params 1000, dim (1, 28, 28) -> (1000,)
INFO:main.model: f2: fc, relu, BN, noise 0.00, params 500, dim (1000,) -> (500,)
INFO:main.model: f3: fc, relu, BN, noise 0.00, params 250, dim (500,) -> (250,)
INFO:main.model: f4: fc, relu, BN, noise 0.00, params 250, dim (250,) -> (250,)
INFO:main.model: f5: fc, relu, BN, noise 0.00, params 250, dim (250,) -> (250,)
INFO:main.model: f6: fc, softmax, BN, noise 0.00, params 10, dim (250,) -> (10,)
INFO:main.model:Encoder: corr, labeled
INFO:main.model: 0: noise 0.3
INFO:main.model: f1: fc, relu, BN, noise 0.30, params 1000, dim (1, 28, 28) -> (1000,)
INFO:main.model: f2: fc, relu, BN, noise 0.30, params 500, dim (1000,) -> (500,)
INFO:main.model: f3: fc, relu, BN, noise 0.30, params 250, dim (500,) -> (250,)
INFO:main.model: f4: fc, relu, BN, noise 0.30, params 250, dim (250,) -> (250,)
INFO:main.model: f5: fc, relu, BN, noise 0.30, params 250, dim (250,) -> (250,)
INFO:main.model: f6: fc, softmax, BN, noise 0.30, params 10, dim (250,) -> (10,)
INFO:main.model:Decoder: z_corr -> z_est
INFO:main.model: g6: gauss, denois 0.10, dim None -> (10,)
INFO:main.model: g5: gauss, denois 0.10, dim (10,) -> (250,)
INFO:main.model: g4: gauss, denois 0.10, dim (250,) -> (250,)
INFO:main.model: g3: gauss, denois 0.10, dim (250,) -> (250,)
INFO:main.model: g2: gauss, denois 0.10, dim (250,) -> (500,)
INFO:main.model: g1: gauss, denois 20.00, dim (500,) -> (1000,)
INFO:main.model: g0: gauss, denois 2000.00, dim (1000,) -> (1, 28, 28)
INFO:main:Found the following parameters: [f_5_b, f_4_b, f_3_b, f_2_b, f_1_b, g_6_a5, f_6_c, f_6_b, g_6_a4, g_6_a3, g_6_a2, g_6_a1, g_6_a10, g_6_a9, g_6_a8, g_6_a7, g_6_a6, g_5_a5, g_5_a4, g_5_a3, g_5_a2, g_5_a1, g_5_a10, g_5_a9, g_5_a8, g_5_a7, g_5_a6, g_4_a5, g_4_a4, g_4_a3, g_4_a2, g_4_a1, g_4_a10, g_4_a9, g_4_a8, g_4_a7, g_4_a6, g_3_a5, g_3_a4, g_3_a3, g_3_a2, g_3_a1, g_3_a10, g_3_a9, g_3_a8, g_3_a7, g_3_a6, g_2_a5, g_2_a4, g_2_a3, g_2_a2, g_2_a1, g_2_a10, g_2_a9, g_2_a8, g_2_a7, g_2_a6, g_1_a5, g_1_a4, g_1_a3, g_1_a2, g_1_a1, g_1_a10, g_1_a9, g_1_a8, g_1_a7, g_1_a6, g_0_a5, g_0_a4, g_0_a3, g_0_a2, g_0_a1, g_0_a10, g_0_a9, g_0_a8, g_0_a7, g_0_a6, f_1_W, f_2_W, f_3_W, f_4_W, f_5_W, f_6_W, g_5_W, g_4_W, g_3_W, g_2_W, g_1_W, g_0_W]
INFO:blocks.algorithms:Taking the cost gradient
INFO:blocks.algorithms:The cost gradient computation graph is built
INFO:main:Balancing 50 labels...
INFO:main.nn:Batch norm parameters: f_1_bn_mean_clean, f_1_bn_var_clean, f_2_bn_mean_clean, f_2_bn_var_clean, f_3_bn_mean_clean, f_3_bn_var_clean, f_4_bn_mean_clean, f_4_bn_var_clean, f_5_bn_mean_clean, f_5_bn_var_clean, f_6_bn_mean_clean, f_6_bn_var_clean
INFO:main:Balancing 50 labels...
INFO:main.nn:Batch norm parameters: f_1_bn_mean_clean, f_1_bn_var_clean, f_2_bn_mean_clean, f_2_bn_var_clean, f_3_bn_mean_clean, f_3_bn_var_clean, f_4_bn_mean_clean, f_4_bn_var_clean, f_5_bn_mean_clean, f_5_bn_var_clean, f_6_bn_mean_clean, f_6_bn_var_clean
INFO:blocks.main_loop:Entered the main loop
/home/me/ladder/venv2/local/lib/python2.7/site-packages/pandas/core/generic.py:1101: PerformanceWarning: your performance may suffer as PyTables will pickle object types that it cannot map directly to c-types [inferred_type->mixed-integer,key->block0_values] [items->[0]]
  return pytables.to_hdf(path_or_buf, key, self, **kwargs)
INFO:blocks.algorithms:Initializing the training algorithm
ERROR:blocks.main_loop:Error occured during training.

Blocks will attempt to run on_error extensions, potentially saving data, before exiting and reraising the error. Note that the usual after_training extensions will not be run. The original error will be re-raised and also stored in the training log. Press CTRL + C to halt Blocks immediately.
Traceback (most recent call last):
  File "run.py", line 653, in <module>
    if train(d) is None:
  File "run.py", line 502, in train
    main_loop.run()
  File "/home/me/ladder/venv2/local/lib/python2.7/site-packages/blocks/main_loop.py", line 197, in run
    reraise_as(e)
  File "/home/me/ladder/venv2/local/lib/python2.7/site-packages/blocks/utils/__init__.py", line 258, in reraise_as
    six.reraise(type(new_exc), new_exc, orig_exc_traceback)
  File "/home/me/ladder/venv2/local/lib/python2.7/site-packages/blocks/main_loop.py", line 172, in run
    self.algorithm.initialize()
  File "/home/me/ladder/venv2/local/lib/python2.7/site-packages/blocks/algorithms/__init__.py", line 128, in initialize
    self.inputs = ComputationGraph(update_values).inputs
  File "/home/me/ladder/venv2/local/lib/python2.7/site-packages/blocks/graph/__init__.py", line 74, in __init__
    self._get_variables()
  File "/home/me/ladder/venv2/local/lib/python2.7/site-packages/blocks/graph/__init__.py", line 125, in _get_variables
    inputs = graph.inputs(self.outputs)
  File "/home/me/ladder/venv2/local/lib/python2.7/site-packages/theano/gof/graph.py", line 693, in inputs
    vlist = ancestors(variable_list, blockers)
  File "/home/me/ladder/venv2/local/lib/python2.7/site-packages/theano/gof/graph.py", line 672, in ancestors
    dfs_variables = stack_search(deque(variable_list), expand, 'dfs')
  File "/home/me/ladder/venv2/local/lib/python2.7/site-packages/theano/gof/graph.py", line 640, in stack_search
    expand_l = expand(l)
  File "/home/me/ladder/venv2/local/lib/python2.7/site-packages/theano/gof/graph.py", line 670, in expand
    if r.owner and (not blockers or r not in blockers):
AttributeError: 'numpy.float32' object has no attribute 'owner'

Original exception: AttributeError: 'numpy.float32' object has no attribute 'owner'

linlinzhao commented 8 years ago

I am using Theano 0.9 and Python 2.7 (OS X). I tried to run the demo and got exactly the same error.

hotloo commented 8 years ago

Hi, the ladder repo was recently updated. Do you have the latest master or a slightly older version?

linlinzhao commented 8 years ago

@hotloo To make sure I have the latest version, I tried again with the latest master and got the same error.

hotloo commented 8 years ago

OK. Did you create a new environment using the environment.yml? What are your Blocks and Fuel versions? There was a breaking change in how some initialisations are handled.
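
For reference, one way to check the installed versions from inside the active environment (this assumes Blocks and Fuel are registered with pip under the package names blocks and fuel):

    pip show blocks fuel

pip show prints a Version field for each package it finds.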

hotloo commented 8 years ago

@Linlinzhao PS. The code is only tested with the environment created via environment.yml, so it might be easier to replicate that environment if possible. Cheers!
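
A minimal sketch of replicating the tested environment with conda, assuming conda is installed; the environment name below is a placeholder for whatever name environment.yml actually declares:

    git clone https://github.com/CuriousAI/ladder.git
    cd ladder
    conda env create -f environment.yml
    source activate <env-name-from-environment.yml>
    THEANO_FLAGS='floatX=float32' python run.py train --encoder-layers 1000-500-250-250-250-10 --decoder-spec gauss --denoising-cost-x 2000,20,0.1,0.1,0.1,0.1,0.1 --labeled-samples 50 --unlabeled-samples 60000 --seed 1 -- mnist_50_full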

linlinzhao commented 8 years ago

Hi @hotloo, great, it runs smoothly after resetting the environment. Thanks a lot!

hotloo commented 8 years ago

Cool. I will close this issue tomorrow!

peymanr commented 8 years ago

This looks to be a problem with the new version of Blocks. Using the stable version of Blocks solves the problem. Thanks.
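
As a rough sketch, if you are not using the conda environment, Blocks can be pinned to a stable release with pip's git support. The mila-udem/blocks repository URL and the stable branch name are assumptions based on the Blocks installation instructions of that era, not something confirmed in this thread:

    pip uninstall -y blocks
    pip install git+https://github.com/mila-udem/blocks.git@stable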

linlinzhao commented 8 years ago

Yes, I think so. I had installed the bleeding-edge version of Blocks in my Python site-packages; the environment settings install the stable version.

tongmuyuan commented 8 years ago

@Linlinzhao Hi, could you please show me how to switch to the stable version? I have the same problem now. Many thanks!

linlinzhao commented 8 years ago

@tongmuyuan Hi, if you follow the settings in environment.yml and activate the environment, you'll have the stable version in that environment, regardless of whichever version you have installed in your Python site-packages. Hope this helps.
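
A quick way to confirm that the activated environment's Blocks, rather than the system site-packages copy, is the one being imported (the environment name is again a placeholder):

    source activate <env-name-from-environment.yml>
    python -c "import blocks; print(blocks.__file__)"

The printed path should point inside the conda environment rather than the system Python installation.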