mackelab / delfi

Density estimation for likelihood-free inference. No longer actively developed; see https://github.com/mackelab/sbi instead
http://www.mackelab.org/delfi

Theano type conversion error in CDELFI #39

Closed fabioramos closed 5 years ago

fabioramos commented 5 years ago

Thank you for this excellent implementation of Papamakarios and Murray's paper on likelihood-free inference. I'm having an issue running the code with more than one component in the mixture, for example:

from delfi.inference import Basic, CDELFI, SNPE
inf_basic = CDELFI(generator=g, obs=x_test.reshape(1,-1), n_components=2, n_hiddens=[24], svi=False)
log, train_data, _ = inf_basic.run(n_train=1000, epochs=200)

Returns: TypeError: ('GpuArrayType<None>(float32, matrix) cannot store a value of dtype float64 without risking loss of precision.', 'Container name "None"')

This does not happen when there is only one component in the mixture. Any ideas?

Thanks!

dgreenberg commented 5 years ago

Hi, could you please provide the full code to reproduce the error? In particular, where do g and x_test come from?

fabioramos commented 5 years ago

My simulator is more complicated, but even the standard simulator triggers the error:

from delfi.simulator import GaussMixture

n_params = 1
m = GaussMixture(dim=n_params)

import delfi.distribution as dd
import numpy as np
p = dd.Uniform(lower=[-10], upper=[10])

from delfi.summarystats import Identity
s = Identity()

from delfi.generator import Default
g = Default(model=m, prior=p, summary=s)

params, stats = g.gen(500)
xo = np.array([[0.]])

from delfi.inference import Basic, CDELFI

inf_basic = CDELFI(generator=g, obs=xo, n_components=5, prior_norm=False,
                   n_hiddens=[24, 24], svi=True)

log, train_data, _ = inf_basic.run(n_train=1000)


TypeError                                 Traceback (most recent call last)
in ()
     27                      n_hiddens=[24, 24], svi=True)
     28
---> 29 log, train_data, _ = inf_basic.run(n_train=1000)

~/.local/lib/python3.5/site-packages/delfi-0.5.1-py3.5.egg/delfi/inference/CDELFI.py in run(self, n_train, n_rounds, epochs, minibatch, monitor, **kwargs)
    180         print(old_params)
    181         print(self.network.params_dict)
--> 182         self.network.params_dict = old_params
    183
    184         trn_inputs = [self.network.params, self.network.stats]

~/.local/lib/python3.5/site-packages/delfi-0.5.1-py3.5.egg/delfi/neuralnet/NeuralNet.py in params_dict(self, pdict)
    327         for p in self.aps:
    328             if str(p) in pdict.keys():
--> 329                 p.set_value(pdict[str(p)])
    330
    331     @property

~/.local/lib/python3.5/site-packages/theano/gpuarray/type.py in set_value(self, value, borrow)
    670         value = pygpu.gpuarray.array(value, copy=(not borrow),
    671                                      context=self.type.context)
--> 672         self.container.value = value
    673
    674     def __getitem__(self, *args):

~/.local/lib/python3.5/site-packages/theano/gof/link.py in __set__(self, value)
    475             self.storage[0] = self.type.filter_inplace(value,
    476                                                        self.storage[0],
--> 477                                                        **kwargs)
    478         else:
    479             self.storage[0] = self.type.filter(value, **kwargs)

~/.local/lib/python3.5/site-packages/theano/gpuarray/type.py in filter_inplace(self, data, old_data, strict, allow_downcast)
    281             raise TypeError("%s cannot store a value of dtype %s "
    282                             "without risking loss of precision." %
--> 283                             (self, data.dtype))
    284
    285         if self.ndim != data.ndim:

TypeError: ('GpuArrayType(float32, col) cannot store a value of dtype float64 without risking loss of precision.', 'Container name "None"')
jan-matthis commented 5 years ago

params, stats, and xo are of dtype np.float64. Computing with float64 on the GPU backend is tricky. You'll get rid of the error by converting your arrays to np.float32 or using the floatX setting (see below).
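For completeness, a minimal sketch of that cast, assuming the arrays from the reproduction snippet above (this only covers the data passed in by hand; the floatX setting quoted below also takes care of anything the network allocates internally):

import numpy as np

# cast the generator output and the observation to single precision
params = params.astype(np.float32)
stats = stats.astype(np.float32)
xo = np.array([[0.]], dtype=np.float32)

# re-create the inference object with the float32 observation
inf_basic = CDELFI(generator=g, obs=xo, n_components=5, prior_norm=False,
                   n_hiddens=[24, 24], svi=True)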

The backend supports all regular theano data types (float32, float64, int, ...), however GPU support varies and some units can’t deal with double (float64) or small (less than 32 bits like int16) data types. You will get an error at compile time or runtime if this is the case.

The more float32, the better GPU performance you will get.

Consider adding floatX=float32 (or the type you are using) to your .theanorc file if you plan to do a lot of GPU work.

See: http://deeplearning.net/software/theano/tutorial/using_gpu.html
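For reference, a minimal .theanorc along those lines; the device line is an assumption about a CUDA setup under the gpuarray backend, so adjust or drop it as needed:

# ~/.theanorc
[global]
floatX = float32
device = cuda

The same flags can also be set for a single run via the THEANO_FLAGS environment variable, e.g. THEANO_FLAGS=floatX=float32.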