Closed akapet00 closed 3 years ago
Hi there,
thanks for reaching out! I had a brief look at your notebook; I do not think you are doing anything wrong. A few thoughts:
1) Both SNPE-B and SNPE-C can "learn informative features in high-dimensional data", and the mechanism by which they do this is identical (adding layers at the beginning of the neural net), so there is no difference there. Aside: SNPE-C and SNPE-B are identical if you run only one "round" (i.e. if you do not do what is described here).
2) if you specify `embedding_net=None`, the embedding network is basically a multi-layer perceptron (MLP) defined here. So, if this MLP is sufficient to learn informative features of your data, then it will work -- I think this is what is happening in your case.
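To make the role of that embedding concrete, here is a purely illustrative numpy sketch of what such an MLP does: it maps a long raw trace down to a handful of features that the density estimator then consumes. The layer sizes, the ReLU choice, and the random weights are my assumptions for illustration, not sbi's actual defaults.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(a):
    return np.maximum(a, 0.0)

def mlp_embed(x, w1, b1, w2, b2):
    """Two-layer MLP: raw trace -> hidden layer -> low-dimensional features."""
    h = relu(x @ w1 + b1)
    return h @ w2 + b2

x = rng.normal(size=(1, 7000))           # one raw voltage trace
w1 = rng.normal(size=(7000, 50)) * 0.01  # stand-in weights (would be learned)
b1 = np.zeros(50)
w2 = rng.normal(size=(50, 8)) * 0.1
b2 = np.zeros(8)

features = mlp_embed(x, w1, b1, w2, b2)
print(features.shape)  # (1, 8): the density estimator sees 8 features, not 7000
```

In sbi the weights of this embedding are trained jointly with the density estimator, which is why it can end up extracting informative features on its own.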
I hope that helps! Michael
Yes, it helps, thank you very much. I still have a few questions.
Thank you again.
Best, Ante
```python
density_estimator = posterior_nn("mdn", z_score_x=False)
```
Best, Michael
4. I have not done this, but you might want to have a look at this
It is so good to see a paper with <15 pages for a change. I'll check it out!
Thanks for all the information, really helpful! Feel free to close this issue now since I am all out of questions :smile:
Best, Ante
Hi,

In [Goncalves et al., 2020] it is stated that: "SNPE can be applied to, and might benefit from the use of summary features, but it also makes use of the ability of neural networks to automatically learn informative features in high-dimensional data. Thus, SNPE can also be applied directly to raw data (e.g. using recurrent neural networks [Lueckmann et al., 2017]), ...". The work by [Lueckmann et al., 2017] corresponds to the method SNPE_B, at least according to the `sbi` documentation; however, in the version of `sbi` I am currently using (0.16.0), it is stated that this inference algorithm is currently not implemented.

Nevertheless, I have been playing around with SNPE (or, more precisely, SNPE_C) and raw data, and it seems to work quite well for a very simple example similar to the one in the official `brian2` example directory, available here. In this example, the Hodgkin-Huxley neuron model is used to test the abilities of simulation-based inference and the possibility of integration with `brian2`. It is based on a fake current-clamp recording generated from the same model that is later used in the inference process. Two of the parameters (the maximal sodium and potassium conductances) are considered unknown and are inferred from the data.

The first thing I tried was using an embedding network as a way to semi-automatically extract relevant features. This embedding network is based on Time2Vec, which is, in a nutshell, a very simple sinusoidal layer:
and, according to r/MachineLearning commenters, is nothing but a "quality case of 'just throw neural networks at it' and is overall just a shitty rehashing of discrete Fourier transforms". In the original Time2Vec paper, the authors use this sine representation of the input just as an additional layer before an LSTM or GRU, and it seems to produce better results than vanilla recurrent networks, but in the case I have been working on, it does not seem to work well.
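For reference, the Time2Vec layer described in the original paper is small enough to sketch in a few lines of numpy: one linear component plus k sinusoidal components. The random weights here are stand-ins; in the actual embedding they would be learned.

```python
import numpy as np

rng = np.random.default_rng(42)

def time2vec(tau, w, b):
    """tau: (n,) time points; w, b: (k+1,) parameters.
    Returns (n, k+1): column 0 is linear, columns 1..k are sinusoidal."""
    out = tau[:, None] * w[None, :] + b[None, :]
    out[:, 1:] = np.sin(out[:, 1:])
    return out

k = 4
w = rng.normal(size=k + 1)
b = rng.normal(size=k + 1)
tau = np.linspace(0.0, 1.0, 100)

emb = time2vec(tau, w, b)
print(emb.shape)  # (100, 5): one linear + four periodic features per time point
```

The resemblance to a learned Fourier basis is exactly what the Reddit criticism above is pointing at.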
The next approach was feeding the raw data output (the generated voltage traces), x, of size (10000, 7000), to SNPE. It works extremely well and is comparable to (if not better than) the situation where I used summary statistics consisting of the mean and std of the action potential, the number of spikes, and the maximum value of the membrane potential from the generated traces. The thing that I do not understand is: how is this possible? Am I doing something wrong, or is this SNPE_C approach able to automatically extract features from the data even though `embedding_net` is still set to `None`? In [Lueckmann et al., 2017], subsection 2.3, under Learning Features, it is stated that when time-series recordings are directly fed into the network, the first layer of the MDN becomes a recurrent layer instead of a fully connected one. But even with different methods, such as NSF for example, I have been able to obtain good results, although much more slowly. The notebook is available here.

Sorry for this long text :grimacing:
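For concreteness, the hand-crafted summaries mentioned above could look roughly like this. This is a sketch under my own assumptions (the 20 mV spike threshold, the exact feature set, and the synthetic test trace are illustrative, not the notebook's code):

```python
import numpy as np

def summary_stats(v, spike_threshold=20.0):
    """Reduce one membrane-voltage trace v (mV) to a small feature vector:
    [mean, std, spike count, peak voltage]."""
    above = v > spike_threshold
    # count upward threshold crossings as spikes
    n_spikes = int(np.sum(~above[:-1] & above[1:]))
    return np.array([np.mean(v), np.std(v), n_spikes, np.max(v)])

# synthetic trace: flat at -65 mV with two brief "spikes" up to +40 mV
v = np.full(1000, -65.0)
v[100:103] = 40.0
v[500:503] = 40.0

stats = summary_stats(v)
print(stats)  # mean, std, 2 spikes, 40.0 mV peak
```

Reducing a 7000-sample trace to four numbers like this inevitably discards information, which may be part of why the raw-trace approach is competitive.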
refs: Goncalves et al., eLife 2020;9:e56261, available online. Lueckmann et al., in Proceedings of NIPS 2017, available online.