This PR allows the user to pass the device, which can be either `cpu` or `gpu` (or `cuda`, which is identical to passing `gpu`), to the `infer` method and subsequently to `init_inference` in `Inferencer`.
This feature targets advanced users who do not use the default embedding network for automatic feature extraction, but instead customize and pass their own embedding networks that take advantage of convolutions or other procedures that run much faster on a GPU. For the regular user, changing the device on which torch operates won't make much difference.
Example: automatic feature extraction using a custom CNN embedding network, ref. Rodrigues, P. L. C. and Gramfort, A., "Learning summary features of time series for likelihood-free inference", in Proceedings of the Third Workshop on Machine Learning and the Physical Sciences (NeurIPS 2020).
```python
from brian2 import *
from brian2modelfitting import *
from torch import nn
import torch
import time

inp_trace = load('../data/input_traces_sim.npy')
out_trace = load('../data/output_traces_sim.npy')
```
Resolves #62.
Data traces have been generated with the simulator defined in https://github.com/brian-team/brian2/blob/master/examples/advanced/modelfitting_sbi.py#L33-L91:
Parameters and model:
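The actual equations and parameter bounds live in the linked `modelfitting_sbi.py` script; a minimal sketch of the shape they take is below. The equation string and the parameter names/ranges (`g_na`, `g_kd`) are placeholders, not the exact values used here:

```python
# Sketch only: the real equations and bounds come from the linked
# modelfitting_sbi.py script. Names and ranges below are placeholders.

# model equations as a Brian 2 multi-line string
eqs = '''
dv/dt = (gl*(El - v) + g_na*(ENa - v)*m**3*h + g_kd*(EK - v)*n**4 + I) / Cm : volt
'''

# bounds for the free parameters to be inferred, later handed to the
# inference call (exact keyword form depends on the library version)
bounds = {
    'g_na': (2e-6, 2e-4),   # hypothetical range, in siemens
    'g_kd': (6e-7, 6e-5),   # hypothetical range, in siemens
}
```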
`Inferencer` instantiation without the `features` argument -- automatic feature extraction will be performed:

Custom embedding network as defined in the reference previously outlined:
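A sketch of such a 1-D CNN embedding network, loosely following the summary-network idea from the reference above. The class name, layer sizes, and output dimension are illustrative, not the exact network used in this PR:

```python
import torch
from torch import nn

class CNNEmbedding(nn.Module):
    # Illustrative 1-D CNN summary network: two conv/pool stages
    # followed by a linear layer that emits a small feature vector.
    def __init__(self, in_features, out_features=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2),   # length-preserving conv
            nn.ReLU(),
            nn.MaxPool1d(4),                             # downsample by 4
            nn.Conv1d(8, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),                             # downsample by 4 again
        )
        # after two pools the time axis shrinks by a factor of 16
        self.fc = nn.Linear(16 * (in_features // 16), out_features)

    def forward(self, x):
        x = x.unsqueeze(1)          # (batch, time) -> (batch, 1, time)
        x = self.conv(x)            # (batch, 16, time // 16)
        return self.fc(x.flatten(1))
```

Such a network would then be handed over when setting up inference; the `device` argument added by this PR controls whether its forward and backward passes run on the CPU or the GPU.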
Comparison between GPU and CPU training times. Let's start with the GPU:
And now, let's do the same on the CPU:
So, in this case, the GPU clearly wins.
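The timing comparison above can be reproduced with a small helper like the one below. The `timed` helper is hypothetical; `device` is the keyword this PR adds to `infer`:

```python
import time

def timed(fn, *args, **kwargs):
    """Return fn's result together with its wall-clock duration in seconds."""
    start = time.time()
    result = fn(*args, **kwargs)
    return result, time.time() - start

# Usage sketch (assuming `inferencer` is set up as above):
# posterior, gpu_time = timed(inferencer.infer, device='gpu')
# posterior, cpu_time = timed(inferencer.infer, device='cpu')
```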