Hi, Andrew,
Thanks for your previous help. I have now moved on to inferring the unmeasured input from my dataset.
But if I only set:

```yaml
ext_input_dim: 1
```

in the architecture config, the following error is raised:
File "/home/datadisk1/wxy/lfads-torch/lfads_torch/modules/decoder.py", line 65, in forward ci_step, ext_input_step = torch.split(input, self.input_dims, dim=1)RuntimeError: split_with_sizes expects split_sizes to sum exactly to 200 (input tensor's size at dimension 1), but got split_sizes=[200, 1]
What else do I need to adjust to make this work?
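My current guess is that setting `ext_input_dim` alone is not enough, and that the dataset also has to carry the external input itself so the decoder receives 201 features. Here is a sketch of what I would try, assuming the HDF5 file uses the original LFADS key names `train_ext_input` / `valid_ext_input` (I have not verified that lfads-torch expects these exact keys):

```python
import h5py
import numpy as np

# Hypothetical shapes: trials x time bins x external-input channels.
n_train, n_valid, n_steps, ext_dim = 100, 20, 50, 1

with h5py.File("my_dataset.h5", "a") as f:
    # Placeholder zeros, only to illustrate the expected shapes; the key
    # names are an assumption borrowed from the original LFADS data format.
    f.create_dataset(
        "train_ext_input",
        data=np.zeros((n_train, n_steps, ext_dim), dtype=np.float32),
    )
    f.create_dataset(
        "valid_ext_input",
        data=np.zeros((n_valid, n_steps, ext_dim), dtype=np.float32),
    )
```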