timeseriesAI / tsai

State-of-the-art Deep Learning library for Time Series and Sequences in Pytorch / fastai
https://timeseriesai.github.io/tsai/
Apache License 2.0

how to get embedding space #560

Closed ramdhan1989 closed 1 year ago

ramdhan1989 commented 2 years ago

Hi, I am exploring self-supervised learning on my data, using the code from this link.

After training the model on unlabeled data, how can I get the embeddings? I plan to loop over the samples one by one, pass each through the network, and save its embedding.

Thank you. Regards, Ramdhan

ramdhan1989 commented 2 years ago

I tried this: learn.model.eval()(torch.Tensor(np.array(arr[0].reshape(1,1,-1))).cuda()) but got this error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Input In [195], in <cell line: 1>()
----> 1 learn.model.eval()(torch.Tensor(np.array(arr[0].reshape(1,1,-1))).cuda())

File ~\Anaconda3\envs\SiT\lib\site-packages\torch\nn\modules\module.py:1102, in Module._call_impl(self, *input, **kwargs)
   1098 # If we don't have any hooks, we want to skip the rest of the logic in
   1099 # this function, and just call forward.
   1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102     return forward_call(*input, **kwargs)
   1103 # Do not call functions when jit is used
   1104 full_backward_hooks, non_full_backward_hooks = [], []

File ~\Anaconda3\envs\SiT\lib\site-packages\torch\nn\modules\container.py:141, in Sequential.forward(self, input)
    139 def forward(self, input):
    140     for module in self:
--> 141         input = module(input)
    142     return input

File ~\Anaconda3\envs\SiT\lib\site-packages\torch\nn\modules\module.py:1102, in Module._call_impl(self, *input, **kwargs)
   1098 # If we don't have any hooks, we want to skip the rest of the logic in
   1099 # this function, and just call forward.
   1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102     return forward_call(*input, **kwargs)
   1103 # Do not call functions when jit is used
   1104 full_backward_hooks, non_full_backward_hooks = [], []

File ~\Anaconda3\envs\SiT\lib\site-packages\torch\nn\modules\container.py:141, in Sequential.forward(self, input)
    139 def forward(self, input):
    140     for module in self:
--> 141         input = module(input)
    142     return input

File ~\Anaconda3\envs\SiT\lib\site-packages\torch\nn\modules\module.py:1120, in Module._call_impl(self, *input, **kwargs)
   1117     bw_hook = hooks.BackwardHook(self, full_backward_hooks)
   1118     input = bw_hook.setup_input_hook(input)
-> 1120 result = forward_call(*input, **kwargs)
   1121 if _global_forward_hooks or self._forward_hooks:
   1122     for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()):

File ~\Anaconda3\envs\SiT\lib\site-packages\tsai\models\InceptionTimePlus.py:89, in InceptionBlockPlus.forward(self, x)
     87 for i in range(self.depth):
     88     if self.keep_prob[i] > random.random() or not self.training:
---> 89         x = self.inception[i](x)
     90     if self.residual and i % 3 == 2:
     91         res = x = self.act[i//3](self.add(x, self.shortcut[i//3](res)))

File ~\Anaconda3\envs\SiT\lib\site-packages\torch\nn\modules\module.py:1120, in Module._call_impl(self, *input, **kwargs)
   1117     bw_hook = hooks.BackwardHook(self, full_backward_hooks)
   1118     input = bw_hook.setup_input_hook(input)
-> 1120 result = forward_call(*input, **kwargs)
   1121 if _global_forward_hooks or self._forward_hooks:
   1122     for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()):

File ~\Anaconda3\envs\SiT\lib\site-packages\tsai\models\InceptionTimePlus.py:54, in InceptionModulePlus.forward(self, x)
     52 input_tensor = x
     53 x = self.bottleneck(x)
---> 54 x = self.concat([l(x) for l in self.convs] + [self.mp_conv(input_tensor)])
     55 x = self.norm(x)
     56 x = self.conv_dropout(x)

File ~\Anaconda3\envs\SiT\lib\site-packages\torch\nn\modules\module.py:1123, in Module._call_impl(self, *input, **kwargs)
   1121 if _global_forward_hooks or self._forward_hooks:
   1122     for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()):
-> 1123         hook_result = hook(self, input, result)
   1124         if hook_result is not None:
   1125             result = hook_result

File ~\Anaconda3\envs\SiT\lib\site-packages\torchsummary\torchsummary.py:19, in summary.<locals>.register_hook.<locals>.hook(module, input, output)
     17 m_key = "%s-%i" % (class_name, module_idx + 1)
     18 summary[m_key] = OrderedDict()
---> 19 summary[m_key]["input_shape"] = list(input[0].size())
     20 summary[m_key]["input_shape"][0] = batch_size
     21 if isinstance(output, (list, tuple)):

AttributeError: 'list' object has no attribute 'size'

Here I use an input of shape (1, 1, 202) (batch, channel, seq). Please advise. Thank you.
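Editorial note: judging from the last frames of the traceback, the failure happens inside a torchsummary forward hook (summary.<locals>.register_hook.<locals>.hook), not in the tsai model itself: tsai's Concat module is called with a Python list, so the hook's input[0].size() raises AttributeError. A hedged workaround sketch, assuming learn and arr are the objects used above; it relies on PyTorch's private _forward_hooks dict, so treat it as a quick fix rather than an official API:

import numpy as np
import torch

# Assumption: an earlier torchsummary.summary() call left forward hooks registered on
# the model. Clearing those stale hooks lets a manual forward pass run cleanly.
for m in learn.model.modules():
    m._forward_hooks.clear()

with torch.no_grad():
    x = torch.tensor(np.array(arr[0]).reshape(1, 1, -1), dtype=torch.float32).cuda()
    out = learn.model.eval()(x)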

oguiza commented 1 year ago

Hi @ramdhan1989, sorry for my very late reply. I'm afraid I can't reproduce the issue. If you are still interested, could you please provide a code snippet that reproduces it?

oguiza commented 1 year ago

I'll close this issue due to the lack of response.

ramdhan1989 commented 1 year ago

Hi, I apologize for the late response. This is what I did to get the embeddings.

import numpy as np
import torch
import torch.nn as nn
from tsai.all import *   # provides ts_learner and InceptionTimePlus

learnss = ts_learner(udls100, InceptionTimePlus, pretrained=True, weights_path=f'/MVP/{dsid}_1000.pth')
# keep the backbone plus the initial part of the head, so the forward pass returns embeddings
net = nn.Sequential(learnss.model.backbone, list(list(learnss.model.head.children())[0].children())[:-1][0])

vectors_trn = []
for i in range(x_train.shape[0]):
    out = net(torch.Tensor(np.expand_dims(x_train[i], axis=0)).cuda()).flatten().cpu().detach().numpy()
    vectors_trn.append(out)

vectors_val = []
for i in range(x_test.shape[0]):
    out = net(torch.Tensor(np.expand_dims(x_test[i], axis=0)).cuda()).flatten().cpu().detach().numpy()
    vectors_val.append(out)

vectors_val = np.asarray(vectors_val)
vectors_trn = np.asarray(vectors_trn)
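
Editorial note: a possible refinement of the same approach (a sketch, not tested against the original data): run the truncated network on whole batches under torch.no_grad(), which skips gradient tracking and is much faster than a per-sample Python loop. It assumes x_train and x_test are arrays shaped (n_samples, n_vars, seq_len), as above, and reuses the net defined there.

import numpy as np
import torch

def extract_embeddings(net, X, batch_size=128, device='cuda'):
    # Run the feature extractor in eval mode, batch by batch, without building a graph.
    net = net.to(device).eval()
    chunks = []
    with torch.no_grad():
        for i in range(0, len(X), batch_size):
            xb = torch.tensor(np.asarray(X[i:i + batch_size]), dtype=torch.float32, device=device)
            chunks.append(net(xb).flatten(1).cpu().numpy())  # one flattened embedding per sample
    return np.concatenate(chunks)

vectors_trn = extract_embeddings(net, x_train)
vectors_val = extract_embeddings(net, x_test)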

oguiza commented 1 year ago

No problem. And thanks for documenting your approach.