emadeldeen24 / TS-TCC

[IJCAI-21] "Time-Series Representation Learning via Temporal and Contextual Contrasting"
MIT License

About training a new dataset #14

Closed xuchunyu123 closed 1 year ago

xuchunyu123 commented 1 year ago

Thank you for your work. If I want to input a custom dataset, how do I configure the network parameters? My dataset has shape (1189, 1, 10000), i.e. (number of samples, number of channels, sequence length). Looking forward to your reply.

NithyasreeVP commented 1 year ago

Just want to add something. In the case of an unlabelled custom dataset, what training mode should I use? I could not find an unsupervised option in the list of training modes.

I really appreciate any help you can provide.

emadeldeen24 commented 1 year ago

@xuchunyu123 I believe that you are asking about the base_Model configurations. First, you can use your own custom base model instead of ours (maybe something related to your data from the literature).

If you want to use ours, you first have to create a configuration file here, named after your dataset. You can copy the contents of any existing configuration file, but you need to modify it according to your dataset's settings.

For example, you will update `self.input_channels = 1`. Also, since your sequences are 10,000 samples long, you would probably need a large `self.kernel_size` and a large `self.stride`; you can test that by setting the training mode to supervised and seeing how the model performs. You will need to update `self.features_len` accordingly, or simply use an adaptive average-pooling layer to get a fixed feature length. You also need to update `self.num_classes`.
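To pick `self.features_len` for a long input, one can propagate the sequence length through each conv/pool stage with the standard output-length formula. A minimal sketch follows; the kernel, stride, and padding values here are hypothetical placeholders for a 10,000-sample input, not the repository's actual layer parameters, so check them against the base model you end up using:

```python
def out_len(length, kernel, stride, padding=0):
    # Standard 1-D convolution / pooling output-length formula:
    # floor((L + 2*padding - kernel) / stride) + 1
    return (length + 2 * padding - kernel) // stride + 1

# Hypothetical stages: a wide first conv (large kernel and stride,
# as suggested above) followed by a max-pool.
L = 10000
L = out_len(L, kernel=25, stride=8, padding=12)  # wide conv -> 1250
L = out_len(L, kernel=2, stride=2)               # max-pool  -> 625
print(L)  # candidate value for self.features_len
```

Alternatively, appending `nn.AdaptiveAvgPool1d(n)` after the conv blocks forces the time dimension to `n` regardless of the input length, so `self.features_len` stays fixed.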

For the augmentation parameters, it depends on the signal itself and on whether it is noisy or not. For `self.timesteps` in the temporal contrasting module, you would need to set it to around 40% of `self.features_len`, as recommended by our experiments.
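Putting the advice above together, a custom configuration could look like the sketch below. The class layout mirrors the repository's existing per-dataset config files, but every value here is a placeholder you would tune for your own data, and the field names should be verified against the config file you copied:

```python
# Hypothetical custom config for a (1189, 1, 10000) dataset.
# All values are illustrative assumptions, not recommended settings.
class Config:
    def __init__(self):
        self.input_channels = 1      # single-channel data
        self.kernel_size = 64        # large kernel for 10k-sample sequences
        self.stride = 8              # large stride to shrink the sequence
        self.num_classes = 2         # set to your label count
        self.features_len = 625      # depends on the conv stack / pooling

        # Temporal contrasting: ~40% of features_len, per the advice above.
        self.timesteps = int(0.4 * self.features_len)

cfg = Config()
print(cfg.timesteps)  # 250
```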

emadeldeen24 commented 1 year ago

@NithyasreeVP The unsupervised mode is self_supervised, which trains the model without any labels.