lin-shuyu / VAE-LSTM-for-anomaly-detection

We propose a VAE-LSTM model as an unsupervised learning approach for anomaly detection in time series.

Need Help #1

Closed fire-keeper closed 4 years ago

fire-keeper commented 4 years ago

Hello, I have read your paper, "Anomaly Detection for Time Series Using VAE-LSTM Hybrid Model". I am curious about the exact structure of your model, as I am doing some research on anomaly detection for time series, so I clicked the link you left in your paper. However, I found nothing but a README in the repository. If you have no spare time to post your code, could you please tell me the details of your model's structure? Best wishes to you.

lin-shuyu commented 4 years ago

Hi Fire-keeper,

Many thanks for getting in touch and going through our paper! Apologies for not seeing your question earlier - I'm still new to releasing code on GitHub.

Our VAE model uses simple CNN operations. We treat the time series as a 2D signal, where the feature channels form the 2nd spatial dimension. This allows the model to learn convolutional filters that capture correlations across the channel dimension, if your time series contains multiple channels.
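To make the "time series as a 2D signal" idea concrete, here is a minimal NumPy sketch of a conv2d-style operation whose kernel spans the full channel dimension. The window length, kernel size, and filter count are made up for illustration and are not taken from the repo's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A window of a multi-channel time series: 24 time steps, 3 channels
# (sizes are illustrative only).
window = rng.standard_normal((24, 3))

# Treat it as a 2D "image" with a single feature map: (time, channels, 1).
image = window[..., np.newaxis]
assert image.shape == (24, 3, 1)

# A kernel of height k_t that spans all 3 channels, so every filter
# can mix information across channels, as described above:
k_t, n_filters = 4, 2
kernel = rng.standard_normal((k_t, 3, 1, n_filters))

# Minimal "valid" 2D convolution, sliding only along the time axis:
out = np.stack([
    (image[t:t + k_t, :, :, np.newaxis] * kernel).sum(axis=(0, 1, 2))
    for t in range(image.shape[0] - k_t + 1)
])
assert out.shape == (21, n_filters)  # (24 - 4 + 1) time positions, 2 filters
```

Because the kernel covers all channels at once, each filter output is a learned linear mix of every channel within its time window, which is where the cross-channel correlations come in.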

We just released our code today! The detailed architecture of our model can be found in models.py, in the code/ subfolder of this repository.

Hope you find our paper and code useful. Looking forward to seeing your anomaly detection algorithm soon!

Best wishes, Lin

haopo2005 commented 3 years ago

Hi, if there are multiple channels in my time series, is there no need to expand the feature dimension? input_tensor = tf.expand_dims(self.original_signal, -1)

Will the results differ between applying a separate convolution to each channel and applying a single convolution over all channels?

lin-shuyu commented 3 years ago

Hi Haopo,

Apologies for the delay in getting back!

I have tried this method with multi-channel time series. Yes, in this case you don't need to expand_dims the original_signal.

We use conv2d because we want the conv kernels to capture cross-channel dependencies / correlations. I think the only change you need is to remove the expand_dims call, as you spotted above. You will also need to change "n_channel" in the config.json file to the number of channels in your dataset.
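A small shape sketch of the two cases being discussed (I haven't run the repo; the window length here is illustrative, and the variable names only echo the thread, not the actual code in models.py):

```python
import numpy as np

l_win, n_channel = 24, 3  # illustrative sizes; set "n_channel" in config.json

# Single-channel case: a window has no channel axis yet, so one is added
# before the conv layers - the role of tf.expand_dims(self.original_signal, -1).
single = np.random.randn(l_win)
single_expanded = np.expand_dims(single, -1)
assert single_expanded.shape == (l_win, 1)

# Multi-channel case: the window already carries its channel axis, so the
# expand_dims call is dropped and the window is fed onward as-is.
multi = np.random.randn(l_win, n_channel)
assert multi.shape == (l_win, 3)
```

In other words, the expand_dims call only exists to give single-channel data the channel axis that multi-channel data already has.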

You will probably need to process your dataset into the same format we used for the NAB dataset. Sorry that dataloader.py is probably not the best-documented code.

Hope this helps!

Best wishes, Lin