sintel-dev / Orion

Library for detecting anomalies in signals
https://sintel.dev/Orion/
MIT License
1.06k stars 163 forks

Can Orion handle training a 2TB dataset? #567

Open bigmisspanda opened 2 months ago

bigmisspanda commented 2 months ago

Description

In my case, the training data is very large and cannot be loaded into memory all at once. It seems that time_segments_aggregate, SimpleImputer, MinMaxScaler, and rolling_window_sequences in the pipeline all require the data to be held in memory. Can Orion handle training on a 2-10 TB dataset?

sarahmish commented 2 months ago

Hi @bigmisspanda – thank you for your question!

You are right, all the preprocessing primitives require the data to be in memory.

One workaround is to replace these primitives with your own scalable functions and then start the Orion pipeline directly from the modeling primitive. Another is to chunk up your training data and train the pipeline on each chunk.
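A minimal sketch of the chunking workaround, using pandas' `chunksize` to stream the data so that only one chunk is ever in memory. The in-memory CSV and the `train_on_chunk` placeholder are hypothetical stand-ins for the real multi-terabyte file and the pipeline fitting step, not Orion API:

```python
import io
import pandas as pd

# In-memory stand-in for a signal file far too large to load at once;
# in practice this would be a path to the multi-terabyte CSV on disk.
csv_data = io.StringIO(
    "timestamp,value\n" + "\n".join(f"{t},{t % 7}" for t in range(100))
)

chunk_sizes = []

def train_on_chunk(chunk: pd.DataFrame) -> None:
    # Placeholder for fitting the pipeline (or just its modeling
    # primitive) on a single chunk of the signal.
    chunk_sizes.append(len(chunk))

# read_csv with chunksize yields DataFrames lazily, so only one
# chunk is held in memory at a time.
for chunk in pd.read_csv(csv_data, chunksize=25):
    train_on_chunk(chunk)
```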

bigmisspanda commented 2 months ago

> Hi @bigmisspanda – thank you for your question!
>
> You are right, all the preprocessing primitives require the data to be in memory.
>
> One workaround is to replace these primitives with your own scalable functions and then start the Orion pipeline directly from the modeling primitive. Another is to chunk up your training data and train the pipeline on each chunk.

Yes, thank you for your help. I understand what you mean. My plan is to use TadGAN to train an anomaly detection model. My data comes from power equipment sensors and has over 20 features. If I train in chunks, the MinMaxScaler results will not reflect the global distribution. Referring to the information in that document, my plan is:

  1. Use MinMaxScaler's partial_fit to compute global statistics over the dataset in advance.
  2. Split the data into chunks.
  3. Remove the MinMaxScaler (the third step) from the primitives.
  4. Train the model on each chunk.

Is my approach feasible? Can TadGAN perform similar partial_fit-style training on continuous streaming data?
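The first step of the plan above can be sketched with scikit-learn's `MinMaxScaler.partial_fit`, which accumulates per-feature min/max across chunks so a later pass can transform every chunk with global statistics. The random chunks here are stand-ins for the real sensor data (the real dataset has 20+ features; 3 are used for brevity):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical chunks of a multivariate sensor signal.
rng = np.random.default_rng(0)
chunks = [rng.normal(size=(1000, 3)) for _ in range(5)]

# Pass 1: accumulate global per-feature min/max across all chunks
# without ever holding the full dataset in memory.
scaler = MinMaxScaler()
for chunk in chunks:
    scaler.partial_fit(chunk)

# Pass 2: transform each chunk with the *global* statistics, so chunked
# training sees a consistent [0, 1] range across the whole dataset.
scaled_chunks = [scaler.transform(chunk) for chunk in chunks]
```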

sarahmish commented 2 months ago

Your plan looks logical to me!

I'm not too familiar with what partial_fit does under the hood; however, calling fit multiple times on different data chunks seems analogous to the concept of "incremental learning".
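TadGAN itself does not expose a partial_fit method, but the incremental-learning idea mentioned here can be illustrated with scikit-learn's `SGDRegressor`, one of the estimators that does support it. Each `partial_fit` call updates the model on the new chunk instead of refitting from scratch; the model and synthetic data are placeholders for illustration only, not part of Orion:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(42)
model = SGDRegressor(random_state=0)
true_coef = np.array([1.0, -2.0, 0.5, 3.0])

# Stream the data chunk by chunk: each partial_fit call performs an
# incremental update, so the full dataset never needs to be in memory.
for _ in range(20):
    X = rng.normal(size=(200, 4))
    y = X @ true_coef + rng.normal(scale=0.1, size=200)
    model.partial_fit(X, y)
```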

bigmisspanda commented 2 months ago

> Your plan looks logical to me!
>
> I'm not too familiar with what partial_fit does under the hood; however, calling fit multiple times on different data chunks seems analogous to the concept of "incremental learning".

The concept of partial_fit is consistent with incremental learning. I will follow this approach for testing and training. Thank you for your great work!