lamtung16 / ML_ChangepointDetection


Research Direction: Experiment with different models #14

Open lamtung16 opened 1 week ago

lamtung16 commented 1 week ago
  1. Attention Mechanisms: Attention is well suited to predicting an output from sequences of varying length because it dynamically focuses on the most relevant parts of the input. Instead of processing data step by step like RNNs, attention assigns importance scores (weights) to each element of the sequence, letting the model capture long-range dependencies and contextual relationships more effectively. Because this weighted aggregation works for any sequence length, attention naturally handles inputs and outputs of different sizes, which is why it is standard in tasks like machine translation, where source and target sequences often differ in length (see the attention-pooling sketch after this list).
  2. CNNs: CNNs can process sequences of varying lengths and produce a single output by using convolutional layers to extract features along the sequence and a global pooling layer (e.g., global max or average pooling) to aggregate them into a fixed-size representation. That fixed-size vector is then passed through dense layers to produce the final output, such as a classification label or regression value. Shorter sequences can be padded for batch processing, with masking ensuring the padding does not influence the result. This makes CNNs suitable for tasks like text classification or time-series prediction where input lengths vary (see the convolution-plus-pooling sketch below).
  3. Gaussian Process Regression: GPR takes features extracted from sequences (or embeddings from models like RNNs/CNNs) as input and predicts both a mean and a variance, giving probabilistic outputs that are well suited to interval regression. It excels on small datasets where neural networks might overfit, but exact GPR scales poorly: training cost grows cubically with the number of samples. Its performance also depends heavily on the quality of the input features or embeddings (see the GPR sketch below).
  4. Mixture Density Networks: MDNs can predict one output from sequences of varying lengths by using a neural network, often combined with feature extractors like RNNs or CNNs, to predict the parameters of a probability distribution (e.g., mixture weights, means, and variances). Features from the sequence are summarized into a fixed-size representation (e.g., via pooling), so the MDN can handle varying input lengths. The predicted distribution supports interval predictions, capturing uncertainty and modeling multimodal outputs. However, training can be numerically unstable (mixture components may collapse onto each other), and the outputs are less interpretable than those of direct regression models (see the MDN sketch below).
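
For item 1, here is a minimal attention-pooling sketch in PyTorch. The module name `AttentionPooling`, the hidden size, and the scalar regression head are all illustrative assumptions, not an implementation from this repo; the point is only how attention weights collapse a padded, variable-length batch into one fixed-size vector.

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Scores each timestep, then takes a weighted average so sequences
    of any length collapse to one fixed-size vector (hypothetical module)."""
    def __init__(self, d_in: int, d_hidden: int = 32):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.Tanh(), nn.Linear(d_hidden, 1)
        )
        self.head = nn.Linear(d_in, 1)  # single regression output

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_in); mask: (batch, time), True where data is real
        scores = self.score(x).squeeze(-1)                 # (batch, time)
        scores = scores.masked_fill(~mask, float("-inf"))  # ignore padding
        weights = torch.softmax(scores, dim=-1)            # attention weights
        pooled = (weights.unsqueeze(-1) * x).sum(dim=1)    # (batch, d_in)
        return self.head(pooled).squeeze(-1)               # (batch,)

# Two sequences of different lengths, padded to the same tensor shape.
x = torch.randn(2, 7, 4)
mask = torch.tensor([[True] * 7, [True] * 5 + [False] * 2])
model = AttentionPooling(d_in=4)
print(model(x, mask).shape)  # torch.Size([2])
```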
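
For item 2, a sketch of the convolution-plus-global-pooling idea under the same assumptions (PyTorch, padded batches with a boolean mask; the architecture and kernel sizes are placeholders). Masked global max pooling is what makes the output size independent of sequence length.

```python
import torch
import torch.nn as nn

class ConvRegressor(nn.Module):
    """1-D convolutions extract local features; masked global max pooling
    turns a variable-length sequence into one fixed-size vector (sketch)."""
    def __init__(self, d_in: int = 1, channels: int = 16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(d_in, channels, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.head = nn.Linear(channels, 1)

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_in); mask: (batch, time), True on real timesteps
        h = self.conv(x.transpose(1, 2))                      # (batch, channels, time)
        h = h.masked_fill(~mask.unsqueeze(1), float("-inf"))  # hide padding
        pooled = h.max(dim=-1).values                         # global max pool
        return self.head(pooled).squeeze(-1)                  # (batch,)

x = torch.randn(2, 10, 1)
mask = torch.tensor([[True] * 10, [True] * 6 + [False] * 4])
print(ConvRegressor()(x, mask).shape)  # torch.Size([2])
```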
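
For item 3, a GPR sketch using scikit-learn. The hand-crafted summary statistics in `summarize` (mean, std, log-length) are a stand-in assumption for the RNN/CNN embeddings mentioned above, and the targets are random placeholders; the point is the mean-plus-variance output that yields a predictive interval.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def summarize(seq: np.ndarray) -> np.ndarray:
    """Collapse a variable-length sequence into a fixed-size feature vector.
    Simple summary statistics stand in for learned embeddings here."""
    return np.array([seq.mean(), seq.std(), np.log(len(seq))])

rng = np.random.default_rng(0)
seqs = [rng.normal(size=rng.integers(20, 200)) for _ in range(30)]
X = np.stack([summarize(s) for s in seqs])
y = rng.normal(size=30)  # placeholder targets, not real labels

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)

# Mean and standard deviation give a predictive interval for each input.
mean, std = gpr.predict(X[:3], return_std=True)
print(mean - 1.96 * std, mean + 1.96 * std)
```

Exact inference here requires inverting an n-by-n kernel matrix, which is the cubic cost noted above; sparse or inducing-point approximations would be needed for large n.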
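
For item 4, a minimal MDN head in PyTorch, assuming the variable-length sequence has already been pooled into a fixed-size feature vector (e.g., by either sketch above). The class name `MDNHead`, the number of components, and the clamping constant are illustrative; the loss is the standard mixture negative log-likelihood.

```python
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    """Maps a fixed-size feature vector to the parameters of a
    K-component Gaussian mixture over one scalar target (sketch)."""
    def __init__(self, d_in: int, k: int = 3):
        super().__init__()
        self.params = nn.Linear(d_in, 3 * k)  # logits, means, log-sigmas

    def forward(self, h: torch.Tensor):
        logits, mu, log_sigma = self.params(h).chunk(3, dim=-1)
        # Clamp sigma away from zero to reduce numerical instability.
        return logits, mu, log_sigma.exp().clamp(min=1e-3)

def mdn_nll(logits, mu, sigma, y):
    """Negative log-likelihood of y under the predicted mixture."""
    mix = torch.distributions.MixtureSameFamily(
        torch.distributions.Categorical(logits=logits),
        torch.distributions.Normal(mu, sigma),
    )
    return -mix.log_prob(y).mean()

h = torch.randn(8, 16)  # pooled sequence features (batch, d_in)
y = torch.randn(8)      # scalar targets
head = MDNHead(d_in=16)
loss = mdn_nll(*head(h), y)
loss.backward()
print(loss.item())
```

At prediction time the same mixture object provides quantiles or samples, which is what enables the interval predictions described above.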