Closed KamakshiOjha closed 1 week ago
Thank you for creating this issue! We'll look into it as soon as possible. Your contributions are highly appreciated!
What CNN architectures are you planning to use here? Apart from CNNs, what other models are you planning to implement for this EEG dataset?
@KamakshiOjha
Since my dataset consists of EEG signals, I've experimented with various CNN architectures to find the best results. Additionally, I've incorporated an attention mechanism into my model to enhance its performance.
Input Layer:
- Input shape: (30, 128, 1)

Convolutional Layers:

Branch 1:
- Conv2D with 16 filters, kernel size (1, 3), ReLU activation
- Conv2D with 32 filters, kernel size (1, 3), ReLU activation
- Conv2D with 64 filters, kernel size (1, 3), ReLU activation
- cbam_block applied to the output

Branch 2:
- Conv2D with 16 filters, kernel size (1, 5), ReLU activation
- Conv2D with 32 filters, kernel size (1, 5), ReLU activation
- Conv2D with 64 filters, kernel size (1, 5), ReLU activation
- cbam_block applied to the output

Branch 3:
- Conv2D with 16 filters, kernel size (1, 7), ReLU activation
- Conv2D with 32 filters, kernel size (1, 7), ReLU activation
- Conv2D with 64 filters, kernel size (1, 7), ReLU activation
- cbam_block applied to the output

Combined Branch:
- The outputs from the three branches are combined using an Add layer.

Further Convolutional Layers:
- Conv2D with 64 filters, kernel size (1, 3), ReLU activation
- Conv2D with 128 filters, kernel size (1, 3), ReLU activation
- Conv2D with 128 filters, kernel size (1, 3), ReLU activation

Global Pooling and Fully Connected Layers:
- GlobalAveragePooling2D applied to the last convolutional layer.
- Dense with 512 units, ELU activation
- Dense with 256 units, ELU activation
- Dense with 128 units, ELU activation
- Dense with 32 units, ELU activation
- Output: Dense with 2 units, Softmax activation (for classification)
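For reference, the architecture above can be sketched in TensorFlow/Keras roughly as follows. This is a minimal sketch, not the contributor's actual code: the `cbam_block` here is a simplified stand-in (channel attention only, not the full CBAM with spatial attention), and `padding="same"` is assumed so the three branches produce identically shaped tensors for the `Add` layer.

```python
# Hedged sketch of the multi-branch CNN described above (TensorFlow/Keras).
# cbam_block is a simplified stand-in for the full CBAM attention block.
import tensorflow as tf
from tensorflow.keras import layers, Model

def cbam_block(x, ratio=8):
    # Simplified channel-attention block: squeeze -> excite -> rescale.
    ch = x.shape[-1]
    attn = layers.GlobalAveragePooling2D()(x)
    attn = layers.Dense(ch // ratio, activation="relu")(attn)
    attn = layers.Dense(ch, activation="sigmoid")(attn)
    attn = layers.Reshape((1, 1, ch))(attn)
    return layers.Multiply()([x, attn])

def branch(x, k):
    # Three stacked Conv2D layers (16 -> 32 -> 64 filters) with a (1, k)
    # kernel, followed by the attention block.
    for f in (16, 32, 64):
        x = layers.Conv2D(f, (1, k), padding="same", activation="relu")(x)
    return cbam_block(x)

inp = layers.Input(shape=(30, 128, 1))
# Branches with kernel widths 3, 5, and 7, merged by element-wise addition.
merged = layers.Add()([branch(inp, k) for k in (3, 5, 7)])

x = merged
for f in (64, 128, 128):
    x = layers.Conv2D(f, (1, 3), padding="same", activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
for units in (512, 256, 128, 32):
    x = layers.Dense(units, activation="elu")(x)
out = layers.Dense(2, activation="softmax")(x)

model = Model(inp, out)
```

Note that `padding="same"` matters here: without it, the three branches would shrink the time axis by different amounts (kernel widths 3, 5, 7) and the `Add` merge would fail on mismatched shapes.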
Cool. Go ahead with this approach.
Assigned @KamakshiOjha
Hello @KamakshiOjha! Your issue #784 has been closed. Thank you for your contribution!
Deep Learning Simplified Repository (Proposing new issue)
:red_circle: Project Title : Drowsiness Detection Using EEG Signals

:red_circle: Aim : To develop a deep learning model to detect drowsiness from EEG signals using various algorithms and compare their performance to identify the best-fitted algorithm based on accuracy scores.

:red_circle: Dataset : https://figshare.com/articles/dataset/EEG_driver_drowsiness_dataset/14273687

:red_circle: Approach :
Exploratory Data Analysis (EDA):
Model Development:
Model Training and Evaluation:
Visualization and Conclusion:
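The train/evaluate/compare workflow in the steps above can be sketched as follows. This is an illustrative sketch only: synthetic random arrays stand in for the real EEG dataset (shapes match the (30, 128, 1) input described in this thread), and the tiny single-Conv2D models are placeholders for the actual candidate architectures.

```python
# Sketch of the model-comparison workflow: train several candidate models,
# evaluate each on held-out data, and pick the best by accuracy.
# Synthetic data stands in for the real EEG dataset (an assumption).
import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30, 128, 1)).astype("float32")  # fake EEG windows
y = rng.integers(0, 2, size=200)                          # fake alert/drowsy labels

def make_small_cnn(kernel_width):
    # Placeholder candidate model, parameterized by temporal kernel width.
    return models.Sequential([
        layers.Input(shape=(30, 128, 1)),
        layers.Conv2D(8, (1, kernel_width), activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(2, activation="softmax"),
    ])

results = {}
for k in (3, 5):
    model = make_small_cnn(k)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X[:160], y[:160], epochs=1, verbose=0)
    _, acc = model.evaluate(X[160:], y[160:], verbose=0)
    results[f"kernel_{k}"] = acc

best = max(results, key=results.get)  # best-fitted candidate by accuracy
```

In the real project the random arrays would be replaced by the preprocessed figshare EEG data, and the candidates by the full architectures under comparison.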
Follow the Guidelines to Contribute in the Project :
- `requirements.txt` - This file will contain the required packages/libraries to run the project on other machines.
- Inside the `Model` folder, the `README.md` file must be filled up properly, with proper visualizations and conclusions.

:red_circle::yellow_circle: Points to Note :
:white_check_mark: To be Mentioned while taking the issue :
Happy Contributing!
All the best. Enjoy your open source journey ahead.