By Seongju Lee, Yeonguk Yu, Seunghyeok Back, Hogeon Seo, and Kyoobin Lee
This repo is the official implementation of "SleePyCo: Automatic Sleep Scoring with Feature Pyramid and Contrastive Learning", accepted to Expert Systems With Applications (I.F. 8.5).
./tools/test_custom.py: script for sleep scoring on your own custom data (see below).
Trained and evaluated on an NVIDIA GeForce RTX 3090 with Python 3.8.5.
Set up a Python environment:
conda create -n sleepyco python=3.8.5
conda activate sleepyco
Install a PyTorch version compatible with your environment from the official PyTorch website.
Install the remaining libraries with the following command:
pip install -r requirements.txt
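To confirm the environment is ready for GPU training, here is a minimal sketch (nothing repo-specific is assumed):

```python
# Minimal environment check: Python version, PyTorch version, and CUDA availability.
import sys
import torch

print(sys.version)                # expected: 3.8.5
print(torch.__version__)          # the PyTorch version you installed
print(torch.cuda.is_available())  # should print True for training on the RTX 3090
```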
Download the Sleep-EDF-201X dataset with the following commands (X is 3 or 8):
cd ./dset/Sleep-EDF-201X
python download_sleep-edf-201X.py
Check that the directory structure is as follows:
./dset/
└── Sleep-EDF-201X/
    └── edf/
        ├── SC4001E0-PSG.edf
        ├── SC4001EC-Hypnogram.edf
        ├── SC4002E0-PSG.edf
        ├── SC4002EC-Hypnogram.edf
        └── ...
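To verify the download programmatically, a minimal sketch that counts the PSG recordings (the Sleep-EDF-2013 path is only an example; adjust it for Sleep-EDF-2018):

```python
# Count the downloaded PSG recordings; file names follow the tree above.
from pathlib import Path

edf_dir = Path("./dset/Sleep-EDF-2013/edf")
psg_files = sorted(edf_dir.glob("*-PSG.edf"))
print(f"{len(psg_files)} PSG recordings found")
```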
Preprocess the .edf files into .npz format:
python prepare_sleep-edf-201X.py
Check that the directory structure is as follows:
./dset/
└── Sleep-EDF-201X/
    ├── edf/
    │   ├── SC4001E0-PSG.edf
    │   ├── SC4001EC-Hypnogram.edf
    │   ├── SC4002E0-PSG.edf
    │   ├── SC4002EC-Hypnogram.edf
    │   └── ...
    │
    └── npz/
        ├── SC4001E0-PSG.npz
        ├── SC4002E0-PSG.npz
        └── ...
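To sanity-check the preprocessing output, a minimal sketch that lists what one .npz file contains (the stored keys are repo-specific, so nothing is assumed about them; the Sleep-EDF-2013 path is only an example):

```python
# Inspect one preprocessed file: print the names and shapes of its stored arrays.
import numpy as np

with np.load("./dset/Sleep-EDF-2013/npz/SC4001E0-PSG.npz") as f:
    for name in f.files:
        print(name, f[name].shape)
```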
Pretrain the backbone with contrastive representation learning:
python train_crl.py --config configs/SleePyCo-Transformer_SL-01_numScales-1_{DATASET_NAME}_pretrain.json --gpu $GPU_IDs
With a single GeForce RTX 3090, this may require about 22.3 GB of GPU memory.
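For example, assuming the Sleep-EDF-2013 config follows the naming pattern above (check the configs directory for the exact file name), pretraining on GPU 0 would look like:
python train_crl.py --config configs/SleePyCo-Transformer_SL-01_numScales-1_Sleep-EDF-2013_pretrain.json --gpu 0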
Fine-tune the full model from the pretrained backbone (freeze-finetune):
python train_mtcl.py --config configs/SleePyCo-Transformer_SL-10_numScales-3_{DATASET_NAME}_freezefinetune.json --gpu $GPU_IDs
With two GeForce RTX 3090 GPUs, this may require about 16.7 GB of GPU memory per GPU.
If you use PyTorch $\geq$ 2.0.0, it may require only 5.4 GB of GPU memory.
To train the model from scratch, without contrastive pretraining:
python train_mtcl.py --config configs/SleePyCo-Transformer_SL-10_numScales-3_{DATASET_NAME}_scratch.json --gpu $GPU_IDs
Dataset | Subset | Channel | ACC | MF1 | Kappa | W | N1 | N2 | N3 | REM | Checkpoints |
---|---|---|---|---|---|---|---|---|---|---|---|
Sleep-EDF-2013 | SC | Fpz-Cz | 86.8 | 81.2 | 0.820 | 91.5 | 50.0 | 89.4 | 89.0 | 86.3 | Link |
Sleep-EDF-2018 | SC | Fpz-Cz | 84.6 | 79.0 | 0.787 | 93.5 | 50.4 | 86.5 | 80.5 | 84.2 | Link |
MASS | SS1-SS5 | C4-A1 | 86.8 | 82.5 | 0.811 | 89.2 | 60.1 | 90.4 | 83.8 | 89.1 | Link |
Physio2018 | - | C3-A2 | 80.9 | 78.9 | 0.737 | 84.2 | 59.3 | 85.3 | 79.4 | 86.3 | Link |
SHHS | shhs-1 | C4-A1 | 87.9 | 80.7 | 0.830 | 92.6 | 49.2 | 88.5 | 84.5 | 88.6 | Link |
You can download all checkpoints using the following commands:
cd checkpoints
python download_checkpoints.py
You can also download checkpoints for selected datasets only:
cd checkpoints
python download_checkpoints.py --datasets 'Sleep-EDF-2013' 'Sleep-EDF-2018'
Evaluate a trained model with the following command:
python test.py --config configs/SleePyCo-Transformer_SL-10_numScales-3_{DATASET_NAME}_freezefinetune.json --gpu $GPU_IDs
Prepare your custom data as a NumPy array of shape (1, 1, 30000); this represents 10 input epochs.
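As a minimal sketch of shaping such an array (my_recording.npy is a hypothetical file name, and a 100 Hz sampling rate is assumed so that 10 epochs x 30 s = 30000 samples):

```python
# Shape a single-channel EEG recording into the expected (1, 1, 30000) array.
import numpy as np

signal = np.load("my_recording.npy")          # hypothetical 1-D EEG array, length >= 30000
custom_input = signal[:30000].reshape(1, 1, 30000).astype(np.float32)
np.save("custom_input.npy", custom_input)     # point test_custom.py to this file
```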
Replace line 67 of test_custom.py so that it loads your custom data.
Choose the pretraining dataset and fold whose checkpoint you want to load, then run the following command:
python test_custom.py --config configs/SleePyCo-Transformer_SL-10_numScales-3_{DATASET_NAME}_freezefinetune.json --fold $FOLD --gpu $GPU_IDs
If you get an error like "Access denied with the following error: ...", install the pre-release version of gdown with the following command:
pip install -U --no-cache-dir gdown --pre
The source code of this repository is released only for academic use. See the license file for details.
@article{lee2024sleepyco,
title = {SleePyCo: Automatic sleep scoring with feature pyramid and contrastive learning},
journal = {Expert Systems with Applications},
volume = {240},
pages = {122551},
year = {2024},
issn = {0957-4174},
doi = {https://doi.org/10.1016/j.eswa.2023.122551},
url = {https://www.sciencedirect.com/science/article/pii/S0957417423030531},
author = {Seongju Lee and Yeonguk Yu and Seunghyeok Back and Hogeon Seo and Kyoobin Lee}
}
This research was supported by a grant from the Institute of Information and Communications Technology Planning and Evaluation (IITP) funded by the Korean government (MSIT) (No. 2020-0-00857, Development of cloud robot intelligence augmentation, sharing and framework technology to integrate and enhance the intelligence of multiple robots). Furthermore, this research was partially supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korean government (MOTIE) (No. 20202910100030).