ST-MEM: Spatio-Temporal Masked Electrocardiogram Modeling

This is an official implementation of "Guiding Masked Representation Learning to Capture Spatio-Temporal Relationship of Electrocardiogram".

Paper: https://openreview.net/pdf?id=WcOohbsF4H

Environment

Requirements

- Python 3.9
- PyTorch 1.11.0 / torchvision 0.12.0 / torchaudio 0.11.0 (CUDA 11.3)
- Python packages listed in requirements.txt

Installation

(base) user@server:~$ conda create -n st_mem python=3.9
(base) user@server:~$ conda activate st_mem
(st_mem) user@server:~$ conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
(st_mem) user@server:~$ git clone https://github.com/bakqui/ST-MEM.git
(st_mem) user@server:~$ cd ST-MEM
(st_mem) user@server:~/ST-MEM$ pip install -r requirements.txt
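
After installation, a quick sanity check (a minimal sketch, not part of the repository) confirms that PyTorch sees the GPU before launching any training:

import torch

# Verify the installed PyTorch build and CUDA visibility.
print(f"PyTorch version: {torch.__version__}")        # expected: 1.11.0
print(f"CUDA available:  {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU device:      {torch.cuda.get_device_name(0)}")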

Pre-training

To pre-train ST-MEM with a ViT-B/75 encoder, run the following:

bash run_pretrain.sh \
    --gpus ${GPU_IDS} \
    --config_path ./configs/pretrain/st_mem.yaml \
    --output_dir ${OUTPUT_DIRECTORY} \
    --exp_name ${EXPERIMENT_NAME}
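
For intuition about what pre-training does, the sketch below is illustrative only, not the repository's implementation: it splits each lead of a 12-lead ECG into non-overlapping patches of length 75 (matching ViT-B/75) and masks a random subset of them. The sequence length and masking ratio here are assumptions for the example; the actual values come from configs/pretrain/st_mem.yaml.

import torch

def patch_and_mask(ecg: torch.Tensor, patch_len: int = 75, mask_ratio: float = 0.75):
    """Split each ECG lead into patches and randomly mask a subset.

    ecg: (num_leads, num_samples) tensor; num_samples must be divisible
    by patch_len. Returns the patches and a boolean mask (True = masked).
    Illustrative only; the repository's masking logic may differ.
    """
    num_leads, num_samples = ecg.shape
    num_patches = num_samples // patch_len
    patches = ecg.reshape(num_leads, num_patches, patch_len)

    # Mask patches independently per lead, so reconstructing them requires
    # both spatial (cross-lead) and temporal context.
    mask = torch.rand(num_leads, num_patches) < mask_ratio
    return patches, mask

# Example with a dummy 12-lead ECG; 2250 samples gives 30 patches per lead.
patches, mask = patch_and_mask(torch.randn(12, 2250))
print(patches.shape)  # torch.Size([12, 30, 75])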

We also provide a pre-trained ST-MEM encoder checkpoint.

Downstream training

To fine-tune the ST-MEM ViT-B/75 encoder, run the following:

bash run_downstream.sh \
    --gpus ${GPU_IDS} \
    --config_path ./configs/downstream/st_mem.yaml \
    --output_dir ${OUTPUT_DIRECTORY} \
    --exp_name ${EXPERIMENT_NAME} \
    --encoder_path ${PRETRAINED_ENCODER_PATH}
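
To inspect the pre-trained weights before fine-tuning, a minimal sketch might look like the following. It assumes the checkpoint is a standard PyTorch state dict; the file name and wrapper key below are placeholders, and the exact keys depend on how the encoder was saved.

import torch

# Placeholder path; substitute your actual checkpoint file.
encoder_path = "st_mem_vit_base.pth"

# Load on CPU so inspecting the file does not require a GPU.
checkpoint = torch.load(encoder_path, map_location="cpu")

# Checkpoints are often wrapped, e.g. {"model": state_dict, ...}; unwrap if needed.
state_dict = checkpoint.get("model", checkpoint) if isinstance(checkpoint, dict) else checkpoint

# Print a few parameter names and shapes to confirm what was saved.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))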

Citation

If you find this work or code helpful in your research, please cite:

@inproceedings{na2024guiding,
  title     = {Guiding Masked Representation Learning to Capture Spatio-Temporal Relationship of Electrocardiogram},
  author    = {Na, Yeongyeon and 
               Park, Minje and 
               Tae, Yunwon and 
               Joo, Sunghoon},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://openreview.net/forum?id=WcOohbsF4H}
}