remotebiosensing / rppg

Benchmark Framework for fair evaluation of rPPG
Apache License 2.0

train on VIPL_HR based on MTTS-CAN #8

Closed · huq02 closed this 2 years ago

huq02 commented 2 years ago

Hi, I want to train on VIPL_HR (HR, SpO2) based on MTTS-CAN. I have generated the training file (VIPL_HR_Train.hdf5), but during training an IndexError appears (see the attached screenshot).

Can you help me solve this problem?

SpicyYeol commented 2 years ago

Hi, the error seems to occur in MTTSDataset.py. Our MTTSDataset.py uses the length of hr_label for indexing, so I suspect your appearance_img dataset size doesn't match hr_label:

appearance_img: (total length of the image dataset, batch, channel, height, width)
hr_label: (total length of hr, batch, hr_value)
// total length of the image dataset != total length of hr

I think you need to check your data preprocessing sequence; a quick check like the sketch below can verify that the two lengths match.
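For reference, a minimal length check along those lines, assuming the HDF5 file stores its arrays under keys named appearance_img and hr_label (your actual key names may differ):

```python
import h5py

# Hypothetical key names; use whatever your preprocessing script wrote.
with h5py.File("VIPL_HR_Train.hdf5", "r") as f:
    n_imgs = f["appearance_img"].shape[0]   # total length of the image dataset
    n_labels = f["hr_label"].shape[0]       # total length of hr_label
    print(f"images: {n_imgs}, labels: {n_labels}")
    assert n_imgs == n_labels, "image and label lengths differ"
```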

huq02 commented 2 years ago

Thanks. I confirmed the lengths of hr_label and img_dataset. I have another question: once I get the BVP signal, how do I convert it to HR, SpO2, and RR?


SpicyYeol commented 2 years ago

If you get a clean BVP signal, you can derive HR and RR from it, but not SpO2.

  1. About SpO2 information

The BVP signal carries diffuse-reflection information from all hemoglobin, but SpO2 is derived from separate oxyhemoglobin and deoxyhemoglobin information, which a single BVP waveform does not provide.

In the MTTS case, they use the multi-task approach, which can find features common to the two labels. So, if you train with both the SpO2 and BVP signals, your backbone network learns information about oxy- and deoxyhemoglobin.

  2. How to get HR and RR

I recommend using a band-pass filter. Generally, HR lies in the 0.8-4 Hz band (48 bpm to 240 bpm) and RR in the 0.1-0.5 Hz band (6 to 30 breaths/min); see the filter sketch at the end of this comment.

The second method is peak detection. Every BVP waveform has a recurring pattern, like the PQRST pattern. If you can find the peaks of the BVP signal, you can estimate HR.

I recommend two Python packages:

BIOSPPY (https://biosppy.readthedocs.io/en/stable/index.html) and PyVHR (https://github.com/phuselab/pyVHR)
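A minimal sketch of the band-pass approach, assuming a NumPy array bvp sampled at fs Hz (names, file paths, and parameters here are illustrative, not from the repo):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, fs, low, high, order=4):
    # Zero-phase Butterworth band-pass over [low, high] Hz.
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

fs = 30.0                                 # e.g., the video frame rate
bvp = np.loadtxt("estimated_bvp.txt")     # hypothetical BVP sample file
hr_wave = bandpass(bvp, fs, 0.8, 4.0)     # heart-rate band: 48-240 bpm
rr_wave = bandpass(bvp, fs, 0.1, 0.5)     # respiration band: 6-30 breaths/min
```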

huq02 commented 2 years ago

Thanks!


SpicyYeol commented 2 years ago

If you have any other questions about the rPPG area, send me an email. Have a good day!

huq02 commented 2 years ago

Hi, I want to convert BVP signals to an HR value (bpm) and test with a camera. How can I do this?


SpicyYeol commented 2 years ago

Hi, you must find the peaks in the BVP signal. I recommend the Elgendi algorithm. If you can find the peaks (onsets), you can use the IBI (peak-to-peak interval) to estimate heart rate; see the sketch below.

Implementations of the Elgendi algorithm can be found in NeuroKit and BIOSPPY.

Elgendi M, Norton I, Brearley M, Abbott D, Schuurmans D (2013) Systolic Peak Detection in Acceleration Photoplethysmograms Measured from Emergency Responders in Tropical Conditions. PLoS ONE 8(10): e76585. doi:10.1371/journal.pone.0076585.
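A minimal sketch of that IBI-based estimate, using scipy's generic find_peaks as a stand-in for the Elgendi detector (the distance parameter is an illustrative refractory period, not a tuned value):

```python
import numpy as np
from scipy.signal import find_peaks

def hr_from_bvp(bvp, fs):
    # Detect systolic peaks, enforcing at least 0.25 s between beats (240 bpm cap).
    peaks, _ = find_peaks(bvp, distance=int(0.25 * fs))
    ibi = np.diff(peaks) / fs       # inter-beat intervals in seconds
    return 60.0 / ibi.mean()        # mean heart rate in bpm

# hr = hr_from_bvp(estimated_bvp, fs=30.0)
```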

huq02 commented 2 years ago

Thanks. Have you finished estimating the HR value? Can you share the code?


SpicyYeol commented 2 years ago

Try bvp.bvp in BioSPPy; this function uses the Elgendi algorithm:

bvp.bvp(signal, sampling_rate)

https://biosppy.readthedocs.io/en/stable/biosppy.signals.html#biosppy.signals.bvp.bvp
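For example, a minimal usage sketch (sampling_rate must match the rate of the BVP samples, typically the video fps; the input file name is hypothetical):

```python
import numpy as np
from biosppy.signals import bvp

fs = 30.0                                  # e.g., the video frame rate
signal = np.loadtxt("estimated_bvp.txt")   # hypothetical BVP sample file
out = bvp.bvp(signal=signal, sampling_rate=fs, show=False)
# The result is a named tuple: ts, filtered, onsets, heart_rate_ts, heart_rate.
print(out["heart_rate"])                   # instantaneous heart rate in bpm
```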


huq02 commented 2 years ago

Hi, how can I test HR with a camera?


SpicyYeol commented 2 years ago

Of course:

https://github.com/ubicomplab/rppg-web

https://vitals.cs.washington.edu/

huq02 commented 2 years ago

Thanks! I have solved this problem. I also want to know how to convert or get the PP-Net training data. A .mat file?


SpicyYeol commented 2 years ago

You can get that information in main/utils/seq_preprocess.py.

Can I get your preprocessing sequence for the VIPL dataset?

huq02 commented 2 years ago

You need the VIPL-HR dataset? How can I send it to you? I know about seq_preprocess.py; what is the "path"? The original label file, or does it need to be converted to a label .mat file?


SpicyYeol commented 2 years ago
  1. I know about seq_preprocess.py; what is the "path"? The original label file, or does it need to be converted to a label .mat file?
    • Do you mean the dataset.mat file?
  2. You need the VIPL-HR dataset? How can I send it to you?
    • No, I already have the VIPL-HR dataset, but it doesn't train well, so I want to know how to preprocess it.
huq02 commented 2 years ago

I also want to know: have you finished MTTS-CAN training on UBFC or UBFC_Phys? How do you get the training file (hdf5) and test file (hdf5)?
And how do you calculate respiratory rate directly from BVP?


huq02 commented 2 years ago
  1. Yes. What can I do to get dataset.mat based on UBFC or UBFC_Phys?

  2. I have begun training PhysNet for HR on the VIPL-HR dataset.

  3. I have finished the epochs.


SpicyYeol commented 2 years ago

In the PP-NET paper, they use the "MIMIC" dataset; you can find it at https://physionet.org/.

There is no .mat file preprocessed from the UBFC dataset. Also, UBFC and UBFC_Phys require certification from the data owner.

You can get a pretrained model (Duplicate of #1).

My question is "How do you preprocess the VIPL-HR dataset?" Which method did you use? I think there is a timing issue between the video and the label, and a label-shape issue as well. When I searched other researchers' repos, they used preprocessed labels for training. So I wonder how to preprocess the labels.

huq02 commented 2 years ago

I am sorry; I cannot help with that problem.


SpicyYeol commented 2 years ago

It's OK :)

huq02 commented 2 years ago

I also want to know: have you finished MTTS-CAN training on UBFC or UBFC_Phys? How do you get the training file (hdf5) and test file (hdf5)?
And how do you calculate respiratory rate directly from BVP?


SpicyYeol commented 2 years ago

You can get the datasets from the following sites:

UBFC1 / https://sites.google.com/view/ybenezeth/ubfcrppg
UBFC2 / https://sites.google.com/view/ybenezeth/ubfcrppg

Then preprocess the downloaded data: you can make the hdf5 files yourself, but you can't download preprocessed train/test hdf5 files.

How to calculate respiratory rate directly from BVP: you need the estimated BVP signal to be longer than the fps (well over one second of samples); then take the FFT, band-pass filter to [0.18-0.5 Hz], select the highest frequency, and multiply by 60.
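A minimal sketch of that recipe, assuming a 1-D NumPy array bvp sampled at fs Hz (use a window of at least roughly 20-30 s so the respiration band spans several FFT bins):

```python
import numpy as np

def rr_from_bvp(bvp, fs):
    # Magnitude spectrum of the de-meaned BVP signal.
    spectrum = np.abs(np.fft.rfft(bvp - bvp.mean()))
    freqs = np.fft.rfftfreq(len(bvp), d=1.0 / fs)
    # Keep only the respiration band [0.18, 0.5] Hz.
    band = (freqs >= 0.18) & (freqs <= 0.5)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0     # breaths per minute

# rr = rr_from_bvp(estimated_bvp, fs=30.0)
```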

huq02 commented 2 years ago

Thanks! I have got UBFC1 and UBFC2. I want to know: have you trained MTTS-CAN on UBFC to get HR and RR?

I don't quite understand your advice about getting RR. Can you provide the code for RR? I have the total rPPG; what do I do next to get RR?


SpicyYeol commented 2 years ago

> Thanks! I have got UBFC1 and UBFC2. I want to know: have you trained MTTS-CAN on UBFC to get HR and RR?

Yes, I have, but it's been a while, so I can't remember the result. As I recall, it may have trained well using just UBFC.

> I don't quite understand your advice about getting RR. Can you provide the code for RR? I have the total rPPG; what do I do next to get RR?

I have code, but it's in Java:

```java
public float[][] FFT_trans(int gaussian_w, int BUFFER_SIZE, float[][] f_pixel_buff, Noise noise,
                           float[][] fft_buffer, boolean[] HR_filter, boolean[] RR_filter) {

    this.gaussian_w = gaussian_w;
    this.BUFFER_SIZE = BUFFER_SIZE;

    for (int i = 0; i < gaussian_w * gaussian_w; i++) {
        // Reassemble the time series of pixel i so it can be FFT'd.
        float[] pixel_reconstruct = new float[BUFFER_SIZE];
        for (int j = 0; j < BUFFER_SIZE; j++) {
            pixel_reconstruct[j] = f_pixel_buff[j][i];
        }

        noise.fft(pixel_reconstruct, fft_buffer[i]); // FFT

        // Indices 0 and 1 hold the DC component; the rest run up to Nyquist.
        fft_buffer[i][0] = fft_buffer[i][1] = 0; // zero out the DC component

        for (int j = 0; j < BUFFER_SIZE / 2; j++) {
            // Reuse the imaginary slots (odd indices) as temporary storage:
            // copy each real value there, filtered by the RR band, before the
            // HR filtering below overwrites the real slot (even indices).
            fft_buffer[i][j * 2 + 1] = fft_buffer[i][j * 2 + 2] * (RR_filter[j] ? 1.0f : 0.0f);
            fft_buffer[i][j * 2 + 2] = fft_buffer[i][j * 2 + 2] * (HR_filter[j] ? 1.0f : 0.0f);
        }
    }
    return fft_buffer;
}

public int HR_index(float[][] fft_buffer) {

    // Half of BUFFER_SIZE, since the imaginary field is not used here.
    float[] avg_freq = new float[BUFFER_SIZE / 2];

    // Accumulate the HR-filtered spectrum (even slots) over every pixel
    // of the accumulated image.
    for (int i = 0; i < gaussian_w * gaussian_w; i++) {
        for (int j = 0; j < BUFFER_SIZE / 2; j++) {
            avg_freq[j] += fft_buffer[i][j * 2 + 2];
        }
    }

    // Return the index of the strongest bin. The divisor below is a constant,
    // so it does not change the argmax; HR in bpm then follows from
    // index * fs / BUFFER_SIZE * 60.
    int index = 0;
    float max = 0.0f;
    for (int i = 0; i < BUFFER_SIZE / 2; i++) {
        avg_freq[i] /= (BUFFER_SIZE / 2);
        if (avg_freq[i] > max) {
            max = avg_freq[i];
            index = i;
        }
    }
    return index;
}

public int RR_index(float[][] fft_buffer) {

    // Half of BUFFER_SIZE, since the imaginary field is not used here.
    float[] avg_freq = new float[BUFFER_SIZE / 2];

    // Accumulate the RR-filtered spectrum (odd slots, used as storage)
    // over every pixel of the accumulated image.
    for (int i = 0; i < gaussian_w * gaussian_w; i++) {
        for (int j = 0; j < BUFFER_SIZE / 2; j++) {
            avg_freq[j] += fft_buffer[i][j * 2 + 1];
        }
    }

    // Return the index of the strongest bin in the respiration band.
    int index = 0;
    float max = 0.0f;
    for (int i = 0; i < BUFFER_SIZE / 2; i++) {
        avg_freq[i] /= (BUFFER_SIZE / 2);
        if (avg_freq[i] > max) {
            max = avg_freq[i];
            index = i;
        }
    }
    return index;
}
```

huq02 commented 2 years ago

Thank you very much! Can you provide the code for training MTTS-CAN on UBFC? I cannot convert UBFC into a training data file successfully. MTTS-CAN can get HR and RR together.


huq02 commented 2 years ago

Hi, do you have the PURE dataset?


huq02 commented 2 years ago

Hi, I have got HR, HRV, and RR. I have some questions: (1) When I train on the VIPL_HR dataset, why doesn't the val_loss converge? The error is relatively large; is there a problem with the preprocessing? (2) Do you know how the UBFC_2 label (groundtruth.txt) is obtained?


SpicyYeol commented 2 years ago

> Hi, I have got HR, HRV, and RR. I have some questions: (1) When I train on the VIPL_HR dataset, why doesn't the val_loss converge? The error is relatively large; is there a problem with the preprocessing?
>
> (2) Do you know how the UBFC_2 label (groundtruth.txt) is obtained?

huq02 commented 2 years ago

Please see the picture.


SpicyYeol commented 2 years ago

https://github.com/ZitongYu/PhysFormer

huq02 commented 2 years ago

I have got the PhysFormer demo working, but not the VIPL_HR dataset preprocessing.


huq02 commented 2 years ago

Hi, have you implemented RhythmNet (https://github.com/AnweshCR7/RhythmNet)? I want to know about its dataset preprocessing.


SpicyYeol commented 2 years ago

Look at this link; here it is:

https://github.com/AnweshCR7/RhythmNet/blob/main/src/utils/video2st_maps.py

huq02 commented 2 years ago

I have finished preprocessing the video. I want to know how to preprocess the label: the VIPL_HR wave has no timestamps or peaks. What can I do?


SpicyYeol commented 2 years ago

Yes, I have the same situation in my experiments; there are many timing issues.

I guess that is one reason to use long signals; a resampling sketch follows below.

I'm experimenting with various timing issues right now, so I'll tell you when I solve them.
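One common workaround, stated here as an assumption rather than something settled in this thread, is to resample the label wave onto the video's frame timeline, treating both as spanning the same recording interval:

```python
import numpy as np

def resample_wave_to_frames(wave, n_frames):
    # Assume wave and video cover the same interval; interpolate the wave
    # to one sample per video frame.
    t_wave = np.linspace(0.0, 1.0, num=len(wave))
    t_frames = np.linspace(0.0, 1.0, num=n_frames)
    return np.interp(t_frames, t_wave, wave)

# label = resample_wave_to_frames(wave_values, n_frames=frame_count)
```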

huq02 commented 2 years ago

Have you dealt with the problem of preprocessing VIPL_HR? I found a problem I want to discuss with you:

I don't use the val datasets. The train_loss (p1_p10) can converge, MAE: 7.6 bpm (p1_v1_source3.hdf5), but the train_loss (p1_p46) is NaN. I don't know why; have you ever had a similar problem?


huq02 commented 2 years ago

Hi, I have finished training on UBFC+VIPL_HR (MAE: 10 bpm). Can you share the PURE dataset?


SpicyYeol commented 2 years ago

No; it can only be shared with the permission of the data creator. From my experience, use V4V rather than PURE.

SpicyYeol commented 2 years ago

Which model did you successfully train with?

huq02 commented 2 years ago

TS_CAN


huq02 commented 2 years ago

I want to compare my results with MTTS_CAN. I need PURE, MMSE-HR, and AFRL.


huq02 commented 2 years ago

Hi, why is the HR value so high? I don't know the reason; have you met this problem? Is sample_rate the sampling frequency (Hz)? 60, 100, or 120?


nizhezhiwei commented 2 years ago


> You can get information in main/utils/seq_preprocess.py

Hello, I'm also studying rPPG. I have implemented PhysNet, PhysFormer, and RTRPPG, and trained them on the UBFC dataset with MSELoss and a negative Pearson loss. However, my best results are RMSE 4.3, MAE 3.3, PC 0.96; do you have any better results? Besides, I don't know how to preprocess the VIPL dataset; can I get your preprocessing sequence for it?
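For reference, a minimal sketch of a negative Pearson loss as commonly paired with PhysNet-style models (my own rendering, not the poster's exact code):

```python
import torch

def neg_pearson_loss(pred, label):
    # pred, label: (batch, T) predicted and ground-truth rPPG waveforms.
    pred = pred - pred.mean(dim=1, keepdim=True)
    label = label - label.mean(dim=1, keepdim=True)
    cov = (pred * label).sum(dim=1)
    denom = torch.sqrt((pred ** 2).sum(dim=1) * (label ** 2).sum(dim=1)) + 1e-8
    return (1.0 - cov / denom).mean()  # 0 when perfectly correlated, 2 when anti-correlated
```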
