Hi @Dream999999,
Regarding the large error on an individual video: it could very well be the case that there is significant error there - can you share which video it is in particular (e.g., which of the PURE videos)? For within-dataset (intra-dataset) training in particular, you may find that you need to tweak the config settings / hyperparameters a bit - that's worth keeping in mind if you're still using the default cross-dataset configs for now.
Also, are you sure you have the latest changes in the repo? After this particular commit, you shouldn't see that large a standard error in RMSE anymore. Double-check to make sure you have this commit.
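For intuition on why a single bad video can dominate the aggregate metric, here's a toy example (the error values below are made up, not taken from your run):

```python
import numpy as np

# Hypothetical per-video HR errors in BPM: nine near-perfect videos
# plus one large outlier. Because RMSE squares the errors, the single
# miss dominates the aggregate, while MAE degrades more gracefully.
errors = np.array([1.0, 0.5, 1.2, 0.8, 0.3, 1.1, 0.6, 0.9, 0.4, 40.0])
mae = np.mean(np.abs(errors))          # ~4.7 BPM
rmse = np.sqrt(np.mean(errors ** 2))   # ~12.7 BPM
print(f"MAE = {mae:.1f} BPM, RMSE = {rmse:.1f} BPM")
```

So one outlier video is enough to make RMSE look alarming even when every other video is predicted well.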
Thank you for your reply. I am using the PURE dataset, and the video labeled 09-01 in particular is unusual, with poor prediction results. The code I am using was also downloaded recently. For testing I am using the config file `PURE_UBFC-rPPG_PHYSNET_BASIC.yaml`, with the dataset changed to PURE and the split parameters set to `BEGIN: 0.8` and `END: 0.9` - this was solely to test performance on the 09 folder. For the training config `PURE_PURE_UBFC-rPPG_PHYSNET_BASIC.yaml`, I changed it to use the single PURE dataset, divided according to a 6:2:2 ratio. Apart from these changes, I am using the default parameters in the configuration files. The problematic video is the first recording of the ninth subject (under the first scene). *(screenshot of the prediction failed to upload)*
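For reference, the split arithmetic I have in mind works roughly like this (a toy Python illustration; the toolbox's actual split code may differ, and I'm assuming the full 10 subjects x 6 sessions of PURE):

```python
# Illustrative only: how fractional BEGIN/END values select a slice of
# the sorted PURE recordings (assumed here: 10 subjects x 6 sessions).
recordings = [f"{subj:02d}-{sess:02d}" for subj in range(1, 11)
              for sess in range(1, 7)]

def select(records, begin, end):
    """Take the fraction [begin, end) of the sorted recording list."""
    n = len(records)
    return records[int(begin * n):int(end * n)]

train = select(recordings, 0.0, 0.6)  # first 60% of recordings
valid = select(recordings, 0.6, 0.8)  # next 20%
test = select(recordings, 0.8, 0.9)   # BEGIN: 0.8, END: 0.9
print(test)  # ['09-01', ..., '09-06'] -> exactly the 09 folder
```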
@Dream999999,
Can you be more specific than "recently downloaded"? Did you clone it using `git`, and if so, is the commit I previously mentioned in your `git log`? I am fairly sure you are missing that commit if you are still able to get that large a standard error for RMSE - please do double-check this.
Also, I think it's entirely possible that that specific video has a large error in intra-dataset testing - if you want to debug this further, you should investigate how that same video performs in cross-dataset testing scenarios. A good starting point would be to use the relevant pre-trained models provided in this repo.
I'll go ahead and close this given the lack of further discussion, but do feel free to comment again or make a new issue if your concerns persist @Dream999999.
I apologize for disturbing you; I have a question I'd like to consult you about. I am using the PURE dataset for intra-dataset testing, split in a 6:2:2 ratio, with the PhysNet model, and the RMSE after training is quite large. When I print out the per-video values computed by the FFT step in the test code, there is a significant discrepancy between the predicted and actual values for the 09-01 recording, while all of the other predictions are very close to the actual values. I'm not sure whether there is an issue here - is this normal?

Also, I noticed a file named PURE_Comparison.csv in the wip folder of the code, which I can compare my own results against. I also used the pre-trained model provided in the code to test only the 09 folder, and the resulting RMSE is also very large; the prediction looks similar to the one shown in the graph I attached. Thank you for your reply.
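For context, the per-video check I describe above is roughly the following (a minimal sketch of FFT-peak rate estimation, not the toolbox's actual post-processing code; the function name is mine and I'm assuming a 30 Hz sampling rate):

```python
import numpy as np

def hr_from_bvp(bvp, fs=30.0, lo=0.75, hi=2.5):
    """Estimate heart rate (BPM) from a BVP signal as the dominant
    FFT peak inside a plausible HR band (here 45-150 BPM)."""
    bvp = np.asarray(bvp, dtype=float)
    bvp = bvp - bvp.mean()
    freqs = np.fft.rfftfreq(len(bvp), d=1.0 / fs)
    power = np.abs(np.fft.rfft(bvp)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Compare predicted vs. ground-truth rate for one video, e.g. 09-01:
# print(hr_from_bvp(predicted_bvp), hr_from_bvp(label_bvp))
```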