Hi @LL-Zs,
Can you elaborate on "speed measurement method for a point"? Is this about measuring the inference time of a supervised or unsupervised approach, and whether that should be an end-to-end measurement or one that excludes steps such as pre-processing and post-processing? Or are you referring to something else?
Thank you for your reply. Our work is cross-dataset, so we would like to compare the time required to train on dataset A and test on dataset B. Initially, we wanted to compare the time required to process one frame, but the results we obtained did not match the training time and parameter count. Therefore, we would like to find a better testing method.
I'm still a bit confused by the way you described this - aren't you really just interested in the time it takes to train a model given a specific dataset and the time it takes to do test-time inference? Of course, both of these can be influenced by certain factors (e.g., train-time batch size and test-time batch size, both of which are defined in example training configs).
The way you phrased comparing "the time required to train on dataset A and test on dataset B" seems a bit strange since those both seem like two separate things - a total model training duration and a test-time inference time. The former is likely to vary a fair bit depending on the dataset involved (e.g., PURE versus SCAMPS), but the latter is probably some reasonable average in milliseconds (which you could maybe measure on some kind of target device, such as a Raspberry Pi) using a batch size of 1 or by dividing a measurement by the batch size.
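For example, here is a rough sketch (not an official toolbox utility) of how per-sample inference latency could be measured in PyTorch, using a stand-in model and input shape that you would replace with your own; the warm-up and synchronization steps matter because GPU kernel launches are asynchronous:

```python
import time
import torch

# Stand-in model and input shape: replace with your own network and chunk size.
model = torch.nn.Conv3d(3, 8, kernel_size=3, padding=1)
model.eval()

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

batch_size = 1  # or measure at a larger batch size and divide by it
x = torch.randn(batch_size, 3, 180, 72, 72, device=device)  # (N, C, T, H, W)

with torch.no_grad():
    # Warm-up passes: the first forward calls pay one-off CUDA/setup costs.
    for _ in range(10):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()

    n_runs = 50
    start = time.perf_counter()
    for _ in range(n_runs):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU work before stopping the clock
    elapsed = time.perf_counter() - start

per_sample_ms = elapsed / n_runs / batch_size * 1000
print(f"average inference latency: {per_sample_ms:.2f} ms per sample")
```

Dividing the per-sample figure by the number of frames in each input chunk (180 in this stand-in) would then give a per-frame number that is comparable across models.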
We only compare the test time, but the test time obtained using the method in the official code is not consistent with the amount of data. After our modification, the dataset has more than doubled, but the test time has only doubled, and the training time has increased with the parameter count, so we want to find another method that can be used to compare speed.
Hi @LL-Zs,
Could you please explain to me the specific timing experiment you are attempting to conduct?
Is it something along the lines of one of the following:
training time to convergence
model throughput (e.g., frames per second)
model compute operations (e.g., FLOPs) (see the sketch below)
Thanks, Girish
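As a concrete example of the last option, here is a minimal sketch of counting parameters and multiply-accumulates, assuming a PyTorch model and the third-party thop package (a stand-in model is used here in place of a real rPPG network):

```python
import torch
from thop import profile  # third-party FLOP counter: pip install thop

# Stand-in model and clip shape; substitute your own network and input size.
model = torch.nn.Conv3d(3, 8, kernel_size=3, padding=1)
x = torch.randn(1, 3, 180, 72, 72)

macs, params = profile(model, inputs=(x,))
print(f"params: {params / 1e3:.1f} K")
print(f"MACs:   {macs / 1e9:.2f} G (FLOPs is roughly 2x the MAC count)")

# Parameter and FLOP counts are deterministic, so they give a more stable basis
# for comparing two model variants than a single wall-clock timing run.
```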
Thank you for your reply. We recorded the time to train an epoch and the reasoning time to test (the first row in the figure). The time to train an epoch can intuitively see the speed change before and after modification, and the reasoning time is used to calculate the time required for each frame (the reasoning time is divided by the total number of video frames). For example, we added a small module to the original code, the number of parameters changed from 20,000 to more than 50,000, the time to train an epoch became 2 seconds longer, but the inference time changed from 63 seconds to 32 seconds.
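(For reference: when the per-frame figure comes from a single full pass over the test set, the measurement also includes data loading, host-to-device copies, and one-off warm-up costs, which can produce surprising results like 63 s versus 32 s. Below is a rough sketch of timing the whole test loop and normalizing by the number of frames, assuming a hypothetical PyTorch model and DataLoader rather than the toolbox's own evaluation code.)

```python
import time
import torch

def timed_test_pass(model, test_loader, device=None):
    """Time one full pass over a test DataLoader and report seconds per frame.

    `model` and `test_loader` are hypothetical placeholders for your own
    network and loader; inputs are assumed to be shaped (N, C, T, H, W).
    """
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    model.eval().to(device)
    total_frames = 0

    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()

    with torch.no_grad():
        for clips, _labels in test_loader:
            clips = clips.to(device)
            model(clips)
            total_frames += clips.shape[0] * clips.shape[2]  # batch size x frames per clip

    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    # This includes data loading and host-to-device copies, so the first run
    # pays one-off costs (disk cache, CUDA init) that can distort comparisons.
    return elapsed, elapsed / max(total_frames, 1)
```

Running it twice and keeping the second measurement, or averaging several runs, usually makes comparisons between model variants much more stable.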
Given the lack of further discussion, I'll go ahead and close this issue. Feel free to reply back if there are more concerns or make a new issue if needed.
Hello, we have done some work based on the methods provided by the Toolbox. However, since no specific speed measurement method is mentioned in any of the papers, we are currently struggling to find a suitable one. We are asking the Toolbox maintainers whether there is a widely recognized speed measurement method.