Open saranzeb opened 2 years ago
Sorry, the code is in a somewhat scrappy format; hopefully we will clean it up one day :). However, the answers you are looking for are already in the following code. Please read the headline of each block to understand its purpose.
https://github.com/mxahan/project_rppg/blob/master/Codes/mtlnet_train_rppg_owncol_cleaned.py
In lines 460 and 461, we create two networks. You can think of them as two networks sharing mtl_body but with different heads. There you have it: two heads working in parallel.
To run the MTL net, we need to load another dataset and store it in another trX, trY (line 188 and the following lines). Once you have it, just run the train_nn function (line 291). You should see the two networks (neural_net1 and neural_net2) running one after another (standard MTL practice).
Hope this helps.
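As an aside for readers, the shared-body/two-head construction described above can be sketched roughly as follows. The class names `MtlNetwork_body` and `MtlNetwork_head` follow the repository, but the layer sizes are placeholders and plain NumPy stands in for TensorFlow so the sketch is self-contained:

```python
import numpy as np

class MtlNetwork_body:
    """Shared feature extractor: one set of weights used by every head."""
    def __init__(self, in_dim=8, hidden=4):
        rng = np.random.default_rng(0)
        self.W = rng.standard_normal((in_dim, hidden))

    def __call__(self, x):
        return np.maximum(x @ self.W, 0.0)  # ReLU features

class MtlNetwork_head:
    """Task-specific head: its own weights on top of the shared body."""
    def __init__(self, body, num_classes=2, hidden=4):
        rng = np.random.default_rng(1)
        self.body = body
        self.V = rng.standard_normal((hidden, num_classes))

    def __call__(self, x):
        return self.body(x) @ self.V

# One shared body, two heads: "two networks" that differ only in the head.
mtl_body = MtlNetwork_body()
neural_net1 = MtlNetwork_head(mtl_body, num_classes=2)
neural_net2 = MtlNetwork_head(mtl_body, num_classes=2)

x = np.ones((1, 8))
print(neural_net1(x).shape, neural_net2(x).shape)  # both (1, 2)
print(neural_net1.body is neural_net2.body)        # True: the body is shared
```

Because both heads hold a reference to the same body object, a gradient step through either head updates the shared weights, while each head's own weights belong to that head alone.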
Alright sir, thank you for your detailed answer. So far I understand that we have to manually (no iteration loop) create another train_data1 dataset from subject 2 (person 2), for which we have to repeat the whole process of subject 1, and also have to manually add neural_net2? That is, define train_nn as something like:

def train_nn(neural_net1, neural_net2, train_data, train_data1):
    for step, ((batch_x, batch_y), (batch_x1, batch_y1)) in enumerate(
            zip(train_data.take(training_steps), train_data1.take(training_steps)), 1):
        # body + Head1 training
        run_optimization(neural_net1, batch_x, batch_y)
        # body + Head2 training
        run_optimization(neural_net2, batch_x1, batch_y1)

Is this the way? But in the paper you have also mentioned that while training one head we freeze the other head's weights, so how can we do this with the above script?
Hello sir, sorry, I have one more question regarding training steps. Do the training steps mean epochs in your code? The number is very big, i.e., 80000. Does it mean we are training the model for 80000 epochs? If yes, is it normal to train the model for such a high number of epochs?
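For readers wondering about the same thing: in a loop of this style, `training_steps` counts mini-batches drawn via `dataset.take()`, not epochs, so the equivalent epoch count depends on the dataset and batch sizes. The numbers below are purely illustrative, not taken from the repository:

```python
# training_steps counts mini-batches, not epochs.
# Equivalent epochs = training_steps / batches per epoch
# (all numbers below are illustrative assumptions).
num_samples = 5000
batch_size = 32
training_steps = 80000

steps_per_epoch = -(-num_samples // batch_size)  # ceiling division
equivalent_epochs = training_steps / steps_per_epoch
print(steps_per_epoch, round(equivalent_epochs, 1))  # 157 509.6
```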
The training code takes care of it automatically. As you can see, when one network's head is trained, the other head sits still (it acts as frozen).
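A minimal sketch of why the untrained head sits still: each optimization step only touches the variables of the network it was handed, so the other head's weights never move. Here NumPy stands in for TensorFlow, and `run_optimization` is a toy squared-error gradient step, not the repository's function:

```python
import numpy as np

rng = np.random.default_rng(0)
body_W = rng.standard_normal((4, 3))   # shared body weights
head1_V = rng.standard_normal((3, 2))  # head 1 weights
head2_V = rng.standard_normal((3, 2))  # head 2 weights

def run_optimization(head_V, x, y, lr=0.1):
    """Toy gradient step on the shared body plus ONE head."""
    global body_W
    h = x @ body_W
    err = h @ head_V - y
    grad_V = h.T @ err                # gradient w.r.t. this head only
    grad_W = x.T @ (err @ head_V.T)   # gradient w.r.t. the shared body
    head_V -= lr * grad_V
    body_W -= lr * grad_W

x, y = np.ones((1, 4)), np.zeros((1, 2))
body_before = body_W.copy()
head2_before = head2_V.copy()

run_optimization(head1_V, x, y)  # train body + head 1

print(np.allclose(head2_V, head2_before))  # True: head 2 acted as frozen
print(np.allclose(body_W, body_before))    # False: the shared body did move
```

No explicit freezing is needed: head 2's weights simply do not appear in the computation being optimized, so they receive no gradient.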
Hello sir, hope you are doing well. I need a little help regarding the code of the MTL model. Is it this part?

mtl_body = MtlNetwork_body()
head1 = MtlNetwork_head(num_classes)
head2 = MtlNetwork_head(num_classes)

I have tried it, but I don't know how to run both heads in parallel after MtlNetwork_body; as far as I know, there is no code in your repository that runs the two heads in parallel. Secondly, how do I prepare the data for the MTL model for multiple persons? I didn't find code in your repository that extracts the videos and the corresponding PPG from multiple subjects and trains them on the shared network.
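Regarding data preparation for multiple subjects, one common pattern looks roughly like the sketch below: load each subject's frames and PPG targets separately, then iterate the two subjects' batches side by side, one per head. The loader, the array shapes, and the second subject's variables (`trX1`, `trY1`) are assumptions for illustration, not taken from the repository:

```python
import numpy as np

# Placeholder loader: in practice this would read one subject's video
# frames and the corresponding PPG signal (shapes are illustrative).
def load_subject(seed, n=16, frame_dim=8):
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n, frame_dim)), rng.standard_normal((n, 1))

trX,  trY  = load_subject(seed=1)   # subject 1 -> feeds head 1
trX1, trY1 = load_subject(seed=2)   # subject 2 -> feeds head 2

def batches(X, Y, batch_size=4):
    """Minimal stand-in for a tf.data pipeline: yields (batch_x, batch_y)."""
    for i in range(0, len(X), batch_size):
        yield X[i:i + batch_size], Y[i:i + batch_size]

# Alternate the two heads within one loop, one subject's batch each.
for step, ((bx, by), (bx1, by1)) in enumerate(zip(batches(trX, trY),
                                                  batches(trX1, trY1)), 1):
    pass  # run_optimization(neural_net1, bx, by)
          # run_optimization(neural_net2, bx1, by1)

print(step)  # 4: 16 samples / batch size 4
```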