smellslikeml / ActionAI

Real-Time Spatio-Temporally Localized Activity Detection by Tracking Body Keypoints
https://www.hackster.io/actionai/actionai-custom-tracking-multiperson-activity-recognition-fa5cb5
GNU General Public License v3.0

train_sequential dataset #61

Open · JJLimmm opened this issue 1 year ago

JJLimmm commented 1 year ago

Hi @smellslikeml ,

I have read through the README.md provided, but I would like to clarify some things that are not mentioned in it.

  1. For the dataset, inside each subdirectory (whose name is the label of the action we want to classify), do we put in the sequence of images that constitutes the action (e.g. squatting), or only images of people in the squat position?
  2. Related to the first question: if a sequence of images is required, can we put in more than one sequence of squatting?
  3. Do we only have to change the conf.py file when using train_sequential? What is the full list of things we need to modify?

Thank you!

smellslikeml commented 1 year ago

Hi, for the first question: we've recorded a sequence of video frames into the labeled directory while performing the activity repeatedly.

As for the second, we do use many repetitions in the training sequence, but you can also use more video clips for more variety in perspective. Here, you may want to include some logic to ensure your training segments do not include frames from different sessions; see the sketch below.
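
For illustration, a guard like this could work; the directory layout (one session_* subfolder per recording under each label) and the window length are assumptions for the sketch, not the repo's required structure:

```python
import os
from glob import glob

WINDOW_SIZE = 30  # hypothetical sequence length, in frames

def session_windows(label_dir, window_size=WINDOW_SIZE):
    """Yield fixed-length frame windows that never span two recording sessions."""
    for session in sorted(glob(os.path.join(label_dir, "session_*"))):
        frames = sorted(glob(os.path.join(session, "*.jpg")))
        # Slide only within this session, so no window mixes the end
        # of one clip with the start of another.
        for start in range(len(frames) - window_size + 1):
            yield frames[start:start + window_size]

# e.g. every clean 30-frame window of squatting:
for window in session_windows("data/squat"):
    ...  # run pose estimation on each frame, then feed the sequence model
```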

It's been a while since we tried training with train_sequential; we'll give it a run and report back and/or update the README with some notes.

I hope this helps and thank you for your feedback!

Cheers!


JJLimmm commented 1 year ago

Hi @smellslikeml ,

Thanks for sharing more details on the workflow for this repo. I have another question, and it pertains to the classifier. Is the dataset preparation the same as for training the LSTM (i.e. a sequence of images rather than just images of the action)? Or do I only need to include images of the action alone, with the label for the action as the folder name?

For preprocessing the dataset to output the CSV file, the preprocess.py file seems to prepare data only for the LogisticRegression classifier and not for the LSTM. How did you prepare the data for training the LSTM model?

Thank you!

JJLimmm commented 1 year ago

@smellslikeml Oh, and also: for the classifier.sav model, what type of classifier are you using?

And if I want to classify more than 2 classes (e.g. 5 classes: squats, lunge, walking, standing, sitting), what do I need to change to train a new classifier?

Thanks!

cclauss commented 1 year ago

I am not a maintainer of this repo so please remove the @mention of my name.

smellslikeml commented 1 year ago

The .sav format was for saving models from the scikit-learn framework. These kinds of activities (squat, lunge, etc.) are a good fit for ActionAI since they are well characterized by body pose and relatively slowly varying.

You only need to add samples to the training workflow, or add buttons to the PS3 controller configuration in the mapping defined by activity_dict in `experimental/config.py`.
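
For example, a five-class setup might extend that mapping like the sketch below; the key convention (controller button index to label) is an assumption here, so check it against the dictionary as it actually appears in `experimental/config.py`:

```python
# Hypothetical five-class mapping for activity_dict in experimental/config.py.
# Keys are assumed to be PS3 controller button indices used to tag samples
# during capture; values are the class labels the classifier will learn.
activity_dict = {
    0: "squats",
    1: "lunge",
    2: "walking",
    3: "standing",
    4: "sitting",
}
```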

JJLimmm commented 1 year ago

@smellslikeml So the .sav and .h5 formats are actually just from different frameworks (scikit-learn and tf.keras, respectively)? If training the classifier with scikit-learn, do we then have to put in a sequence of images, or just images of the moment the action occurs?

mayorquinmachines commented 1 year ago

Yes, that's right - .sav is from scikit-learn, .h5 from tf.keras. If training a classifier from scikit-learn, you could use a sequence of pose estimations.
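
In other words, the two extensions just reflect each framework's usual serialization. A minimal sketch of both (the feature sizes and model shapes are placeholders, not ActionAI's actual values):

```python
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression
import tensorflow as tf

# Dummy pose features: 100 samples x 36 values (e.g. 18 keypoints x (x, y)).
X, y = np.random.rand(100, 36), np.random.randint(0, 5, 100)

# scikit-learn model -> .sav (plain pickle serialization).
clf = LogisticRegression(max_iter=1000).fit(X, y)
with open("classifier.sav", "wb") as f:
    pickle.dump(clf, f)

# tf.keras model -> .h5 (HDF5 serialization).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(30, 36)),   # 30 timesteps of 36 pose features
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.save("classifier.h5")
```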

JJLimmm commented 1 year ago

Hi @mayorquinmachines ,

Thanks for clarifying! But if I were to classify 5 classes (squats, lunges, walking, sitting, standing), wouldn't a sequence of images confuse the classifier if, say, I were to use the KNN classifier from scikit-learn?

smellslikeml commented 1 year ago

Yes, thank you for pointing this out: the LSTM classifier's input is a sequence of features extracted from the images, not the raw images themselves. We'll update with a preprocessing example on this soon.
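
As a sketch of that preprocessing, assuming each frame's pose estimate has already been flattened into a fixed-length feature vector (the window length and feature count below are illustrative):

```python
import numpy as np

WINDOW = 30      # illustrative timesteps per training sample
N_FEATURES = 36  # e.g. 18 keypoints x (x, y) per frame

def to_sequences(frame_features, window=WINDOW):
    """Stack per-frame pose features into an (n_windows, window, n_features)
    array, the shape an LSTM expects as input."""
    windows = [
        frame_features[i:i + window]
        for i in range(len(frame_features) - window + 1)
    ]
    if not windows:
        return np.empty((0, window, frame_features.shape[1]))
    return np.stack(windows)

# One 120-frame clip of pose features becomes 91 overlapping sequences,
# all sharing the clip's label.
clip = np.random.rand(120, N_FEATURES)
X = to_sequences(clip)           # shape: (91, 30, 36)
y = np.zeros(len(X), dtype=int)  # label index for this clip's class
```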
