Closed. e-xf closed this issue 2 years ago.
No, but you could have a single vector for the target. @eduardoftdo Do you still need help?
Hey @benedekrozemberczki yes, I could use some help!
My original idea was to extract hand landmark data from multiple videos using MediaPipe. Each video contains only one class, and I generated a JSON file for each video in the same format as the MTM example. My question here was about how to build a dataset from those files, but I wasn't able to, so I tried a different approach where I treated all of the files as if they were a single video. With help from Alex Riedel I was able to create the dataset this way, but I couldn't get a model to train on it.
What I would like to do is go through all the individual files and create a StaticGraphTemporalSignal dataset with snapshots of, say, 100 frames, and train a model to classify each snapshot into one of the classes. Is it also possible to do the train/val split per file? (For every video, we would hold out a part for validation.)
To make it clearer, I uploaded a sample of this data here with 5 different JSON files, each from a unique video. Is it possible to create a dataset from data in this format and split it into train and validation sets with a 0.2 ratio of each class?
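A rough sketch of the windowing and per-file split described above, using only NumPy. Function names, the 100-frame window, and the 21-landmark × 3-coordinate frame shape (MediaPipe Hands) are illustrative assumptions, not part of any library API:

```python
import numpy as np

def make_snapshots(frames, window=100):
    """Split one video's frame sequence into fixed-length snapshots,
    dropping the trailing remainder."""
    n = len(frames) // window
    return [frames[i * window:(i + 1) * window] for i in range(n)]

def split_per_file(snapshots, val_ratio=0.2, seed=0):
    """Hold out a fraction of each file's snapshots for validation,
    so every class (one class per file) appears in both splits."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(snapshots))
    n_val = max(1, int(round(val_ratio * len(snapshots))))
    val_idx = set(idx[:n_val].tolist())
    train = [s for i, s in enumerate(snapshots) if i not in val_idx]
    val = [s for i, s in enumerate(snapshots) if i in val_idx]
    return train, val

# hypothetical example: one video with 520 frames of 21 landmarks x 3 coords
frames = [np.zeros((21, 3)) for _ in range(520)]
snaps = make_snapshots(frames)      # 5 snapshots of 100 frames each
train, val = split_per_file(snaps)  # 4 train snapshots, 1 validation
```

Doing the split per file like this is what gives the 0.2 ratio of each class; the library's built-in `temporal_signal_split` splits one signal chronologically, which would not balance classes across files.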
Hey @eduardoftdo, were you able to do that? I'm trying to do something similar but can't seem to format the data properly. I also couldn't find any proper example of using StaticGraphTemporalSignalBatch.
Hi everyone. I'm working on a project with hand landmark data similar to the one used in the MTMDatasetLoader.
However, I have multiple files where each file holds graph representations over time but only one target per file (unlike MTM, where the target changes over time).
Is there any example of using this dataset loader to iterate over those multiple files and create a StaticGraphTemporalSignal to train an STGCN?
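A minimal sketch of how the per-file data could be flattened into the parallel `features`/`targets` lists that `StaticGraphTemporalSignal` takes, assuming each file yields a list of per-frame (21, 3) landmark arrays and a single integer label. The `assemble` helper, the toy data, and the placeholder edge list are my own assumptions; the constructor call at the end is the torch-geometric-temporal API, left commented out so the sketch runs without the library installed:

```python
import numpy as np

def assemble(files):
    """files: list of (frames, label) pairs, where frames is a list of
    (21, 3) landmark arrays and label is an int class id.

    One target per file: repeat the file's label for every frame so the
    targets list lines up one-to-one with the features list."""
    features, targets = [], []
    for frames, label in files:
        features.extend(np.asarray(f) for f in frames)
        targets.extend(np.array(label) for _ in frames)
    return features, targets

# hypothetical data: two short videos, classes 0 and 1
files = [([np.zeros((21, 3))] * 3, 0), ([np.ones((21, 3))] * 2, 1)]
features, targets = assemble(files)

# The hand skeleton is the same in every frame (static graph), so one
# edge_index/edge_weight pair is shared; a tiny placeholder edge list here.
edge_index = np.array([[0, 1], [1, 2]]).T  # shape (2, num_edges)
edge_weight = np.ones(edge_index.shape[1])

# with torch-geometric-temporal installed:
# from torch_geometric_temporal.signal import StaticGraphTemporalSignal
# signal = StaticGraphTemporalSignal(edge_index, edge_weight, features, targets)
```

Iterating over the resulting signal then yields one PyG `Data` snapshot per frame, each carrying the file-level label as its target.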
Thanks in advance and congratulations on the library!