Hi,
I have video data that is a synced video from 4 different views (a quad split-screen). I want to run pose detection on one view and, at the same time, save the frames of the individual views separately.
So, I defined a graph in Scanner that crops the four quadrants of each frame and applies pose detection to one of the quadrants. Below is an excerpt from the code.
The problem is that if I use, for example, output_table.column("crop3").save_mp4(), I get a video of the cropped quadrant 3. However, if I use output_table.column("crop3").load() or output_table.load(["crop3"]) as in the code below to iterate over the frames and save them individually, I always get the frame that has the pose drawn on it, i.e. the sampled_frames stream in the code below.
Am I doing something wrong with the way I am defining the graph, or is it a bug?