Hello, thanks for sharing your work! Do you convert the video into an image sequence and then feed those images to the model? After you get the inferred images, do you convert them back into a video?
I don't explicitly convert the video into an image sequence; rather, I use OpenCV's read() to read the video frame by frame and run inference on each frame iteratively.
The model outputs data points for one frame at a time; I plot the frame and the contours on a matplotlib canvas and render the canvas to an image file. After all frames are processed, I use ffmpeg to assemble these image files into a video.
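Roughly, the loop looks like this. This is only a minimal sketch: run_inference stands in for the actual model call, and the file names, output directory, frame-rate fallback, and ffmpeg flags are placeholders rather than the real infer_simply.py code.

```python
import os
import subprocess

import cv2
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt


def run_inference(frame):
    """Placeholder for the actual model call in infer_simply.py.

    Assumed to return a list of (N, 2) arrays of contour points.
    """
    return []


os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("input.mp4")       # placeholder path
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if metadata is missing

frame_idx = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break  # end of video
    contours = run_inference(frame)
    fig, ax = plt.subplots()
    ax.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # OpenCV frames are BGR
    for c in contours:
        ax.plot(c[:, 0], c[:, 1])  # overlay the inferred contours
    ax.axis("off")
    fig.savefig(f"frames/{frame_idx:06d}.png", bbox_inches="tight", pad_inches=0)
    plt.close(fig)  # release the figure so memory doesn't grow per frame
    frame_idx += 1
cap.release()

# Stitch the rendered frames back into a video with ffmpeg.
# The scale filter crops to even dimensions, which yuv420p requires.
subprocess.run([
    "ffmpeg", "-y", "-framerate", str(fps),
    "-i", "frames/%06d.png",
    "-vf", "scale=trunc(iw/2)*2:trunc(ih/2)*2",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "output.mp4",
], check=True)
```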
Thanks! I ran into some bugs following your instructions. Do you put your input video file in the same directory as your infer_simply.py?
Yes, and you can put your video anywhere you want; just pass its path to cv2.VideoCapture().
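For example (the path here is made up for illustration):

```python
import cv2

# Any readable path works; it does not have to sit next to the script.
cap = cv2.VideoCapture("/data/videos/my_clip.mp4")
if not cap.isOpened():
    raise IOError("Could not open the video file")
```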
There are a number of issues with the script.
I apologize for the clumsiness of the script, but I intend to address these issues in the future.
Edit: I have updated the script. It shouldn't generate any more errors.