Ema81xsd closed this issue 7 months ago
I have no idea what happened. Is there any error reported by the program?
None. If I had to guess, the program is not able to read the video, which is strange since I am testing with the demo video, I have all the dependencies installed, and the paths are correct (it happens with any video, in any format). Maybe the audio track of the video is causing the issue? Could that be? I am not sure. That is why I asked whether we can generate the smoothed file with other code, such as Flame. What do you suggest?
If nothing is generated, I think FFMPEG may be the problem: torchvision.io.VideoReader may be unable to read the video file. You can check the "build_video" function in lightning_track/engines/core_engine.py, line 43. As for other code, I don't think there is a ready-to-use way to generate the needed file.
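To narrow this down, a quick standalone check of whether torchvision's video backend can decode the file may help. This is a minimal sketch, not part of the repo: `check_video_readable` is a hypothetical helper that mimics what the tracker's video loading relies on (constructing a `torchvision.io.VideoReader` and pulling the first frame), and reports the failure instead of silently skipping.

```python
def check_video_readable(path):
    """Return (ok, message): can torchvision's video backend decode `path`?

    Hypothetical diagnostic helper, assuming the pipeline reads frames
    via torchvision.io.VideoReader (as in core_engine.py's build_video).
    """
    try:
        # Fails here if torchvision is missing or built without video support.
        from torchvision.io import VideoReader
    except ImportError as exc:
        return False, f"torchvision video backend unavailable: {exc}"
    try:
        reader = VideoReader(path, "video")
        # Decoding one frame is enough to confirm the FFMPEG backend works.
        frame = next(iter(reader))
        return True, f"decoded first frame with shape {tuple(frame['data'].shape)}"
    except Exception as exc:
        return False, f"failed to decode {path}: {exc}"
```

Running `check_video_readable("demos/demo.mp4")` (substitute your actual path) and printing the message should tell you whether the silent skip is a decoding problem or something downstream.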
OK, I sorted it out. Is there a length limit on the final render? Also, if we need to, I assume we can get the results without the watermark?
Great, congratulations! 🎉 I don't think there is a hard limit on the length of the video, but a video that is too long may cause errors such as memory overflow during tracking and inference (perhaps beyond ~10k frames?). For academic use, it is OK to remove the watermark for testing and the like.
I'm having an issue using Lightning_track. For some reason the steps are skipped (although I followed the instructions, downloaded all the models, and am testing with the original demo video). The process starts with MediaPipe but it is skipped, and the same happens for the lighting step. The folders are created but no files are written. Any suggestions? Is there another repo, for example Flame, that we can use to generate the smoothed file?