This is actually something I use all the time in the Python version; here is how it looks there:
mgmodule.MgObject('dance.avi', starttime=5, endtime=15)
which will only use the region from the 5th to the 15th second of the source video file's timeline for all the processing that follows.
Ideally it would be nice if the playbar could reflect the selection, but I think it is also important to have two number boxes available for this.
I guess it is quite typical for recording sessions with humans that the first and last few seconds of the recording are (scientifically) "garbage", so being able to get rid of these parts easily is a must-have feature.
In the Python version this is called "trimming", and the two values are "starttime" and "endtime", measured in seconds. I recommend we stick to this nomenclature in Max as well.
Yes, good idea. For non-realtime trimming, the framedump message offers arguments that determine startframe and endframe (I think). Worth looking into.
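Since framedump seems to work in frames while the Python version takes seconds, the conversion between the two is the main thing to get right. Here is a minimal sketch of that mapping; `trim_to_frames` is a hypothetical helper (not part of either toolbox), and it assumes a constant frame rate and that frame indexing starts at 0:

```python
def trim_to_frames(starttime, endtime, fps, total_frames):
    """Convert a trim region given in seconds (starttime, endtime)
    to (startframe, endframe) indices, clamped to the clip length.

    Hypothetical helper: assumes constant fps and 0-based frames.
    """
    startframe = max(0, int(round(starttime * fps)))
    endframe = min(total_frames, int(round(endtime * fps)))
    if startframe >= endframe:
        raise ValueError("starttime must fall before endtime within the clip")
    return startframe, endframe

# The example from the comment above: trim dance.avi from 5 s to 15 s.
# At a (assumed) frame rate of 25 fps and 1000 frames total:
print(trim_to_frames(5, 15, fps=25, total_frames=1000))  # (125, 375)
```

An endtime past the end of the clip simply clamps to the last frame, which matches how I would expect trimming to behave rather than raising an error.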