PeterL1n / RobustVideoMatting

Robust Video Matting in PyTorch, TensorFlow, TensorFlow.js, ONNX, CoreML!
https://peterl1n.github.io/RobustVideoMatting/
GNU General Public License v3.0
8.53k stars 1.13k forks

I don't understand how the code uses temporal information — r1, r2, r3, r4 in the code never seem to be reused #208

Closed josnname closed 1 year ago

josnname commented 1 year ago

r1, r2, r3, r4 are never reused, so they always stay as all-zero tensors.

PeterL1n commented 1 year ago

During training, the input is a five-dimensional tensor [B, T, C, H, W]. Multiple frames are given together along the T dimension, and there is an internal for loop over them, so r1, r2, r3, r4 are never used during training. They are used at inference time, when your video sequence is much longer and you cannot fit all frames on the GPU: you feed T frames at a time as a batch and cycle the rX tensors between batches.
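A minimal sketch of the cycling described above (this is not the actual RVM code — `process_chunk` is a hypothetical stand-in for the model's forward pass, and the integer "states" merely illustrate how r1..r4 are initialized to zeros and carried from one chunk to the next):

```python
# Sketch of chunked inference with recurrent states cycled across chunks.
# In RVM itself the states are tensors and the model returns them alongside
# the foreground/alpha outputs; here plain integers stand in for them.

def process_chunk(frames, r1, r2, r3, r4):
    """Hypothetical stand-in for the model's forward pass: consumes T frames
    plus the four recurrent states and returns outputs plus updated states."""
    n = len(frames)
    outputs = list(frames)                 # placeholder for the real outputs
    # Hypothetical update rule: each state counts the frames seen so far.
    return outputs, r1 + n, r2 + n, r3 + n, r4 + n

def infer(video, chunk_size):
    # States start as zeros: the model treats zero states as "no history".
    r1 = r2 = r3 = r4 = 0
    results = []
    for i in range(0, len(video), chunk_size):
        chunk = video[i:i + chunk_size]
        # Feed the previous states in, and keep the updated ones for the
        # next chunk -- this is the "cycling" of the rX tensors.
        out, r1, r2, r3, r4 = process_chunk(chunk, r1, r2, r3, r4)
        results.extend(out)
    return results, (r1, r2, r3, r4)

results, states = infer(list(range(10)), chunk_size=4)
print(states)   # every state has accumulated history from all 10 frames
```

The key point is that the states returned by one chunk are passed as inputs to the next, so temporal context flows across chunk boundaries even though each forward call only sees T frames.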

josnname commented 1 year ago

Got it now — thanks for the explanation.