lmb-freiburg / flownet2-docker

Dockerfile and runscripts for FlowNet 2.0 (estimation of optical flow)
https://lmb.informatik.uni-freiburg.de/Publications/2017/IMKDB17/
GNU General Public License v3.0

Different resolution of input images #3

Closed: meegoStar closed this 7 years ago

meegoStar commented 7 years ago

Hello, first of all, thanks a lot for providing the Docker version! It saves me a lot of time setting up the environment.

My question: it seems that all image pairs listed in flow_first_images.txt and flow_second_images.txt must have the same resolution; otherwise the network crashes while running run-network.sh.

For example, suppose I have flow_first_images.txt like this:

A1.jpg
A2.jpg
A3.jpg
B1.jpg
B2.jpg
B3.jpg

with flow_second_images.txt being:

A2.jpg
A3.jpg
A4.jpg
B2.jpg
B3.jpg
B4.jpg

where all A*.jpg are frames saved from video A at resolution 406×720, and all B*.jpg are frames saved from video B at resolution 960×720. After starting run-network.sh, the network crashes when it switches from the A images to the B images.

I guess this is because the network is initialized with the resolution of the first group of images (part A in the example above), and once initialized it cannot handle a different resolution.

So I wonder: is there a faster way than creating separate sets of flow_first_images.txt, flow_second_images.txt, and flow_output.txt files, one for each group of images sharing the same resolution?
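For anyone hitting the same issue, a minimal sketch of that grouping step might look like this (assuming Pillow is installed and the pair lists are plain text files with one image path per line; the output file names and the .flo naming convention are illustrative, not anything prescribed by this repo):

```python
from collections import defaultdict
from PIL import Image

# Read the existing pair lists (one image path per line).
with open("flow_first_images.txt") as f:
    first = [line.strip() for line in f if line.strip()]
with open("flow_second_images.txt") as f:
    second = [line.strip() for line in f if line.strip()]

# Group pairs by the resolution of the first frame.
groups = defaultdict(list)
for a, b in zip(first, second):
    with Image.open(a) as img:
        groups[img.size].append((a, b))  # img.size is (width, height)

# Write one set of list files per resolution group;
# run-network.sh can then be invoked once per group.
for (w, h), pairs in groups.items():
    with open(f"flow_first_images_{w}x{h}.txt", "w") as fa, \
         open(f"flow_second_images_{w}x{h}.txt", "w") as fb, \
         open(f"flow_outputs_{w}x{h}.txt", "w") as fo:
        for a, b in pairs:
            fa.write(a + "\n")
            fb.write(b + "\n")
            fo.write(a.rsplit(".", 1)[0] + ".flo\n")
```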

nikolausmayer commented 7 years ago

You're right: once initialized, the network is fixed to one exact resolution. We need image widths and heights that are multiples of 64, so we have to set up upsampling and downsampling layers before/after the core network. The flow output even needs special treatment.
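As a quick illustration of that multiple-of-64 constraint (a sketch, not the actual resampling code in this repo):

```python
import math

def network_dims(width, height, divisor=64):
    """Smallest dimensions >= (width, height) divisible by `divisor`."""
    return (math.ceil(width / divisor) * divisor,
            math.ceil(height / divisor) * divisor)

# The 406x720 frames from the example would be upsampled to 448x768
# before the core network, and the resulting flow field resampled back
# afterwards (with its vectors rescaled to match the original size).
print(network_dims(406, 720))  # (448, 768)
print(network_dims(960, 720))  # (960, 768)
```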

I can think of 4 different workflows here:

meegoStar commented 7 years ago

Great! Thanks for your reply. I'll try the 2nd method first; the 4th would be too much work for me right now :D Big thanks again!