Closed jeffbaena closed 5 years ago
You can modify the __main__ part to load and process your sequence of frames. That way, the model will not have to be loaded over and over again, since you only have to run the script once.
https://github.com/sniklaus/pytorch-pwc/blob/da45293e1e0c125858d56295a41124dea36bc471/run.py#L309
Pro tip, you could use inter-process communication (using ZMQ, for example) to allow your program that requires the estimated optical flow to communicate with the code provided in this repository. This is the pattern that I am using throughout my work and I am fairly happy with it.
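That client/server pattern can be sketched with Python's standard-library multiprocessing.connection instead of ZMQ (same idea, no extra dependency). Note that the port, the authkey, and the stand-in model function below are all made up for illustration — a real server would load PWC-Net once and call run.estimate:

```python
import threading
import time
from multiprocessing.connection import Listener, Client

ADDRESS = ('localhost', 6123)  # hypothetical port for the flow service

def serve_flow():
    # Server: load the expensive model ONCE, then answer frame-pair requests forever.
    model = lambda one, two: [one, two]  # stand-in for run.estimate(tenOne, tenTwo)
    with Listener(ADDRESS, authkey=b'flow') as listener:
        while True:
            with listener.accept() as conn:
                try:
                    frame_one, frame_two = conn.recv()
                    conn.send(model(frame_one, frame_two))
                except EOFError:
                    pass  # client disconnected without sending a pair

def request_flow(frame_one, frame_two):
    # Client: your main program sends a frame pair and receives the result back.
    for _ in range(50):  # retry until the server thread has bound the socket
        try:
            conn = Client(ADDRESS, authkey=b'flow')
            break
        except ConnectionRefusedError:
            time.sleep(0.1)
    with conn:
        conn.send((frame_one, frame_two))
        return conn.recv()

threading.Thread(target=serve_flow, daemon=True).start()
print(request_flow('F(n)', 'F(n+1)'))  # → ['F(n)', 'F(n+1)']
```

The point of the pattern is that the model weights live in one long-running process while any number of consumer programs just exchange frames and flow fields over the connection.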
Thanks so much!
Hi! I have changed the main function, but it seems it does not process the images in order. Do you have any ideas? Thank you. The images are named 1.png, 2.png, 3.png, and so on. Here is the code:
if __name__ == '__main__':
    files = glob.glob('./images/*.png')
    i = 0
    while i < (len(files) - 1):
        tenOne = torch.FloatTensor(numpy.ascontiguousarray(numpy.array(PIL.Image.open(files[i]))[:, :, ::-1].transpose(2, 0, 1).astype(numpy.float32) * (1.0 / 255.0)))
        tenTwo = torch.FloatTensor(numpy.ascontiguousarray(numpy.array(PIL.Image.open(files[i + 1]))[:, :, ::-1].transpose(2, 0, 1).astype(numpy.float32) * (1.0 / 255.0)))
        tenOutput = estimate(tenOne, tenTwo)
        # arguments_strOut = './out.flo'
        objOutput = open("out\\flow\\" + str(i + 1) + ".flo", 'wb')
        numpy.array([80, 73, 69, 72], numpy.uint8).tofile(objOutput)
        numpy.array([tenOutput.shape[2], tenOutput.shape[1]], numpy.int32).tofile(objOutput)
        numpy.array(tenOutput.numpy().transpose(1, 2, 0), numpy.float32).tofile(objOutput)
        objOutput.close()
        i = i + 1
I am just skimming through the code, but could it be that your "files" array is the problem? Also, are you sure that files[i] and files[i+1] actually contain F(n) and F(n+1)?
You're right! The files are not in order! My fault. 😂 Thanks anyway!
Assuming run is the run script provided by sniklaus, the following script works for me (hopefully I did not forget anything). I have used this for a long time.
import run  # you need to adjust the paths to import

def inference(frL, frR):
    tensorFirst = torch.FloatTensor(np.array(frL)[:, :, ::-1].transpose(2, 0, 1).astype(np.float32) * (1.0 / 255.0))
    tensorSecond = torch.FloatTensor(np.array(frR)[:, :, ::-1].transpose(2, 0, 1).astype(np.float32) * (1.0 / 255.0))
    flow = run.estimate(tensorFirst, tensorSecond)
    return np.array(flow.numpy().transpose(1, 2, 0), np.float32)
The images should be named 0001.png, 0002.png, and so on, since glob.glob('./images/*.png') does not return the files in the order 1, 2, 3, 4, ...; alternatively, you can use str(i).zfill(4) to generate the zero-padded names.
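As a quick illustration of the zero-padding fix (the frame numbers here are made up):

```python
# Build zero-padded file names with str.zfill, as suggested above.
names = [str(i).zfill(4) + '.png' for i in (1, 2, 10)]
print(names)          # ['0001.png', '0002.png', '0010.png']
# Zero-padded names stay in frame order even under a plain lexicographic sort:
print(sorted(names))  # ['0001.png', '0002.png', '0010.png']
```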
glob.glob(...) does not preserve the order, so you probably want sorted(glob.glob(...)).
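One caveat worth knowing: a plain sorted() is lexicographic, so '10.png' sorts before '2.png'; for names without zero-padding you need a numeric sort key. A small sketch with made-up file names:

```python
import os

files = ['./images/10.png', './images/2.png', './images/1.png']

# Lexicographic order puts '10.png' before '2.png':
print(sorted(files))  # ['./images/1.png', './images/10.png', './images/2.png']

# Sorting on the integer value of the file stem restores frame order:
ordered = sorted(files, key=lambda f: int(os.path.splitext(os.path.basename(f))[0]))
print(ordered)  # ['./images/1.png', './images/2.png', './images/10.png']
```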
Hi @sniklaus, thanks for your support. I have one more (probably newbie) question: how can I run inference on a batch of frames? The model expects two file paths. Can I give it a list like in Caffe? Currently the model must be loaded each time and it takes around 30 s per frame.
Thanks, Stefano