In the evaluation on Cityscapes, we only use the first 4 frames of each video for prediction. This is to stay compatible with the prediction results of Vid2Vid: the authors of Vid2Vid sent us their results, which are the predictions of the first 4 frames of each video.
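For clarity, the protocol amounts to something like the following minimal sketch (placeholder names, not the actual evaluation script): metrics are computed only on the first 4 predicted frames of each Cityscapes sequence, to match the Vid2Vid results we received.

```python
# Minimal sketch of the protocol described above (placeholder names, not the
# actual evaluation code): only the first 4 predicted frames of each video
# are compared against ground truth, to match the Vid2Vid results.
NUM_EVAL_FRAMES = 4

def evaluate(videos, predict_fn, metric_fn):
    """videos: list of (input_frames, ground_truth_frames) tuples."""
    scores = []
    for input_frames, ground_truth_frames in videos:
        predictions = predict_fn(input_frames)
        for t in range(NUM_EVAL_FRAMES):          # first 4 predicted frames only
            scores.append(metric_fn(predictions[t], ground_truth_frames[t]))
    return sum(scores) / len(scores)
```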
I plan to add some explanations and clean up the code again. Since I'm busy with other ongoing projects, it won't be quick.
Ok, thanks. If you could just put an example of the directory structure, it would be super appreciated.
I added a folder tree.
Hi,
I'm trying to re-train the background prediction model, but I'm having some trouble with the static maps. Could you please give me some information about the structure of the folders, or about how I can generate them? I thought I had to use the output of the moving object detector, but I have some problems with the folder structure. Moreover, it seems that the moving object detector predicts just the first 4 frames of each Cityscapes video, as can be seen from this snippet (sketched below):
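The following is only a simplified sketch of that loop, with placeholder names (`dataroot`, `NUM_PRED_FRAMES`, `run_moving_object_detector`), not the exact code from the repo:

```python
import os

# Placeholder names throughout: this only illustrates the structure of the loop
# I am referring to, not the repo's actual code.
dataroot = "datasets/cityscapes/leftImg8bit_sequence/val"  # hypothetical path
NUM_PRED_FRAMES = 4

def run_moving_object_detector(frame_path):
    print("would run detector on", frame_path)  # stand-in for the real call

for video_dir in sorted(os.listdir(dataroot)):
    frames = sorted(os.listdir(os.path.join(dataroot, video_dir)))
    # the inner loop only covers the first 4 frames of each video
    for i in range(NUM_PRED_FRAMES):
        run_moving_object_detector(os.path.join(dataroot, video_dir, frames[i]))
```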
After digging a bit, I think one possible solution is to change the inner for loop to something like:
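Using the same placeholder names as above, the idea would be to iterate over every frame of the video instead of stopping at the first 4:

```python
# Same placeholder names as above; the only change is that the inner loop now
# runs over every frame of the video instead of only the first 4.
for video_dir in sorted(os.listdir(dataroot)):
    frames = sorted(os.listdir(os.path.join(dataroot, video_dir)))
    for i in range(len(frames)):                  # all frames, not only the first 4
        run_moving_object_detector(os.path.join(dataroot, video_dir, frames[i]))
```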
Does that make sense?