Closed miss-hikari closed 4 years ago
Hi, thank you for your interest in my work!
Your AUC is different because the reported AUC in the paper was estimated on patches of interest (described in section 3.5), whereas you estimated it on whole frames.
Please notice that some other factors may also affect the AUC in your reproduction:
Finally, please note that the published code was not well maintained during my work (sorry about that!), so the functions that produced the AUCs in the paper may differ slightly.
Thanks for the prompt reply! I will estimate the AUC on patches of interest later. The code you provided is really helpful!
Could you please share your requirements.txt or environment settings: Python version, tensorflow-gpu version, etc.?
Since the software and libraries on that computer were updated months ago, I cannot check the exact settings. As I recall, this work was performed with Python 3.5 and tensorflow-gpu 1.x.
Thanks for the prompt reply! One last question about the CUDA version: does this code run with CUDA 8, 9, or 10?
Hi, it should be CUDA 10.
Hi, thanks for your nice work. I have obtained the .flo files from the Caffe version of FlowNet, and I wrote this code to convert them to numpy .npz files. I am not sure whether what I have done is correct for the optical-flow input of your framework:

```python
import os
import numpy as np

dir_name = 'UCSDped2/Test_optical_flow_videos'
for each_dir in sorted(os.listdir(dir_name)):
    each_dir_path = os.path.join(dir_name, each_dir)
    each_video = []
    for each_file in sorted(os.listdir(each_dir_path)):  # sort to keep frame order
        each_flo_file_path = os.path.join(each_dir_path, each_file)
        with open(each_flo_file_path, 'rb') as f:        # .flo files are binary
            magic = np.fromfile(f, np.float32, count=1)  # header starts with 202021.25
            assert magic == 202021.25, 'invalid .flo file'
            w, h = np.fromfile(f, np.int32, count=2)
            data = np.fromfile(f, np.float32, count=2 * w * h)
            data2D = data.reshape(h, w, 2)               # reshape errors out on a bad count
            each_video.append(data2D)
    np.savez("{0}.npz".format(each_dir), data=np.array(each_video))
```
Hi, you may write a short script to load the npz and visualize a few slices to check whether your code above runs correctly. In my work, I did not export and then import the .flo files; instead, I directly modified the main file of FlowNet2 so that it ran on sequential pairs of frames and saved directly to an npz file. (I cannot come to the campus to check my code in detail, but simply visualizing some slices from your npz file should tell you whether it is correct.)
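Acting on that suggestion, a small sanity check on the saved npz can catch the usual failure modes (a missed .flo header, a byte-order bug) even before any visualization. A sketch, with a hypothetical output file name from the conversion script above:

```python
import numpy as np

def sanity_check(flow):
    """flow: stacked optical flow, shape (num_frames, H, W, 2).
    A missed .flo header or a byte-order bug shows up as huge or non-finite values."""
    assert flow.ndim == 4 and flow.shape[-1] == 2, "unexpected shape %s" % (flow.shape,)
    mag = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)
    assert np.isfinite(mag).all(), "NaN/Inf values - check the .flo reader"
    return float(mag.max())

# usage with a hypothetical file produced by the conversion script:
# flow = np.load("Test001.npz")["data"]
# print(sanity_check(flow))  # displacements of a few pixels are plausible here
```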
Hi miss-hikari,
I have tried to generate the flow for the UCSDped2 dataset using the same FlowNet2 repository; however, I did not manage to get appropriate flow out of it. Do you have example code of your implementation?
(Sorry for asking it under the issues section of this repository, but I couldn't contact miss-hikari directly...)
Hi olaurendin,
I have created a gist for you. Hope it helps. https://gist.github.com/miss-hikari/530c131b867f82abd2c0f1e62cb94f3c
@miss-hikari Thank you for your efficient gist!!! @olaurendin I also recently slightly adapted the FlowNet2-pytorch for another related work, you can take a look if you wish.
Thank you @nguyetn89, I'll have a look at your repo :)
@miss-hikari Excuse me for bumping, but did you happen to evaluate using the max-patch approach? What results did you get? I re-implemented the project in PyTorch, and the AUC I obtained using the score mentioned in #5 is far from the paper's.
Hi, thanks for making your work publicly available!
Recently I have been building an idea upon your nice work, but I failed to reproduce the frame-level performance (AUC) on the UCSDped2 dataset, even though the generated images and flows look fine.
The training details are listed here:
The scores are as follows:
The highest score is 0.941, still far from the reported 0.962.
Am I missing something? Could you spot anything wrong in my procedure or settings? Another question: which of the AUC metrics above is the one used in the paper?
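For reference, the frame-level AUC itself is just the area under the ROC curve of per-frame anomaly scores against the ground-truth frame labels. A minimal rank-based version (equivalent to sklearn's roc_auc_score when there are no tied scores), to rule out the metric implementation as the source of the gap:

```python
import numpy as np

def frame_level_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic (no tied scores assumed).
    scores: higher = more anomalous; labels: 1 = anomalous frame, 0 = normal."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # rank 1 = lowest score
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

If this agrees with your existing metric code, the difference is more likely in the scoring (whole-frame vs. patch-of-interest) than in the AUC computation.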