nguyetn89 / Anomaly_detection_ICCV2019

Anomaly Detection in Video Sequence with Appearance-Motion Correspondence
BSD 2-Clause "Simplified" License

Help needed for reproducing AUC #1

Closed · miss-hikari closed this issue 4 years ago

miss-hikari commented 4 years ago

Hi, thanks for making your work publicly available!

Recently I have been building an idea on top of your nice work, but I failed to reproduce the frame-level performance (AUC) on the UCSDped2 dataset, even though the generated images and flows look fine.

The training details are listed here:

The scores are as follows:

                  PSNR_X  PSNR_inv  PSNR   SSIM   MSE    maxSE  std    MSE_1c  maxSE_1c  std_1c
appearance  AUCs: 0.844   0.671     0.780  0.785  0.787  0.491  0.652  0.787   0.491     0.652
optic flow  AUCs: 0.765   0.873     0.915  0.529  0.912  0.940  0.940  0.912   0.939     0.941
combination AUCs: 0.810   0.820     0.905  0.473  0.901  0.872  0.928  0.901   0.906     0.929
direction   AUCs: 0.696   0.690     0.694  0.587  0.690  0.552  0.741  0.500   0.500     0.500
magnitude   AUCs: 0.783   0.856     0.909  0.517  0.907  0.942  0.937  0.500   0.500     0.500

The highest score is 0.941, still well below the 0.962 reported in the paper.
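
(For reference, a minimal sketch of how a frame-level AUC like those above can be computed, assuming an array of per-frame anomaly scores and binary frame labels; it uses roc_auc_score from scikit-learn, and the helper names are illustrative rather than taken from this repository.)

```python
# Illustrative sketch (not from the repository): frame-level AUC from
# per-frame reconstruction quality, assuming higher error = more anomalous.
import numpy as np
from sklearn.metrics import roc_auc_score

def psnr(frame_true, frame_pred, max_val=1.0):
    """PSNR between a ground-truth frame and its reconstruction."""
    mse = np.mean((frame_true - frame_pred) ** 2)
    return 10.0 * np.log10(max_val ** 2 / (mse + 1e-12))

def frame_level_auc(scores, labels):
    """scores: per-frame anomaly scores; labels: 1 = anomalous frame, 0 = normal."""
    return roc_auc_score(labels, scores)

# Example: PSNR is high for well-reconstructed (normal) frames, so negate it
# to turn it into an anomaly score before computing the AUC.
# auc = frame_level_auc(-np.array(psnr_per_frame), np.array(frame_labels))
```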

Am I missing something? Can you spot anything wrong in my procedure or settings? Another question: which of the AUC metrics above is the one used in the paper?

nguyetn89 commented 4 years ago

Hi, thank you for your interest in my work!

Your AUC is different because the AUC reported in the paper was estimated on patches of interest (described in Section 3.5), whereas you estimated it on whole frames.
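
(A minimal sketch of patch-based scoring for comparison, assuming the frame score is the largest mean error over sliding windows of a per-pixel error map; the window size and stride are placeholders, not the exact settings of Section 3.5.)

```python
# Illustrative sketch: score a frame by its worst (highest-error) patch
# rather than by the average over the whole frame. The window size and
# stride are placeholders, not the exact settings from the paper.
import numpy as np

def max_patch_score(error_map, patch_size=16, stride=8):
    """error_map: 2D array of per-pixel reconstruction errors for one frame."""
    h, w = error_map.shape
    best = 0.0
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patch_mean = error_map[y:y + patch_size, x:x + patch_size].mean()
            best = max(best, patch_mean)
    return best
```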

Please note that some other factors may also affect the AUC in your reproduction:

Finally, please note that the published code was not well organized during this work (sorry about that!), so the functions that produced the AUCs in the paper might be slightly different.

miss-hikari commented 4 years ago

Thanks for the prompt reply! I will estimate the AUC on patches of interest later. The code you provided is really helpful!

HELLO-GAN-WORK commented 4 years ago

Could you please share your requirements.txt or environment settings (Python version, tensorflow-gpu version, etc.)?

nguyetn89 commented 4 years ago

Could you please share your requirements.txt or environment settings (Python version, tensorflow-gpu version, etc.)?

Since the software and libraries on that computer were updated months ago, I cannot check the exact settings. As far as I remember, this work was done with Python 3.5 and tensorflow-gpu 1.x.
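
(A hypothetical requirements.txt consistent with the settings mentioned here and the CUDA 10 answer below; the pinned version and the extra packages are guesses, not taken from the author's machine.)

```
# Hypothetical requirements.txt, not the author's actual environment.
# tensorflow-gpu 1.13-1.15 are the 1.x releases built against CUDA 10.0,
# and 1.13.1 still supports Python 3.5; other packages are placeholders.
tensorflow-gpu==1.13.1
numpy
scipy
scikit-learn
opencv-python
matplotlib
```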

HELLO-GAN-WORK commented 4 years ago

Thanks for the prompt reply! One last question about the CUDA version: should this code run with CUDA 8, 9, or 10?

nguyetn89 commented 4 years ago

Thanks for the prompt reply! One last question about the CUDA version: should this code run with CUDA 8, 9, or 10?

Hi, it should be CUDA 10.

CHANGHAI-AILab commented 4 years ago

Hi, thanks for your nice work. I have obtained .flo files from the Caffe version of FlowNet, and I wrote the code below to convert the .flo files to numpy .npz files. Is what I have done correct for the optical flow input of your framework?

```python
import os
import numpy as np

dir_name = 'UCSDped2/Test_optical_flow_videos'
for each_dir in sorted(os.listdir(dir_name)):             # keep videos in order
    each_dir_path = os.path.join(dir_name, each_dir)
    each_video = []
    for each_file in sorted(os.listdir(each_dir_path)):   # keep frames in order
        flo_file_path = os.path.join(each_dir_path, each_file)
        with open(flo_file_path, 'rb') as f:              # .flo files are binary
            np.fromfile(f, np.float32, count=1)           # skip the 202021.25 magic tag
            w, h = np.fromfile(f, np.int32, count=2)
            data = np.fromfile(f, np.float32, count=2 * w * h)
            each_video.append(data.reshape(h, w, 2))
    np.savez("{0}.npz".format(each_dir), data=np.array(each_video))
```

nguyetn89 commented 4 years ago

Hi, thanks for your nice work. I have obtained .flo files from the Caffe version of FlowNet, and I wrote the code below to convert the .flo files to numpy .npz files. Is what I have done correct for the optical flow input of your framework?

Hi, you may write a short script to load the npz file and visualize a few slices to check whether your code above runs correctly. In my work, I did not export and then import the .flo files; instead, I directly modified the main file of FlowNet2 so that it ran on sequential pairs of frames and saved the result directly to an npz file. (I cannot come to the campus to check my code in detail, but you can simply visualize some slices from your npz file to be sure.)
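
(A minimal sketch of such a check, assuming the npz layout produced by the conversion code above, i.e. an array of shape (num_frames, h, w, 2) stored under the key "data"; the file name is illustrative.)

```python
# Illustrative check: load a converted npz file and visualize one flow slice.
import numpy as np
import matplotlib.pyplot as plt

flows = np.load('Test001.npz')['data']      # assumed shape: (num_frames, h, w, 2)
u, v = flows[0, ..., 0], flows[0, ..., 1]   # horizontal and vertical components

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
axes[0].imshow(u, cmap='coolwarm'); axes[0].set_title('u (horizontal)')
axes[1].imshow(v, cmap='coolwarm'); axes[1].set_title('v (vertical)')
axes[2].imshow(np.sqrt(u ** 2 + v ** 2)); axes[2].set_title('magnitude')
plt.show()
```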

olaurendin commented 4 years ago

Hi miss-hikari,

I have tried to generate the flow for the UCSDped2 dataset using the same FlowNet2 repository; however, I did not manage to get an appropriate flow out of it. Do you have example code from your implementation?

(Sorry for asking it under the issues section of this repository, but I couldn't contact miss-hikari directly...)

miss-hikari commented 4 years ago

Hi olaurendin,

I have created a gist for you. Hope it helps. https://gist.github.com/miss-hikari/530c131b867f82abd2c0f1e62cb94f3c

nguyetn89 commented 4 years ago

@miss-hikari Thank you for your efficient gist!!! @olaurendin I also recently made a slight adaptation of FlowNet2-pytorch for another related work; you can take a look if you wish.

olaurendin commented 4 years ago

Thank you @nguyetn89, I'll have a look at your repo :)

kairikibear commented 4 years ago

@miss-hikari Excuse me for bumping, but did you happen to evaluate using the max-patch approach? What results did you get? I re-implemented the project in PyTorch, and the AUC I got using the score mentioned in #5 is far from the paper's.