donggong1 / memae-anomaly-detection

MemAE for anomaly detection. -- Gong, Dong, et al. "Memorizing Normality to Detect Anomaly: Memory-augmented Deep Autoencoder for Unsupervised Anomaly Detection". ICCV 2019.
https://donggong1.github.io/anomdec-memae.html
MIT License

Testing AUC on Ped2 Unmatched #13

Open Wolfybox opened 4 years ago

Wolfybox commented 4 years ago

I ran the testing script on the Ped2 dataset and got an AUC of only around 85.1%.

WYZhang999 commented 4 years ago

Can you share the Ped2 data preparation code or give me some guidance? I've been stuck on Ped2 data preparation for weeks.

WYZhang999 commented 4 years ago

Thank you very much.

Wolfybox commented 4 years ago

> Can you share the Ped2 data preparation code or give me some guidance? I've been stuck on Ped2 data preparation for weeks.

```python
import math
import os

import numpy as np
from tqdm import tqdm


def gen_frame_index(clip_len=16):
    vfolder = r'F:\dataset\UCSD\ped2\testing\frames'
    save_dir = r'F:\dataset\UCSD\ped2\testing\indices'
    for vname in tqdm(os.listdir(vfolder)):
        vdir = os.path.join(vfolder, vname)
        flist = sorted(os.listdir(vdir))
        fnum = len(flist)
        clip_num = math.ceil(fnum / clip_len)
        clip_num_len = len(str(clip_num))
        target_dir = os.path.join(save_dir, vname)
        if not os.path.exists(target_dir):
            os.makedirs(target_dir)
        for clip_i in range(clip_num):
            start_fi = clip_i * clip_len
            end_fi = (clip_i + 1) * clip_len if (clip_i + 1) * clip_len < fnum else fnum
            clip_list = np.array(flist[start_fi:end_fi])
            np.save(os.path.join(target_dir, f'{str(clip_i).zfill(clip_num_len)}.npy'), clip_list)
```

Well, this is how I generate the so-called indices that the author's code requires. However, to use these indices you will also have to modify a few lines in 'video_dataset.py'. The data preparation is actually not troublesome at all. The basic logic is simple: the frame indices (or, more specifically, the names of the images in each frame folder) are split into clips, and each clip's list is saved to its own index file.
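For reference, a minimal usage sketch of the function above; the layout shown in the comments is just how I would expect the output to look given the hard-coded paths (frame file names are dataset-specific):

```python
# run once to build the per-video clip index files
gen_frame_index(clip_len=16)

# expected layout afterwards, one folder per test video, e.g.:
#   indices/Test001/00.npy -> np.array of the 1st clip's 16 frame file names
#   indices/Test001/01.npy -> np.array of the 2nd clip's 16 frame file names
#   ...
```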

WYZhang999 commented 4 years ago

You are so nice! I still have a lot to learn. Have a nice day!

WYZhang999 commented 4 years ago

Excuse me, it's me again. Can you please share the training code? I want to learn from it, and I would really appreciate it.

Wolfybox commented 4 years ago

> Excuse me, it's me again. Can you please share the training code? I want to learn from it, and I would really appreciate it.

Welp, I haven't done the training part. :P

WYZhang999 commented 4 years ago

Okay, fine. Thank you again~

WYZhang999 commented 4 years ago

Excuse me, I want to use these indices, but I failed to modify the code in 'video_datasets.py'. Could I see your modified 'video_datasets.py'?

Wolfybox commented 4 years ago

> Excuse me, I want to use these indices, but I failed to modify the code in 'video_datasets.py'. Could I see your modified 'video_datasets.py'?

```python
import os

import cv2
import numpy as np
import torch
from torch.utils.data import Dataset


class VideoDatasetOneDir(Dataset):
    def __init__(self, idx_dir, frame_root, is_testing=False, use_cuda=False, transform=None):
        self.idx_dir = idx_dir
        self.frame_root = frame_root
        self.idx_name_list = sorted(os.listdir(self.idx_dir))
        self.use_cuda = use_cuda
        self.transform = transform
        self.is_testing = is_testing

    def __len__(self):
        return len(self.idx_name_list)

    def __getitem__(self, clip_idx):
        """Get a video clip of stacked frames indexed by clip_idx."""
        idx_name = self.idx_name_list[clip_idx]
        frame_idx = np.load(os.path.join(self.idx_dir, idx_name))
        v_dir = self.frame_root

        sample_frame = cv2.imread(os.path.join(v_dir, frame_idx[0]), cv2.IMREAD_GRAYSCALE)
        h, w = sample_frame.shape

        # each sample is a concatenation of the indexed frames
        clip = []
        for fname in frame_idx:
            cur_frame = cv2.imread(os.path.join(v_dir, fname), cv2.IMREAD_GRAYSCALE)
            cur_frame = cv2.resize(cur_frame, (w + 8, h), interpolation=cv2.INTER_CUBIC)
            clip.append(torch.from_numpy(cur_frame))
        # pad the last clip by repeating its final frame so every clip has 16 frames
        if len(clip) < 16:
            clip += [clip[-1]] * (16 - len(clip))
        clip = torch.stack(clip, dim=0)
        clip = clip.unsqueeze(dim=0).float()
        return clip_idx, clip
```
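As a usage sketch (the paths below are hypothetical placeholders), the modified dataset can be wrapped in a standard DataLoader, one test video at a time:

```python
from torch.utils.data import DataLoader

# hypothetical paths for a single test video; adjust to your own layout
idx_dir = r'F:\dataset\UCSD\ped2\testing\indices\Test001'
frame_root = r'F:\dataset\UCSD\ped2\testing\frames\Test001'

dataset = VideoDatasetOneDir(idx_dir, frame_root, is_testing=True)
loader = DataLoader(dataset, batch_size=1, shuffle=False)

for clip_idx, clip in loader:
    # clip has shape (batch, 1, 16, H, W) and can be fed to the MemAE model
    print(clip_idx.item(), clip.shape)
```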
WYZhang999 commented 4 years ago

Thank you very much!!! Have a nice day! I love HIT!

callbarian commented 4 years ago

Thanks for sharing your code. I have a little confusion: I generated the indices from the frames using the shared code, but what about the ground-truth (gt) files? I downloaded the dataset directly from UCSD. The gt is also given as frames, but in the dataset structure the author shows here, the gt frames seem to be merged into a single matrix file instead of clips.

I also do not understand the purpose of video_datasets.py.

Wolfybox commented 4 years ago

> Thanks for sharing your code. I have a little confusion: I generated the indices from the frames using the shared code, but what about the ground-truth (gt) files? I downloaded the dataset directly from UCSD. The gt is also given as frames, but in the dataset structure the author shows here, the gt frames seem to be merged into a single matrix file instead of clips.
>
> I also do not understand the purpose of video_datasets.py.

The gt file for Ped2 is named "ped2.mat"; it is array-like data containing 12 tuples, each giving the starting and ending frame of an anomalous event. The corresponding evaluation part lies in 'scrip_eval_video.py' and 'util/eval.py'. However, the format of the gt file doesn't affect 'script_testing.py', since they are two separate files. This is how I load the gt file for Ped2:

```python
import scipy.io as sio

gt_path = r'F:\dataset\UCSD\ped2\ped2.mat'
gt_list = []
gt_data = sio.loadmat(gt_path)['gt'][0]
for gt_tuple in gt_data:
    gt_tuple = gt_tuple.squeeze()
    start, end = gt_tuple[0], gt_tuple[1]
    gt_list.append((start, end))
```

To generate the ground-truth labels, I applied the following simple processing:

```python
# fnum_list holds the frame count of each test video; y_trues accumulates the labels
y_trues = []
for i in range(len(gt_list)):
    start, end = gt_list[i]
    fnum = fnum_list[i]
    y_true = [0] * start + [1] * (end - start) + [0] * (fnum - end)
    y_trues.extend(y_true)
```
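With those labels, the frame-level AUC can then be computed with scikit-learn; note that `y_scores` (the per-frame anomaly scores produced by the testing script, in the same order as `y_trues`) is an assumed name that does not appear in the snippets above:

```python
from sklearn.metrics import roc_auc_score

# y_trues comes from the loop above; y_scores is a hypothetical array of per-frame
# anomaly scores (e.g. normalized reconstruction errors) in the same frame order
auc = roc_auc_score(y_trues, y_scores)
print(f'frame-level AUC: {auc:.4f}')
```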

As for the 'video_dataset.py' code, I'm afraid I can't explain much further, since it would be a long story to describe my ideas for data loading.

WYZhang999 commented 4 years ago

Thanks a lot. It's very helpful.

callbarian commented 4 years ago

Thank you for the comment! While reading the paper, I found that there should be 'frame_number - 15' index files per video, since the 16-frame clips are generated with a sliding window: if there are 180 frames, there will be 165 clips. The center frame of each clip is evaluated against the ground truth, which is why the ground-truth labels for the first 8 and the last 7 frames are excluded. I attained an AUC of 86.63% with MemAE on Ped2 (Test005 was excluded since its frames were missing).

I have modified the code that you shared:

```python
import os

import numpy as np


def gen_frame_index(clip_len=16):
    print(os.getcwd())
    vfolder = os.getcwd() + '/dataset/UCSD_P2_256/testing'
    save_dir = os.getcwd() + '/dataset/UCSD_P2_256/testing_idx'
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)

    for vname in os.listdir(vfolder):
        vdir = os.path.join(vfolder, vname)
        if vname == ".DS_Store":
            continue
        flist = sorted(os.listdir(vdir))
        fnum = len(flist)
        fnum_len = len(str(fnum))
        target_dir = os.path.join(save_dir, vname)
        if not os.path.exists(target_dir):
            os.makedirs(target_dir)
        # sliding window: one 16-frame clip starting at every frame (fnum - 15 clips in total)
        for clip_i in range(fnum - 15):
            start_fi = clip_i
            end_fi = start_fi + 16
            clip_list = np.array(flist[start_fi:end_fi])
            save_name = f'{str(clip_i).zfill(fnum_len)}.npy'
            np.save(os.path.join(target_dir, save_name), clip_list)
```

Thank you for sharing the code. It was very helpful.

LiUzHiAn commented 4 years ago

@callbarian

Actually, I think the way you prepared the dataset is more consistent with the original paper (i.e., the 16-frame-long sliding-clip strategy).

Wolfybox commented 4 years ago

> Thank you for the comment! While reading the paper, I found that there should be 'frame_number - 15' index files per video, since the 16-frame clips are generated with a sliding window: if there are 180 frames, there will be 165 clips. The center frame of each clip is evaluated against the ground truth, which is why the ground-truth labels for the first 8 and the last 7 frames are excluded. I attained an AUC of 86.63% with MemAE on Ped2 (Test005 was excluded since its frames were missing).
>
> I have modified the code that you shared. [...] Thank you for sharing the code. It was very helpful.

I just noticed that the paper says "the normality of each frame is evaluated by the reconstruction error of the cuboid centering on it." So I guess the authors are referring to an overlapping sliding-window strategy.
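To make that concrete, here is a hedged sketch (not the author's code; `frame_scores_from_clips` is a hypothetical helper) of how overlapping-clip errors map to per-frame scores, and why the evaluation slices `gt_labels[8:-7]`:

```python
import numpy as np


def frame_scores_from_clips(clip_errors, fnum, clip_len=16):
    """Map sliding-clip reconstruction errors to per-frame anomaly scores.

    clip_errors: one score per clip, where clip i covers frames [i, i + clip_len).
    Each clip's score is assigned to its center frame i + clip_len // 2, so the
    first 8 and last 7 frames receive no score -- matching gt_labels[8:-7].
    """
    half = clip_len // 2
    scores = np.full(fnum, np.nan)
    for i, err in enumerate(clip_errors):
        scores[i + half] = err
    return scores[half:-(clip_len - half - 1)]  # the frames that actually have a score


# e.g. a 180-frame video yields 165 clips -> 165 scored frames,
# compared against gt_labels[8:-7] (also 165 labels)
```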

LiUzHiAn commented 4 years ago

@Wolfybox

Yep, the 'cuboid centering on it' might give the clues. BTW, have you guys finished the training process?

Wolfybox commented 4 years ago

> Yep, the 'cuboid centering on it' might give the clues. BTW, have you guys finished the training process?

I wrote a training script, yet it only got me an AUC of around 86% on Ped2. BTW, I noticed the author didn't use cosine similarity when computing the attention weights.
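For anyone re-implementing this, here is a rough sketch of the cosine-similarity memory addressing described in the paper (with the relaxed hard shrinkage for sparsity). This is my reading of the paper, not the released code, and `memory_addressing` / `shrink_thres` are names I made up:

```python
import torch
import torch.nn.functional as F


def memory_addressing(z, mem, shrink_thres=0.0025):
    """Cosine-similarity attention over memory items, as described in the paper.

    z:   encoded features, shape (N, C)
    mem: memory matrix, shape (M, C) with M memory items
    Returns attention weights of shape (N, M) and the retrieved features (N, C).
    """
    # cosine similarity between each query and each memory item
    att = F.linear(F.normalize(z, dim=1), F.normalize(mem, dim=1))  # (N, M)
    att = F.softmax(att, dim=1)
    # relaxed hard shrinkage to promote sparse addressing, then re-normalize
    if shrink_thres > 0:
        att = F.relu(att - shrink_thres) * att / (torch.abs(att - shrink_thres) + 1e-12)
        att = F.normalize(att, p=1, dim=1)
    z_hat = att @ mem  # (N, C), features reconstructed from memory
    return att, z_hat
```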

sdjsngs commented 4 years ago

@Wolfybox Can you share some training details, like the initial learning rate, optimizer, and total epochs? I am re-implementing this paper this week. Since the author writes `gt_labels[8:-7]` in the code, I suppose he ignores the border frames of each video when evaluating AUC; did you do that too?

lyn1874 commented 4 years ago

Thanks for the fruitful discussion. I got an AUC of 94% on UCSDped2 using the pretrained model checkpoint. The only difference from @Wolfybox's dataloader is that I simply used the torchvision transformation pipeline:

```python
frame_trans = transforms.Compose([
    transforms.Resize([height, width]),
    transforms.Grayscale(num_output_channels=1),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])
```

I have also noticed gt_labels[8:-7] in the evaluation file, and I think the reason is that they assign the averaged reconstruction error of a video clip to its center frame: i.e., if the clip starts at frame_001.jpg and ends at frame_016.jpg, the averaged reconstruction error is taken as the error of frame_008.jpg.

lyn1874 commented 3 years ago

> can you share training code please

https://github.com/lyn1874/memAE

donggong1 commented 3 years ago

Hi guys, thanks for the discussion and clarification. Specifically, thanks @lyn1874 for the wonderful repo and reproduction. I uploaded an example for dataset preparation and training. Hope that can be helpful.

gdwang08 commented 3 years ago

> Thanks for the fruitful discussion. I got an AUC of 94% on UCSDped2 using the pretrained model checkpoint. [...] I have also noticed gt_labels[8:-7] in the evaluation file, and I think the reason is that they assign the averaged reconstruction error of a video clip to its center frame.

Exactly! Using the PyTorch built-in transformation pipeline and the pretrained model provided by the author, I could also get 94.1277% on Ped2. Thanks very much.

abhishekaich27 commented 3 years ago

@gdwang08 @donggong1 @lyn1874 In the testing script, for a given video, why don't we compare the scores frame-wise? We can always save the reconstruction error, and hence the score, for every frame. Why would that be incorrect or different? Why use the center frame, as explained in the earlier comments?

It would be great if you could define what "frame-level AUC" means. I was under the impression that we compare each frame's score, but that doesn't seem to be the case.

huyi1998 commented 2 years ago

> Exactly! Using the PyTorch built-in transformation pipeline and the pretrained model provided by the author, I could also get 94.1277% on Ped2.

Is the author lyn1874?