jinmang2 / anomaly_detection_on_video

Implementation for Anomaly Detection on Video

Performance Issue #6

Open jinmang2 opened 1 year ago

jinmang2 commented 1 year ago

Below is the result after 1 epoch. When training runs for 5–7 epochs, the AUC consistently gets stuck at 0.5.

Step:0 AvgLoss=0.8710 StdLoss=0.0000 1stLoss=0.8710 lastLoss=0.8710 minLoss=0.8710 maxLoss=0.8710 AUC=0.5002782708306447 PR=0.05059964626102166
Step:5 AvgLoss=3.9338 StdLoss=2.8243 1stLoss=8.2669 lastLoss=0.7623 minLoss=0.7623 maxLoss=8.2669 AUC=0.807517034579216 PR=0.1607576241294518
Step:10 AvgLoss=1.4919 StdLoss=0.6140 1stLoss=2.2747 lastLoss=0.9958 minLoss=0.5807 maxLoss=2.2747 AUC=0.16762122144976283 PR=0.04338848329704854
Step:15 AvgLoss=1.6446 StdLoss=0.1394 1stLoss=1.4703 lastLoss=1.4876 minLoss=1.4703 maxLoss=1.8131 AUC=0.4684390478231859 PR=0.0674268651188819
Step:20 AvgLoss=0.8132 StdLoss=0.2885 1stLoss=1.3199 lastLoss=0.6997 minLoss=0.5105 maxLoss=1.3199 AUC=0.28932992024636345 PR=0.04891444946420391
Step:25 AvgLoss=1.7628 StdLoss=0.4837 1stLoss=1.0773 lastLoss=2.4392 minLoss=1.0773 maxLoss=2.4392 AUC=0.250310576939738 PR=0.04754468670573986
Step:30 AvgLoss=2.0314 StdLoss=0.6903 1stLoss=2.3869 lastLoss=3.0599 minLoss=0.9971 maxLoss=3.0599 AUC=0.19898115628365431 PR=0.04537879048030299
Step:35 AvgLoss=2.7766 StdLoss=0.4847 1stLoss=1.9832 lastLoss=2.8865 minLoss=1.9832 maxLoss=3.5010 AUC=0.19266569255059698 PR=0.045179576693688266
Step:40 AvgLoss=1.4887 StdLoss=0.6184 1stLoss=2.4235 lastLoss=1.5882 minLoss=0.7377 maxLoss=2.4235 AUC=0.2082122552455719 PR=0.04595771143775035
Step:45 AvgLoss=3.5326 StdLoss=0.6481 1stLoss=2.5429 lastLoss=4.3489 minLoss=2.5429 maxLoss=4.3489 AUC=0.22890342757200227 PR=0.04687964166444252
Step:50 AvgLoss=4.4456 StdLoss=0.2760 1stLoss=4.2121 lastLoss=4.6695 minLoss=4.0198 maxLoss=4.6705 AUC=0.23103430776821504 PR=0.046692345875075564
Step:55 AvgLoss=4.3919 StdLoss=0.4729 1stLoss=5.1393 lastLoss=3.8732 minLoss=3.8732 maxLoss=5.1393 AUC=0.64965516838153 PR=0.1271760848774102
Step:60 AvgLoss=2.3299 StdLoss=0.8748 1stLoss=3.5063 lastLoss=1.0879 minLoss=1.0879 maxLoss=3.5063 AUC=0.24815387917256937 PR=0.04760524419496417
Step:65 AvgLoss=2.0101 StdLoss=0.9458 1stLoss=0.6684 lastLoss=3.1565 minLoss=0.6684 maxLoss=3.1565 AUC=0.23271640500146432 PR=0.046771352912131886
Step:70 AvgLoss=4.9886 StdLoss=0.3729 1stLoss=4.8337 lastLoss=5.5250 minLoss=4.5273 maxLoss=5.5250 AUC=0.22837146520942792 PR=0.0474814242744029
Step:75 AvgLoss=6.7947 StdLoss=2.1007 1stLoss=6.0492 lastLoss=4.8841 minLoss=4.8814 maxLoss=10.4502 AUC=0.8196226162970799 PR=0.2167952561389752
Step:80 AvgLoss=3.6163 StdLoss=1.6280 1stLoss=4.1388 lastLoss=5.8871 minLoss=1.1014 maxLoss=5.8871 AUC=0.20447705842611957 PR=0.045711246652245706
Step:85 AvgLoss=7.3875 StdLoss=0.4127 1stLoss=6.6763 lastLoss=7.8578 minLoss=6.6763 maxLoss=7.8578 AUC=0.20170044504595908 PR=0.04556447331994784
Step:90 AvgLoss=8.3853 StdLoss=0.1805 1stLoss=8.1563 lastLoss=8.5469 minLoss=8.1563 maxLoss=8.5845 AUC=0.20075093413374226 PR=0.04559923416844119
Step:95 AvgLoss=9.1205 StdLoss=0.1703 1stLoss=8.8513 lastLoss=9.3513 minLoss=8.8513 maxLoss=9.3513 AUC=0.8391810918017995 PR=0.20900376961617778

Not sure where to start digging. Will revisit as time permits.

jinmang2 commented 1 year ago

After reducing the learning_rate from the paper's 1e-3 to 5e-5, training became stable. Below are the results up to epoch 13, followed by a minimal sketch of the change.

Epoch:0 AvgLoss=0.5079 StdLoss=0.5622 1stLoss=0.8504 lastLoss=0.0273 minLoss=0.0262 maxLoss=2.3432 AUC=0.8209488342356742 PR=0.17374685188974917
Epoch:1 AvgLoss=0.0688 StdLoss=0.1559 1stLoss=0.0273 lastLoss=0.0122 minLoss=0.0096 maxLoss=0.9602 AUC=0.7506779890089551 PR=0.149105935544754
Epoch:2 AvgLoss=0.0207 StdLoss=0.0273 1stLoss=0.0138 lastLoss=0.0095 minLoss=0.0092 maxLoss=0.2862 AUC=0.7698872776365311 PR=0.20772500215004777
Epoch:3 AvgLoss=0.0044 StdLoss=0.0025 1stLoss=0.0092 lastLoss=0.0016 minLoss=0.0009 maxLoss=0.0099 AUC=0.7942999232738543 PR=0.20682673186153214
Epoch:4 AvgLoss=0.0028 StdLoss=0.0081 1stLoss=0.0007 lastLoss=0.0007 minLoss=0.0003 maxLoss=0.0586 AUC=0.7883740128587733 PR=0.19722470133235492
Epoch:5 AvgLoss=0.0007 StdLoss=0.0006 1stLoss=0.0005 lastLoss=0.0007 minLoss=0.0002 maxLoss=0.0059 AUC=0.7937864068755323 PR=0.21030775337360796
Epoch:6 AvgLoss=0.0055 StdLoss=0.0098 1stLoss=0.0008 lastLoss=0.0027 minLoss=0.0003 maxLoss=0.0520 AUC=0.7926402310865295 PR=0.2100002276501075
Epoch:7 AvgLoss=0.0036 StdLoss=0.0146 1stLoss=0.0021 lastLoss=0.0012 minLoss=0.0003 maxLoss=0.1404 AUC=0.7822288312695057 PR=0.2052078117158595
Epoch:8 AvgLoss=0.0011 StdLoss=0.0040 1stLoss=0.0006 lastLoss=0.0006 minLoss=0.0003 maxLoss=0.0402 AUC=0.7625039288071698 PR=0.1630913048315631
Epoch:9 AvgLoss=0.0009 StdLoss=0.0031 1stLoss=0.0004 lastLoss=0.0004 minLoss=0.0003 maxLoss=0.0320 AUC=0.7549247801599994 PR=0.1530980353393299
Epoch:10 AvgLoss=0.0008 StdLoss=0.0015 1stLoss=0.0003 lastLoss=0.0004 minLoss=0.0003 maxLoss=0.0123 AUC=0.7519393635377 PR=0.14999856338789894
Epoch:11 AvgLoss=0.0006 StdLoss=0.0014 1stLoss=0.0003 lastLoss=0.0003 minLoss=0.0002 maxLoss=0.0124 AUC=0.7509519507553603 PR=0.14901394825739794
Epoch:12 AvgLoss=0.0003 StdLoss=0.0002 1stLoss=0.0010 lastLoss=0.0001 minLoss=0.0001 maxLoss=0.0017 AUC=0.7506210246174259 PR=0.14861824821194214
Epoch:13 AvgLoss=0.0003 StdLoss=0.0013 1stLoss=0.0002 lastLoss=0.0002 minLoss=0.0001 maxLoss=0.0132 AUC=0.7504919395258248 PR=0.14848371734385074
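
For reference, a minimal sketch of the change, assuming an Adam-style optimizer (the optimizer type and the stand-in module are assumptions; only the learning-rate values come from this thread):

    import torch

    LEARNING_RATE = 5e-5                              # reduced from the paper's 1e-3

    model = torch.nn.Linear(2048, 1)                  # stand-in for the actual network
    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
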
jinmang2 commented 1 year ago

Epoch:7 AUC=0.8073139100366262 PR=0.17819042863041154 
    1stLoss=0.729276 lastLoss=1.744768 minLoss=0.689861 
    AvgLoss=1.752194 StdLoss=0.478671 maxLoss=2.547211

image

The paper's metrics look very odd. The run above reaches 80% AUC, yet the model clearly fails to predict sensible anomaly scores. Is this how papers are written in this field?

Epoch:11 AUC=0.21349261189907412 PR=0.04524469008957239 
    1stLoss=1.322540 lastLoss=2.434873 minLoss=0.655566 
    AvgLoss=1.243009 StdLoss=0.466013 maxLoss=2.464526

image

Could the AUC be high simply because the model lumps everything toward the bottom (normal)? That explanation does not hold either, since epoch 11's AUC is then far too low.

arrays.zip

It looks like I need a deeper understanding of ROC and AUC here.
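
A small self-contained illustration (not code from this repo) of why this can happen: ROC-AUC and PR-AUC depend only on how the scores rank anomalous frames against normal ones, not on their absolute values or on any 0.5 threshold, so near-zero predictions can still give a high AUC as long as the ordering is mostly right.

    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score

    y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])          # mostly normal frames
    scores = np.array([0.01, 0.02, 0.01, 0.03, 0.02, 0.01, 0.02, 0.03, 0.04, 0.05])

    print(roc_auc_score(y_true, scores))            # 1.0: anomalies are ranked highest
    print(roc_auc_score(y_true, scores * 0.001))    # still 1.0: the scale does not matter
    print(average_precision_score(y_true, scores))  # PR-AUC is rank-based as well

So a nearly flat score curve with a high AUC is not a contradiction by itself; it means the ordering is largely correct while the calibration is poor.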

jinmang2 commented 1 year ago

Training on the features provided by the RTFM authors (the MGFN authors used the same features) gives a more stable loss than my reproduction, and the AUC/PR climb steadily.

Epoch:7 AUC=0.784771457709642 PR=0.1822431235579869 
    1stLoss=0.227857 lastLoss=0.312515 minLoss=0.064932 
    AvgLoss=0.278438 StdLoss=0.139339 maxLoss=0.683510

image

Epoch:15 AUC=0.8012820684892561 PR=0.22034166273129582 
    1stLoss=0.250363 lastLoss=0.143961 minLoss=0.057977 
    AvgLoss=0.197606 StdLoss=0.102999 maxLoss=0.462289

image

I need to establish whether the problem was in the code, in how the features were extracted, or in how TenCrop was applied.
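
On the TenCrop point, a minimal sketch of the usual per-frame pattern (taken from torchvision's documented TenCrop usage; whether my extractor matched it is exactly what needs checking):

    import torch
    from PIL import Image
    from torchvision import transforms

    ten_crop = transforms.Compose([
        transforms.Resize(256),
        transforms.TenCrop(224),                              # 4 corners + center, and their flips
        transforms.Lambda(lambda crops: torch.stack(
            [transforms.ToTensor()(c) for c in crops])),      # -> (10, 3, 224, 224)
    ])

    frame = Image.new("RGB", (320, 240))                      # dummy frame for illustration
    print(ten_crop(frame).shape)                              # torch.Size([10, 3, 224, 224])

As far as I know, the RTFM/MGFN features keep the 10 crops separate rather than averaging them, so the crop axis has to survive all the way into the saved feature files.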

jinmang2 commented 1 year ago

Below are the results using the pretrained model provided by the authors.

Evaluation: 100% 290/290 [00:29<00:00, 3.39it/s]
AUC=0.8345831999964156 PR=0.23533653801830107

image

This looks noticeably lower than the 86.98% reported in the paper.

The same gap is also mentioned in https://github.com/carolchenyx/MGFN./issues/27.

jinmang2 commented 1 year ago

Revisiting the 1-epoch log from the opening comment above (AUC repeatedly collapsing toward 0.5 after 5–7 epochs), a few observations:
  • Is this a feature extractor problem? (The original paper reports its best result at epoch 563.)
  • There are odd cases where performance peaks even while the loss is diverging.
    • Even granting that loss and AUC need not move in lockstep, fluctuation this severe is strange.
  • Given identical inputs, the model's outputs were verified to match the authors' model.
  • Resolving Converge to dead point #5 so that abnormal samples are correctly labeled 1 in the contrastive loss did not help: the model still assigns every prediction value to 0.
    • It is also odd that only the last two values are 1. (abnormal), but setting that aside for now.

Still not sure where to start digging. When time permits:

  • Check the loss and AUC trends using the authors' features.
  • If those are just as abnormal -> inspect the code implementation.
  • If those look normal -> inspect the feature extractor.

image

jinmang2 commented 1 year ago

https://github.com/jinmang2/anomaly_detection_on_video/blob/a4b474694efe6cf9f741afdd32f50ee523315cf8/run.py#L70-L73

The problem is indeed with the features. The reason the figure above came out the way it did even with the authors' features is that zero_grad was omitted here:

optimizer.zero_grad()

Skipping this obvious one-liner during training was bound to break things.
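
A self-contained sketch of where that line belongs in the step (the model, loss, and loader below are stand-ins, not what run.py actually uses):

    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(2048, 1), nn.Sigmoid())     # stand-in network
    criterion = nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
    train_loader = [(torch.randn(8, 2048), torch.randint(0, 2, (8, 1)).float()) for _ in range(4)]

    for features, labels in train_loader:
        optimizer.zero_grad()        # the omitted line: clear gradients from the previous iteration
        loss = criterion(model(features), labels)
        loss.backward()              # without zero_grad(), these gradients pile up across steps
        optimizer.step()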

Author's feature

My feature

jinmang2 commented 1 year ago

Closing this issue: the cause was the pytorchvideo feature extractor. Will re-open once features have been extracted with the ResNet I3D model provided by Tushar-N (also a Meta AI model, incidentally) and the performance trend has been checked.

jinmang2 commented 1 year ago

Resumed training with features extracted by Tushar-N's I3D after paying for Colab Pro+ in August.

Epoch:0 AUC=0.7926582696662146 PR=0.17563552668700286 
    1stLoss=0.956566 lastLoss=0.506874 minLoss=0.279191 
    AvgLoss=0.711341 StdLoss=0.619753 maxLoss=4.904729

image

Epoch:1 AUC=0.8077230496824823 PR=0.18704329242098633 
    1stLoss=0.320969 lastLoss=0.435453 minLoss=0.260262 
    AvgLoss=0.440845 StdLoss=0.115253 maxLoss=0.787687

image

Epoch:2 AUC=0.7936505889319125 PR=0.17534859643224907 
    1stLoss=0.348333 lastLoss=0.532766 minLoss=0.217879 
    AvgLoss=0.374919 StdLoss=0.090575 maxLoss=0.577715

image

Epoch:3 AUC=0.8010720535013867 PR=0.1924331817342682 
    1stLoss=0.483742 lastLoss=0.511116 minLoss=0.199906 
    AvgLoss=0.332311 StdLoss=0.100179 maxLoss=0.636691

image

The model now trains stably, without the earlier model collapse.

If reproduction reaches 85% or higher, deploying the model and putting it on an HF Space with Gradio should wrap up this personal side project.

jinmang2 commented 1 year ago

Without any change to the feature extraction code, merely swapping the pytorchvideo model for Tushar-N's baseline I3D made the model converge noticeably better. Understanding why will require a closer look at the training process.
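
A rough sketch of the kind of swap meant here, assuming the extraction loop only needs a module that maps a clip tensor to pooled clip features (the interface and the toy backbone are assumptions, not this repo's actual extractor):

    import torch
    from torch import nn

    def extract_features(backbone: nn.Module, clips: torch.Tensor, chunk: int = 8) -> torch.Tensor:
        """clips: (N, 3, T, H, W) -> (N, feature_dim). `backbone` can be the pytorchvideo
        model or Tushar-N's ResNet I3D, as long as its forward pass returns pooled features."""
        backbone.eval()
        with torch.no_grad():
            feats = [backbone(c) for c in clips.split(chunk)]
        return torch.cat(feats, dim=0)

    # Toy backbone for illustration: global-average-pool a clip, then project to 2048-d.
    toy_backbone = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(3, 2048))
    print(extract_features(toy_backbone, torch.randn(4, 3, 16, 224, 224)).shape)  # (4, 2048)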