Thank you for sharing.
I noticed that in SO-GAAL.py the same data is used in the detection part as in the training part.
Shouldn't we split the data into train and test sets first?
Also, how can we generate reasonable reference data (outliers)?
Finally, I noticed that the experimental data were generated by the authors.
Could this method also produce good results on public benchmarks such as the MVTec anomaly detection dataset?
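For context, the held-out evaluation I have in mind looks roughly like the sketch below. `IsolationForest` is used only as a stand-in detector (the SO-GAAL model itself is not shown here), and the data are synthetic, purely for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data: Gaussian inliers plus a few widely scattered outliers.
rng = np.random.RandomState(0)
X_inliers = rng.normal(0.0, 1.0, size=(500, 2))
X_outliers = rng.uniform(-6.0, 6.0, size=(25, 2))
X = np.vstack([X_inliers, X_outliers])
y = np.r_[np.zeros(500), np.ones(25)]  # 1 = outlier

# Hold out a test set so detection is NOT scored on the training data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Stand-in detector; SO-GAAL would analogously be fit on X_train only.
det = IsolationForest(random_state=0).fit(X_train)
scores = -det.score_samples(X_test)  # higher score = more anomalous
print("held-out AUC: %.3f" % roc_auc_score(y_test, scores))
```

The point is simply that the detector is fit on `X_train` and scored on `X_test`, rather than being evaluated on the very data it was trained on.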