pbosch opened this issue 3 years ago (status: Open)
Thanks for the note. I forgot to mention in the documentation that deep learning models will not benefit from SUOD acceleration... the bottleneck of deep learning training is mainly access to GPUs... Sorry for the confusion.
In that case, what is the best approach to benefit from the acceleration for non-deep-learning models while still being able to combine them with deep learning models? SUOD has that mechanism built in, which is handy. It would also be possible to train each model separately and then combine them afterwards. But is there a way to have both?
This is a great point! I think the combination can be approached from two perspectives. First, you may consider using deep learning models as feature extractors, and then apply classical OD models on the extracted latent representations.
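A rough sketch of that first idea, using a small hand-rolled Keras autoencoder as the feature extractor (rather than PyOD's AutoEncoder wrapper) and KNN as the classical detector; the layer sizes are arbitrary placeholders:

```python
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from pyod.models.knn import KNN
from pyod.utils.data import generate_data

# toy data just for illustration
X_train, X_test, y_train, y_test = generate_data(n_train=200, n_test=100, n_features=20)

# small autoencoder; layer sizes are arbitrary placeholders
inp = Input(shape=(X_train.shape[1],))
encoded = Dense(8, activation='relu')(inp)
latent = Dense(4, activation='relu', name='latent')(encoded)
decoded = Dense(8, activation='relu')(latent)
out = Dense(X_train.shape[1], activation='linear')(decoded)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.fit(X_train, X_train, epochs=20, batch_size=32, verbose=0)

# encoder that stops at the bottleneck -> latent representations
encoder = Model(inp, latent)
Z_train = encoder.predict(X_train)
Z_test = encoder.predict(X_test)

# classical OD model fitted on the latent features
clf = KNN()
clf.fit(Z_train)
latent_scores = clf.decision_function(Z_test)
```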
Second, you could construct a matrix to hold the outlier scores from each detector and then combine them as shown in https://github.com/yzhao062/pyod/blob/master/examples/comb_example.py.
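Roughly what that looks like (KNN and LOF are just placeholder detectors here; the standardizer/average/maximization helpers are the ones used in comb_example.py):

```python
import numpy as np
from pyod.models.knn import KNN
from pyod.models.lof import LOF
from pyod.models.combination import average, maximization
from pyod.utils.data import generate_data
from pyod.utils.utility import standardizer

X_train, X_test, y_train, y_test = generate_data(n_train=200, n_test=100, n_features=10)

detectors = [KNN(), LOF()]  # any PyOD detectors can go here
train_scores = np.zeros([X_train.shape[0], len(detectors)])
test_scores = np.zeros([X_test.shape[0], len(detectors)])

for i, det in enumerate(detectors):
    det.fit(X_train)
    train_scores[:, i] = det.decision_scores_           # raw scores on training data
    test_scores[:, i] = det.decision_function(X_test)   # raw scores on test data

# standardize the score columns before combining, as in comb_example.py
train_scores_norm, test_scores_norm = standardizer(train_scores, test_scores)

combined_by_average = average(test_scores_norm)
combined_by_max = maximization(test_scores_norm)
```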
Actually, I created another package called combo a few years ago for model combination: https://github.com/yzhao062/combo/blob/master/examples/detector_comb_example.py, although I am not sure whether deep learning models are compatible there.
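Something along these lines, if combo's SimpleDetectorAggregator interface still looks like the example (classical detectors only; untested with deep models):

```python
from combo.models.detector_comb import SimpleDetectorAggregator
from pyod.models.knn import KNN
from pyod.models.lof import LOF
from pyod.utils.data import generate_data

X_train, X_test, y_train, y_test = generate_data(n_train=200, n_test=100)

# aggregate a few classical detectors; 'average' or 'maximization' combination
clf = SimpleDetectorAggregator(base_estimators=[KNN(), LOF()], method='average')
clf.fit(X_train)
combined_scores = clf.decision_function(X_test)
```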
Sorry for the late answer, I didn't get any notification for some reason.
The second option you outlined is what I had in mind. But I think the first option might work better for my current problem. The data is fairly noisy and the prediction probabilities are all over the place.
Considering, for example, an AutoEncoder, what would be the easiest way to extract the latent representations? Would I need to go through the Keras object or is there another way?
Environment: WSL2 with Conda (Python 3.8.10)
Error:
How to reproduce: add AutoEncoder or DeepSVDD to the list of detectors in the base SUOD example, e.g. DeepSVDD(hidden_neurons=[2, 1]).
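Something like this, loosely based on the base SUOD example (LOF and IForest are just placeholder classical detectors; newer PyOD versions may also require an n_features argument for DeepSVDD):

```python
from pyod.models.lof import LOF
from pyod.models.iforest import IForest
from pyod.models.deep_svdd import DeepSVDD
from pyod.models.suod import SUOD
from pyod.utils.data import generate_data

X_train, X_test, y_train, y_test = generate_data(n_train=200, n_test=100)

# classical detectors plus a deep model in the same SUOD list
detector_list = [LOF(n_neighbors=15), IForest(n_estimators=100),
                 DeepSVDD(hidden_neurons=[2, 1])]

clf = SUOD(base_estimators=detector_list, n_jobs=2,
           combination='average', verbose=False)
clf.fit(X_train)  # the error appears here once a deep model is in the list
```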
Just for completeness' sake, using AutoEncoder or DeepSVDD alone works perfectly fine.