Closed hasandemirkiran closed 2 weeks ago
Hello,
Yes, currently only the single-layer variant is implemented, because it results in a simpler interface.
At the moment, the model takes
def __init__(self, model: Callable[[Tensor], Tensor], ...)
I think it would be possible to create a variant like this:
def __init__(self, model: List[Callable[[Tensor], Tensor]], ...)
where we treat the callables as a sequence of layers such that the output of each module is fed as input to the next one. Then we can store all of the intermediate features and build a multi-layer Mahalanobis model based on them.
A multi-layer variant should not be difficult to implement. Do you need this feature?
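The proposed interface could be sketched roughly as follows. This is only an illustrative numpy stand-in for the torch modules, and the names (extract_features, layers) are hypothetical, not the library's actual API:

```python
import numpy as np

def extract_features(layers, x):
    """Feed x through each callable in turn, storing every
    intermediate output so a per-layer Mahalanobis model can be
    fitted on the collected features later."""
    features = []
    for layer in layers:
        x = layer(x)          # output of one layer is input to the next
        features.append(x)
    return features

# toy "layers": plain numpy functions standing in for torch modules
layers = [lambda x: x @ np.ones((4, 3)), lambda x: np.tanh(x)]
feats = extract_features(layers, np.ones((2, 4)))
assert len(feats) == 2        # one feature map per layer
```

The key design point is that a single forward pass yields every intermediate representation, so fitting the per-layer Gaussians requires no extra passes over the data.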
Hi Konstantin,
Thanks for the quick reply.
I am currently using this in my project for benchmarking, so it would be great if you could update it accordingly. Please just let me know if you can merge it this week; otherwise, I will find a workaround and maybe implement it myself.
It took longer than anticipated, but I implemented a multi-layer variant in 9a29b082443f2a93caaf3a1e95b25fd6809782b1 that gives pretty good results on CIFAR-10. However, it does not yet support ODIN pre-processing. Would that be sufficient for you for now?
In the current implementation, it seems only the simpler version is implemented, which considers just the final features.
Are you planning to include the layer-wise / hidden feature ensemble method as well?
Ref from the paper: "Feature ensemble: To further improve the performance, we consider measuring and combining the confidence scores from not only the final features but also the other low-level features in DNNs."
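To make the quoted idea concrete, here is a hedged sketch of the feature ensemble: a Mahalanobis confidence score is computed per layer and the scores are combined as a weighted sum. In the paper the layer weights are learned with a logistic regression on validation data; the fixed weights and function names below are purely illustrative:

```python
import numpy as np

def layer_score(feat, class_means, precision):
    """Confidence at one layer: max over classes of the negative
    Mahalanobis distance to that class's feature mean."""
    scores = []
    for mu in class_means:
        d = feat - mu
        scores.append(-d @ precision @ d)
    return max(scores)

def ensemble_score(per_layer_feats, per_layer_stats, weights):
    """Weighted sum of the per-layer confidence scores."""
    total = 0.0
    for feat, (means, prec), w in zip(per_layer_feats, per_layer_stats, weights):
        total += w * layer_score(feat, means, prec)
    return total

# toy example: two layers, two classes each, identity precision matrices
stats = [([np.zeros(3), np.ones(3)], np.eye(3)),
         ([np.zeros(2), np.ones(2)], np.eye(2))]
feats = [np.zeros(3), np.zeros(2)]
s = ensemble_score(feats, stats, weights=[1.0, 1.0])
assert s == 0.0  # at both layers the nearest class mean is at distance 0
```

Combining low-level and final-layer scores this way is what the paper reports as improving detection over using the final features alone.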