OutlierDetectionJL / OutlierDetection.jl

Fast, scalable and flexible Outlier Detection with Julia
https://outlierdetectionjl.github.io/OutlierDetection.jl/dev/
MIT License

Adapting to proposed changes to the way MLJ exports learning networks. #35

Closed ablaom closed 8 months ago

ablaom commented 1 year ago

A continuation of the discussion at https://github.com/JuliaAI/MLJBase.jl/pull/841#issuecomment-1281941079.

I've started to look a bit more closely at the code here. The current use of augmented_transform is making adaptation difficult. I feel that the ultimate reason for this is that augmented_transform is trying to mimic an MLJ operation, like transform or predict, but is not able to do so faithfully, for the following reason.

All operations in MLJ are defined in the first place on models, and only overloaded for machines later, in a rather generic way. The signature of an operation looks like op(model, fitresult, X), which only includes fitresult from the output of fit(model, ...) - it does not see report. But the way OutlierDetection currently works, the training scores, which augmented_transform needs, are placed in the report, so it's not possible to define augmented_transform at the level of models.

Consequently, the OutlierDetection code contains outlier-specific overloading of augmented_transform for machines, which seems to me to be a hack. One specific problem is that this includes some logic to distinguish the different types of machine (e.g., one wrapping an ordinary atomic detector, versus some kind of composite). This logic is achieved through dispatch on the type of M in Machine{M}. But under the "banana" changes proposed at MLJBase, M is going to be Symbol all the time, so the type we want is unknown.
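To make the signature problem concrete, here is a minimal sketch in plain Julia (no MLJ dependency; ToyDetector and all names are hypothetical, only mirroring the fitresult/report split described above):

```julia
# Hypothetical toy detector illustrating the signature problem.
struct ToyDetector end

# MLJ-style fit returns the pair (fitresult, report) (cache omitted here).
# As in OutlierDetection's current layout, the training scores land in
# `report`, not in `fitresult`.
function fit(model::ToyDetector, X::Vector{Float64})
    center = sum(X) / length(X)
    train_scores = abs.(X .- center)         # outlyingness of training points
    fitresult = center                       # all that operations get to see
    report = (train_scores = train_scores,)  # invisible to operations
    return fitresult, report
end

# An MLJ-style operation sees only (model, fitresult, Xnew) ...
transform(model::ToyDetector, fitresult, Xnew) = abs.(Xnew .- fitresult)

# ... so a model-level augmented_transform, which must return the pair
# (train_scores, test_scores), cannot be written with this signature:
# the train scores live in `report`, which never reaches the operation.
```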

Two possibilities I see - there may well be others - are:

  1. Put training scores of all detectors in fitresult instead of report. (By the way, there is now a mechanism to expose internals of a learning network as an item in the exported composite model's fitresult, as we already had for reports. See Example F in the "banana" doc PR.) Define augmented_transform at the level of models, and only extend it later for machines in a generic way.

  2. Abandon the augmented_transform approach altogether and begin afresh. My feeling is that it ought to be possible to make this work, but I'm not familiar enough with the code base now to know how much work this would be. Maybe I could develop a proof of concept.
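Option (1) could look something like the following minimal sketch, again in plain Julia with hypothetical names. Bundling the training scores into fitresult lets augmented_transform be defined with the standard model-level operation signature:

```julia
# Hypothetical sketch of option (1): training scores live in fitresult.
struct ToyDetector end

struct ToyFitresult
    center::Float64
    train_scores::Vector{Float64}
end

function fit(model::ToyDetector, X::Vector{Float64})
    center = sum(X) / length(X)
    scores = abs.(X .- center)
    return ToyFitresult(center, scores), NamedTuple()  # empty report
end

# The standard operation signature op(model, fitresult, Xnew) now
# suffices to return both train and test scores.
function augmented_transform(model::ToyDetector, fitresult::ToyFitresult, Xnew)
    test_scores = abs.(Xnew .- fitresult.center)
    return fitresult.train_scores, test_scores
end
```

With this layout, the machine-level method could then be derived generically from the model-level one, with no outlier-specific dispatch on Machine{M}.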

@davnn I wonder if you have a feeling about which approach might be best, or some other idea?

In answer to some earlier queries:

but that doesn't work (maybe because signatures only work with a hard-coded set of operations?)

Yes, only the hard-coded operations can be used here.

Otherwise I don't see major hurdles, another small thing is that we defined a fallback for fit to allow usage with unsupervised models, i.e. https://github.com/JuliaAI/MLJModelInterface.jl/blob/90875b72fd92fbde0424ea55b5df1c604cf92c27/src/model_api.jl#L21, where would such a fallback land for prefit?

I shouldn't think you need to provide any fallback for prefit at all, because only fit and update ever call prefit, and these already have fallbacks, ensuring only the one prefit ever gets called. But perhaps you are ahead of me on this one and I may be missing something.

davnn commented 1 year ago

Regarding option (1): That's actually how the package started out initially: the fit result was standardized as a struct combining a model and scores. I mainly changed it because the report seemed to be a better fit. It would require some work to revert everything back to that API.

Regarding (2): The library initially returned a tuple of train and test scores from transform; in the beginning, transform was augmented_transform, and this was one of the main motivations for building the library. augmented_transform is the most flexible way to combine multiple detection models, because you always work with both the train and test scores.

In conclusion, I would prefer option (1). Regarding prefit, I'll have a look when the changes are implemented.

ablaom commented 1 year ago

Sounds good. Let me know if and when further guidance from me would be helpful.

davnn commented 8 months ago

Adapted to the new API for MLJBase v1 compatibility; no more augmented_transform.