eroell opened this issue 3 months ago
I wonder whether we can figure out new implementations of this where we don't impute every single value one by one, but maybe a few closely related ones together? Like calculating KNN first and then imputing a group of values? I know that this would be a new imputation method, but oh well. Maybe an autoencoder imputation is also of interest? Probably faster to train and use. Would need to look at benchmarks..
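Just to make the "KNN once, then impute a group of values" idea concrete, a minimal sketch using only numpy and scikit-learn (the helper name and the column grouping are made up for illustration, not an existing ehrapy API):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_group_impute(X, group_cols, n_neighbors=5):
    """Hypothetical sketch: find neighbors once on the fully observed
    columns, then fill a whole group of related columns from the
    neighbors' means, instead of imputing every value independently."""
    X = X.copy()
    observed = [c for c in range(X.shape[1]) if not np.isnan(X[:, c]).any()]
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(X[:, observed])
    _, idx = nn.kneighbors(X[:, observed])  # neighbors computed once, reused for every column
    for col in group_cols:
        missing = np.isnan(X[:, col])
        # fill missing entries from the neighbors' (NaN-ignoring) mean in that column
        X[missing, col] = np.nanmean(X[idx[missing]][:, :, col], axis=1)
    return X
```

The point would be that the neighbor search happens once, not once per imputed value.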
This can absolutely be stretched to coming up with and adding more (well-performing) imputation strategies, yes!
Or even preparing larger synthetic datasets or ones which are well known in the imputation literature, and comparing different methods (and new ones) for performance, runtime, memory requirement, failure modes...
Not just an interesting notebook, but also a fast and convenient benchmarking possibility for others
Like the imputation part of the bias notebook, but bigger and focused on imputation
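For the benchmarking idea, roughly what such a comparison loop could look like (the mask-and-score setup and the imputer dict are placeholders, not an existing ehrapy interface):

```python
import time
import numpy as np

def benchmark_imputers(X_complete, imputers, missing_rate=0.2, seed=0):
    """Mask a fraction of values, run each imputer, and report runtime
    and RMSE on the masked entries. `imputers` maps a name to a callable
    that takes an array with NaNs and returns it filled."""
    rng = np.random.default_rng(seed)
    mask = rng.random(X_complete.shape) < missing_rate
    X_missing = X_complete.copy()
    X_missing[mask] = np.nan

    results = {}
    for name, impute in imputers.items():
        start = time.perf_counter()
        X_imputed = impute(X_missing.copy())
        runtime = time.perf_counter() - start
        rmse = np.sqrt(np.mean((X_imputed[mask] - X_complete[mask]) ** 2))
        results[name] = {"rmse": rmse, "runtime_s": runtime}
    return results
```

Memory requirements and failure modes would need extra instrumentation on top of this, but it would already cover imputation performance and runtime on the well-known literature datasets.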
MissForest with Extremely Randomized Trees can maybe be parallelized better
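One thing worth trying for the parallelization (just a sketch of the general approach, not how ehrapy wires it up internally): scikit-learn's IterativeImputer with an ExtraTreesRegressor and n_jobs=-1, which at least parallelizes the tree fitting across cores:

```python
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.impute import IterativeImputer

# MissForest-style imputation with Extremely Randomized Trees;
# n_jobs=-1 lets each forest fit use all available cores.
imputer = IterativeImputer(
    estimator=ExtraTreesRegressor(n_estimators=100, n_jobs=-1, random_state=0),
    max_iter=10,
    random_state=0,
)
# X_filled = imputer.fit_transform(X_with_nans)
```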
Question
Within ehrapy, we have two multiple imputation (MI) methods so far. MI methods are typically computationally expensive, but many benchmarks have shown them to have the best imputation performance. However, they are simply too slow for our big datasets on CPU, and we don't want to force users to use a GPU.
We should profile these two methods and check for
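A minimal sketch of how that profiling could look with just the standard library (the imputer callable is a placeholder for whichever of the two methods we profile; note tracemalloc only sees allocations that go through Python's allocator):

```python
import time
import tracemalloc

def profile_imputer(impute, X_with_nans):
    """Wall-clock time and peak traced memory for a single imputer run."""
    tracemalloc.start()
    start = time.perf_counter()
    impute(X_with_nans)
    runtime = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {"runtime_s": runtime, "peak_mem_mb": peak / 1e6}
```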