Could be nice to consider the removal based on the expected impact. For example, two features correlated at 0.90 could have different correlations with the target. Then select the one with the higher correlation with the target.
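A minimal sketch of that strategy (hypothetical helper and names, assuming pandas inputs; not existing probatus code):

```python
import pandas as pd

def pick_feature_to_drop(X: pd.DataFrame, y: pd.Series, feat_a: str, feat_b: str) -> str:
    """Of two highly correlated features, return the one to drop:
    the one with the lower absolute correlation with the target."""
    corr_a = X[feat_a].corr(y)
    corr_b = X[feat_b].corr(y)
    return feat_a if abs(corr_a) < abs(corr_b) else feat_b
```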
@gverbock indeed that would be a nice additional strategy.
It requires passing y to the object, so I would start with the simpler strategies first. However, if we decide to develop it, I will make an issue for that!
@Matgrb not sure what is the point of removing correlated features iteratively? Why not save some computational power and remove all features above threshold H in one go, once you have the correlation matrix?
@gverbock @Matgrb Regarding some simple intelligence: you can create a feature rank that measures the number of pairs in which a given feature has correlation above H, and then use this rank as an additional elimination rule. Assume you have X1, X2, X3 and X4. X1 is correlated with X4 and X1 is correlated with X2; no other features are correlated with each other. In this case it makes sense to remove X1 rather than consider removing X4 or X2, as the sketch below illustrates.
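A sketch of that ranking (hypothetical function, assuming a pandas DataFrame of numeric features):

```python
import numpy as np
import pandas as pd

def correlation_rank(X: pd.DataFrame, threshold: float = 0.9) -> pd.Series:
    """Count, per feature, the number of other features it is
    correlated with above the threshold H."""
    corr = X.corr().abs()
    # zero out the diagonal so self-correlation is not counted
    off_diag = corr.where(~np.eye(len(corr), dtype=bool), 0)
    return (off_diag > threshold).sum(axis=1).sort_values(ascending=False)
```

In the example above, X1 would get rank 2 while X2 and X4 get rank 1 each, so X1 is removed first.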
Regarding categorical features, there is no easy solution.
I would propose starting with a simple implementation using Pearson correlation, as described in the initial issue description. I can also add the ranking I mentioned above. If you agree, I can pick it up and make a PR.
The main point for doing this iteratively is the following situation: features A, B, C, D with correlations A-B 0.95, A-C 0.9, A-D 0.8, B-C 0.95, B-D 0.8, C-D 0.8. The features in pairs correlated at or above 0.95 are A and B. If we remove iteratively, we remove only one of them, and after that we don't have to remove B, because it is no longer correlated with the remaining features above the threshold and it carries information that the other features are missing. If we instead start by removing all correlated features in one go, we lose the information that both A and B had and that C and D were missing. I think doing it iteratively does not cause much of a performance drop, because the correlation matrix for the entire dataset is already precomputed and we only remove rows and columns from it, but it gives flexibility in case the user wants to select which of two correlated features to remove.
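To make the procedure concrete, here is a sketch of the iterative loop over a precomputed matrix (hypothetical function and names, not probatus code; it drops one of the pair at random):

```python
import numpy as np
import pandas as pd

def drop_correlated_iteratively(X: pd.DataFrame, threshold: float = 0.95, random_state=None):
    """Compute the correlation matrix once, then repeatedly drop one
    feature from the most correlated pair, removing only its row and
    column, until no pair is at or above the threshold."""
    rng = np.random.default_rng(random_state)
    corr = X.corr().abs()
    dropped = []
    while len(corr) > 1:
        # ignore self-correlation on the diagonal
        off_diag = corr.where(~np.eye(len(corr), dtype=bool), 0)
        if off_diag.values.max() < threshold:
            break
        row, col = np.unravel_index(off_diag.values.argmax(), off_diag.shape)
        # pick one of the two features at random; this is the hook where
        # other selection rules (target correlation, business preference) fit
        to_drop = rng.choice([corr.index[row], corr.columns[col]])
        dropped.append(to_drop)
        corr = corr.drop(index=to_drop, columns=to_drop)
    return dropped
```

On the A, B, C, D example this stops as soon as no remaining pair reaches 0.95, without ever recomputing the matrix.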
An example of this: the business prefers to work with features that they understand better, so from the two correlated ones they would choose whichever suits them best.
I agree, let's start with Pearson correlation and numeric features. I would propose to also do this iteratively and, for now, just select randomly one of the two. Later we can add other ways of selecting. Please make a class in feature_elimination that follows the API of the other classes, e.g. ShapRFECV: init, fit, fit_compute, compute, plot. Use the same parameter names where they overlap. You can also add docstrings already.
Once you make a PR, we can discuss there what other steps are needed.
Ok, I see the point. I thought you were recomputing the correlation after every step, which got me confused :) Can you assign the issue to me?
If I understand correctly, feature-engine provides similar functionality.
Do we still find this suggestion important or shall we close it, since it has been inactive for so long?
I would close this issue and refer to feature_engine for feature elimination based on correlation. I believe it makes more sense to suggest changes to feature_engine's existing functionality than to build new functionality in Probatus.
This issue is closed, as feature-engine provides this functionality.
The process of feature elimination could be iterative and follow this schema: compute the correlation matrix once, find the pair of features with the highest correlation at or above the threshold, remove one of the two features, drop its row and column from the matrix, and repeat until no pair exceeds the threshold.
The elimination strategy could be the following: from a pair of features correlated above the threshold, select one to remove, for now at random; smarter selection rules can be added later.
In the future we could also consider adding: selection based on correlation with the target, the pair-count ranking described above, support for categorical features, and correlation measures other than Pearson.
Example code that I used for a similar purpose is below. It needs to be refactored and adapted to the probatus API.
You can make a class in the feature_elimination module. This class could look like this:
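The original snippet is not preserved in this thread. As a placeholder, here is a minimal sketch of what such a class might look like, following the init / fit / compute / fit_compute / plot API mentioned above (all names and parameters are assumptions to be settled in the PR):

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

class CorrelatedFeatureElimination:
    """Iteratively removes one feature from each pair whose absolute
    Pearson correlation is at or above the threshold.

    Hypothetical sketch; not the author's original code.
    """

    def __init__(self, threshold=0.95, random_state=None):
        self.threshold = threshold
        self.random_state = random_state

    def fit(self, X: pd.DataFrame):
        rng = np.random.default_rng(self.random_state)
        corr = X.corr().abs()
        self.removed_features_ = []
        while len(corr) > 1:
            # ignore self-correlation on the diagonal
            off_diag = corr.where(~np.eye(len(corr), dtype=bool), 0)
            if off_diag.values.max() < self.threshold:
                break
            row, col = np.unravel_index(off_diag.values.argmax(), off_diag.shape)
            # random selection between the two, per the discussion above
            to_drop = rng.choice([corr.index[row], corr.columns[col]])
            self.removed_features_.append(to_drop)
            corr = corr.drop(index=to_drop, columns=to_drop)
        self.selected_features_ = list(corr.columns)
        return self

    def compute(self):
        """Return a report of the removed features."""
        return pd.DataFrame({"removed_feature": self.removed_features_})

    def fit_compute(self, X: pd.DataFrame):
        self.fit(X)
        return self.compute()

    def plot(self, X: pd.DataFrame):
        """Show the correlation matrix of the selected features."""
        corr = X[self.selected_features_].corr()
        plt.imshow(corr, vmin=-1, vmax=1, cmap="coolwarm")
        plt.colorbar()
        plt.xticks(range(len(corr)), corr.columns, rotation=90)
        plt.yticks(range(len(corr)), corr.columns)
        plt.tight_layout()
        plt.show()
```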