EHWUSF / HS68_2018_Project_1


A Feature Engineering Tool: Finding unimportant features #10

Open omidkj opened 6 years ago

omidkj commented 6 years ago

The first step in finding and selecting the most useful features in a dataset is identifying the unimportant features and removing them from the dataset, which increases training speed and model interpretability. For this tool we can develop some of the following methods:

- Finding features with the most missing values (above a given threshold)
- Finding features that have a single unique value
- Finding highly correlated features
- Finding features with low normalized importance
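As a rough illustration of the first method, here is a minimal sketch (assuming the dataset is a pandas DataFrame; the function name is hypothetical) of ranking features by their fraction of missing values:

```python
import pandas as pd

def missing_value_fractions(df: pd.DataFrame) -> pd.Series:
    """Fraction of missing values per feature, most-missing first."""
    return df.isnull().mean().sort_values(ascending=False)
```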

nirveshk commented 6 years ago

You have covered a wide range of features; I am quite interested in the first one you proposed. Finding a way to weigh missing values, assessing them right off the bat, and taking care of them before analyzing the data would make the algorithm quite useful and worthwhile.

omidkj commented 6 years ago

I like the idea of an automated way to find the right threshold for each dataset. However, some datasets contain features that are really important and that we don't want to remove even when they have many missing values. The proposed method only lists the features with the most missing values; after reviewing them and applying domain knowledge, we can decide whether to keep or remove each one.

rohitchadaram commented 6 years ago

Finding features with the most missing values: how do you plan to arrive at a particular threshold value (e.g., 0.45)? I really like all the features you plan to implement, especially this one: finding any features that have a single unique value.

In my opinion it's hard to come up with a normalized importance value, since importance values range from negative to positive in some cases and are only positive in others. So how do you plan to make the normalized value work in both cases?
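For the single-unique-value check, a minimal sketch (pandas assumed; the function name is hypothetical):

```python
import pandas as pd

def single_unique_features(df: pd.DataFrame) -> list:
    """Names of columns that contain only one unique value."""
    # dropna=False also flags columns that are entirely NaN
    return [col for col in df.columns if df[col].nunique(dropna=False) == 1]
```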

douglas-yao commented 6 years ago

Doesn't normalization account for negative and positive values, to place all values in the dataset between 0 and 1?

haleyhowe commented 6 years ago

I think this is a great idea. Maybe instead of setting a threshold, it would simply output the percentage of missing values and let the user choose from there? Or the user could input a threshold based on their knowledge of how much that percentage of missing values matters for their specific dataset. This could be a simple program that outputs all of the metrics we need on the variables, which would simplify the process of exploring each one.

omidkj commented 6 years ago

@rohitchadaram I believe choosing a particular threshold is a team decision based on the nature of the dataset they're working with. That's why this value is passed as a parameter rather than set automatically in the program. As @douglas-yao mentioned, the normalized importance value is between 0 and 1.
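For reference, a minimal sketch of min-max scaling, one way (an assumption, not necessarily this project's method) to map importances, negative or positive, into [0, 1]:

```python
import numpy as np

def normalize_importances(importances):
    """Min-max scale importance values into the range [0, 1]."""
    imp = np.asarray(importances, dtype=float)
    span = imp.max() - imp.min()
    if span == 0:
        # All importances are equal; map everything to 0
        return np.zeros_like(imp)
    return (imp - imp.min()) / span
```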

omidkj commented 6 years ago

@haleyhowe That's exactly what this method is supposed to do: receive a percentage (threshold) and output a list of features whose percentage of missing values is greater than that threshold. Then we can decide which feature(s) to eliminate and which to keep by applying domain knowledge. We can also create another method, 'remove_uf', that receives a list of feature names and the dataset and returns a new dataset with those features removed.
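A minimal sketch of these two methods ('remove_uf' is the name proposed above; the threshold helper's name is hypothetical):

```python
import pandas as pd

def missing_above_threshold(df: pd.DataFrame, threshold: float) -> pd.Series:
    """Features whose fraction of missing values exceeds the threshold."""
    fractions = df.isnull().mean()
    return fractions[fractions > threshold].sort_values(ascending=False)

def remove_uf(df: pd.DataFrame, feature_names) -> pd.DataFrame:
    """Return a new dataset with the listed features removed."""
    return df.drop(columns=list(feature_names))
```

For example, remove_uf(df, missing_above_threshold(df, 0.45).index) drops every feature missing more than 45% of its values.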

RoxanneXin commented 6 years ago

I really like your idea, especially "finding highly correlated features". Can we also use the Spearman method? Pearson only works for linear relationships between variables (predictors).

omidkj commented 5 years ago

@RoxanneXin That's a good plan!
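For reference, a minimal sketch (pandas assumed; the function name is hypothetical) that supports either correlation method:

```python
import pandas as pd

def highly_correlated(df: pd.DataFrame, threshold: float = 0.95,
                      method: str = "spearman") -> list:
    """Feature pairs whose absolute correlation exceeds the threshold.

    `method` may be "pearson" (linear relationships) or "spearman"
    (rank-based, which also captures monotonic nonlinear relationships).
    """
    corr = df.select_dtypes("number").corr(method=method).abs()
    cols = corr.columns
    pairs = []
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if corr.iloc[i, j] > threshold:
                pairs.append((cols[i], cols[j], corr.iloc[i, j]))
    return pairs
```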