agshruti12 opened 1 month ago
I notice that not all the NER tests pass --- but this is because the feature isn't perfect! Would it be possible to run the feature on the full test dataset in order to get metrics (e.g., precision/recall), but then only run the test on a subset of the NER features that we know are supposed to work? That way, we won't have all the tests return as 'failing' ...
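One way to structure this (a hypothetical sketch, not the project's actual test code — `precision_recall`, the entity sets, and the known-good subset are all invented for illustration) is to compute precision/recall over the full dataset for reporting, but only assert on the subset the feature is known to handle:

```python
def precision_recall(predicted, expected):
    """Compute precision and recall over sets of predicted/expected entities."""
    tp = len(predicted & expected)                       # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(expected) if expected else 0.0
    return precision, recall

# Stand-ins for real NER output over the full test dataset.
full_expected = {"Alice", "Bob", "Acme Corp", "Paris", "2021"}
full_predicted = {"Alice", "Bob", "Acme Corp", "London"}

# Report metrics over the FULL dataset, even though some cases are known to fail...
precision, recall = precision_recall(full_predicted, full_expected)
print(f"precision={precision:.2f} recall={recall:.2f}")

# ...but only assert on the known-good subset, so the suite doesn't mark
# every imperfect-but-expected case as a failing test.
KNOWN_GOOD = {"Alice", "Bob", "Acme Corp"}
assert KNOWN_GOOD <= full_predicted, "Regression on known-good NER cases!"
```

If the tests use pytest, another option is to mark the cases that are expected to fail with `pytest.mark.xfail`, which records them without failing the suite.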
**Pull Request Template:** If you are merging in a feature or other major change, use this template to check your pull request!

## Basic Info
What's this pull request about?

## Feature Documentation
Did you document your feature? Make sure you do the following before you open your pull request!

## Code Basics
- [ ] My feature is named `my_feature`, NOT `myFeature` (camel case).
- [ ] My feature is in a file called `NAME_features.py`, where NAME is the name of my feature.
- [ ] My feature file is located in `feature_engine/features`.

## Testing
The location of my tests is here:

If you check all the boxes above, then you are ready to merge!