Closed jeff1evesque closed 5 years ago
One approach is to reindex the values, possibly in increments of 2. Parts of speech (POS) most closely related to questions would be assigned the largest values (the increment need not be 2), while the least related (including stop words) would receive the lowest. Finally, a logarithmic function could smooth out the resulting scale.
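A minimal sketch of the idea, assuming a hypothetical ranking of Penn Treebank tags (the specific tags and increments below are illustrative, not the project's actual assignment):

```python
import math

# Hypothetical ranking: tags most indicative of questions get the
# largest raw values; weakly informative tags get the smallest.
pos_rank = {
    'WP': 10,   # wh-pronoun ("who", "what")
    'WRB': 8,   # wh-adverb ("how", "where")
    'MD': 6,    # modal ("can", "would")
    'VB': 4,    # base-form verb
    'DT': 2,    # determiner (weakly informative)
}

# Smooth the arbitrary increments with a logarithm so the gaps between
# high-ranked tags shrink relative to those between low-ranked ones.
pos_scale = {tag: round(math.log(1 + value), 4) for tag, value in pos_rank.items()}

print(pos_scale)
```

The relative ordering survives the transform; only the spacing between values is compressed.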
We don't have time to explore the different combinations of POS (positions relative to one another) with different increment values, then check whether a logarithmic function is suitable. Instead, we'll apply the simplest solution, which assumes most sentences are 20-30 words in length. So, we'll allow 40 words: sentences under this limit will be padded with a fixed constant, and sentences with more than 40 words will be truncated after the 40th word. In a future extension, applying a logarithmic function could help ease the padding constant. However, the current estimate of mid-74% suffices.
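The pad-or-truncate step can be sketched as follows; the 40-word limit comes from the discussion above, while the padding constant of 0 and the function name are assumptions:

```python
MAX_WORDS = 40   # assumed fixed sentence length from the discussion above
PAD_VALUE = 0    # assumed padding constant

def pad_or_truncate(encoded, max_words=MAX_WORDS, pad_value=PAD_VALUE):
    """Drop tokens after max_words; right-pad shorter sentences with pad_value."""
    return encoded[:max_words] + [pad_value] * max(0, max_words - len(encoded))

short = pad_or_truncate([5, 3, 7])          # padded out to 40 values
truncated = pad_or_truncate(list(range(50)))  # cut back to 40 values
print(len(short), len(truncated))
```

Every sentence then maps to a vector of exactly 40 values, regardless of its original length.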
We need to generalize the implementation of the QuestionXXX dataset. Specifically, we need to be able to handle any `xxx` number of parts of speech. Additionally, we may consider applying a logarithmic function to our arbitrary `penn_scale` assignment.

Lastly, we may need to abstract the logic required to generate the above classifier into utility functions outside the Jupyter notebook. We can then import them back into the notebook for use.
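A rough sketch of that abstraction, assuming the helpers live in a plain module on the Python path (the module name `question_utils` and its contents are hypothetical; the temporary directory here only simulates the project layout so the snippet is self-contained):

```python
import importlib
import pathlib
import sys
import tempfile

# Hypothetical utility module: in practice this would be a .py file
# committed alongside the notebook, not generated at runtime.
module_source = '''
def pad_or_truncate(encoded, max_words=40, pad_value=0):
    """Fix each encoded sentence to exactly max_words values."""
    return encoded[:max_words] + [pad_value] * max(0, max_words - len(encoded))
'''

# Simulate the project layout with a temporary directory on sys.path.
pkg_dir = pathlib.Path(tempfile.mkdtemp())
(pkg_dir / 'question_utils.py').write_text(module_source)
sys.path.insert(0, str(pkg_dir))

# Inside the notebook this reduces to: from question_utils import pad_or_truncate
question_utils = importlib.import_module('question_utils')
print(len(question_utils.pad_or_truncate([1, 2, 3])))
```

Keeping the logic in an importable module lets the same functions be unit-tested outside the notebook while remaining available inside it.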