Hi,
Yes, this is a failure case of the model. It will probably be hard to fix and would require a model for anaphora resolution, which is sadly out of the scope of this repo.
Good luck, Louis
Thanks @louismartin for the quick response.
After reading your paper, I understood that there are four parameters to adjust. I have also played with these, but nothing came of it.
# Load best model
best_model_dir = prepare_models()
recommended_preprocessors_kwargs = {
    'LengthRatioPreprocessor': {'target_ratio': 0.90},
    'LevenshteinPreprocessor': {'target_ratio': 0.75},
    'WordRankRatioPreprocessor': {'target_ratio': 0.75},
    'DependencyTreeDepthRatioPreprocessor': {'target_ratio': 0.75},
    'SentencePiecePreprocessor': {'vocab_size': 10000},
}
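For context, this is roughly how I plug these kwargs into the simplifier, following the pattern of the repo's scripts/generate.py; the import paths and helper names below are copied from my copy of that script and may need adjusting if the repo layout has changed:

from access.preprocessors import get_preprocessors
from access.simplifiers import get_fairseq_simplifier, get_preprocessed_simplifier
from access.utils.helpers import get_temp_filepath, write_lines, yield_lines

# best_model_dir and recommended_preprocessors_kwargs as defined above
preprocessors = get_preprocessors(recommended_preprocessors_kwargs)
simplifier = get_fairseq_simplifier(best_model_dir, beam=8)
simplifier = get_preprocessed_simplifier(simplifier, preprocessors=preprocessors)

# Simplify a file with one tokenized complex sentence per line
source_filepath = get_temp_filepath()
write_lines(['oxygen is a chemical element with symbol o and atomic number 8 .'], source_filepath)
pred_filepath = get_temp_filepath()
simplifier(source_filepath, pred_filepath)
for line in yield_lines(pred_filepath):
    print(line)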
Any suggestions on this?
Hi,
The control parameters will sadly not fix this problem; it comes from what the neural network learned during training, and that can't really be changed.
The model is replacing the proper noun with a pronoun. For example:
Here is the input statement: oxygen is a chemical element with symbol o and atomic number 8 .
The output is: It has the chemical symbol o . It has the atomic number 8 .
The expected output should be something like: oxygen is a chemical element with symbol o, or else: oxygen has the chemical symbol o . oxygen has the atomic number 8 .
How can I stop such replacements? Can anyone please help me figure out this problem?
Thanks, Bhavika
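In case it helps anyone landing here: since full anaphora resolution is out of scope for this repo, one crude workaround is to post-process the output and copy the subject of the source sentence over a sentence-initial pronoun. The sketch below is only an illustration (the subject rule, pronoun list, and function names are invented here, not part of the repo) and will miss many cases:

import re

# Pronouns that the simplifier tends to introduce at the start of a sentence.
SENTENCE_INITIAL_PRONOUNS = {'it', 'he', 'she', 'they', 'this'}

def guess_subject(source_sentence):
    """Very rough guess of the subject: everything before the first copula."""
    match = re.match(r'^(.+?)\s+(is|are|was|were)\b', source_sentence.strip(), flags=re.IGNORECASE)
    return match.group(1) if match else None

def restore_subject(source_sentence, simplified_output):
    """Replace a sentence-initial pronoun in each output sentence with the guessed subject."""
    subject = guess_subject(source_sentence)
    if subject is None:
        return simplified_output
    restored = []
    # The model output is tokenized, so sentences are separated by ' . '
    for sentence in re.split(r'(?<=\.)\s+', simplified_output.strip()):
        tokens = sentence.split()
        if tokens and tokens[0].lower() in SENTENCE_INITIAL_PRONOUNS:
            tokens[0] = subject
        restored.append(' '.join(tokens))
    return ' '.join(restored)

source = 'oxygen is a chemical element with symbol o and atomic number 8 .'
output = 'It has the chemical symbol o . It has the atomic number 8 .'
print(restore_subject(source, output))
# -> oxygen has the chemical symbol o . oxygen has the atomic number 8 .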