Closed: briankinney closed this 6 years ago.
Hiya. This makes me kind of nervous. It's reasonable, but I'm worried it'll break something.
I understand your concern, but don't you think that test failures would reveal broken behavior? If there are classes of failures that are not covered by the tests, then perhaps the test coverage needs to be improved.
I'm not sure whether there is a more cautious approach to fixing this bug, but I think fixing it at this level is "right". At a high level, I feel it is wrong to apply different logic to attributes and properties, because the conceptual difference between them is so small.
Well, it's unclear to me how much of the tool we are really testing with the unit tests we have. :( Unfortunately this is no longer my focus, so launching into a huge set of unit tests is not one of my priorities. Isn't there a way for you to set your own error listener?
It's been a while since I last looked at this, so I'm not sure. If I find the time, I can look into passing in a custom error listener. That opens up a very complicated interface, though, and I'm not sure it's the best idea (good luck documenting it). What about introducing some concept of strictness in validation, where a configuration parameter tells the error listener whether to validate the existence of properties and attributes?
There is an existing mechanism for you to pass in a listener, isn't there? In other words, it is already exposed with the intention of letting you change the way errors are handled. See the STErrorListener interface.
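For anyone finding this later, here is a minimal sketch of that mechanism, assuming the org.stringtemplate.v4 API; the LenientListener class and the decision to swallow missing property/attribute errors are illustrative only, not part of this change:

```java
import org.stringtemplate.v4.STErrorListener;
import org.stringtemplate.v4.misc.ErrorType;
import org.stringtemplate.v4.misc.STMessage;

// Illustrative listener: ignore missing property/attribute errors at
// render time, fail loudly on everything else.
public class LenientListener implements STErrorListener {
    @Override
    public void runTimeError(STMessage msg) {
        // These are the error types this thread is about; skip them
        // instead of aborting the render.
        if (msg.error == ErrorType.NO_SUCH_PROPERTY
                || msg.error == ErrorType.NO_SUCH_ATTRIBUTE) {
            return;
        }
        throw new RuntimeException(msg.toString());
    }

    @Override
    public void compileTimeError(STMessage msg) {
        throw new RuntimeException(msg.toString());
    }

    @Override
    public void IOError(STMessage msg) {
        throw new RuntimeException(msg.toString());
    }

    @Override
    public void internalError(STMessage msg) {
        throw new RuntimeException(msg.toString());
    }
}
```

You would register it per group, e.g. `group.setListener(new LenientListener());`, before rendering.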
For #178
I don't love this pattern and would prefer something in the spirit of log levels. I'm not sure at this time how difficult that would be to implement, but it looks like it might be a huge pain.
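If it helps, the log-level idea could be approximated on top of the existing listener hook without touching the tool itself. A rough sketch, where `Severity` and `ThresholdListener` are hypothetical names and not anything in the library:

```java
import org.stringtemplate.v4.STErrorListener;
import org.stringtemplate.v4.misc.STMessage;

// Hypothetical severity levels; the library has nothing like this today.
enum Severity { IGNORE, WARN, ERROR }

// Sketch: map each callback to a configurable severity instead of a
// hard-coded ignore-or-throw decision.
class ThresholdListener implements STErrorListener {
    private final Severity runtimeLevel;

    ThresholdListener(Severity runtimeLevel) {
        this.runtimeLevel = runtimeLevel;
    }

    private void handle(Severity level, STMessage msg) {
        switch (level) {
            case IGNORE: return;
            case WARN:   System.err.println("warning: " + msg); return;
            case ERROR:  throw new RuntimeException(msg.toString());
        }
    }

    @Override public void runTimeError(STMessage msg)     { handle(runtimeLevel, msg); }
    @Override public void compileTimeError(STMessage msg) { handle(Severity.ERROR, msg); }
    @Override public void IOError(STMessage msg)          { handle(Severity.ERROR, msg); }
    @Override public void internalError(STMessage msg)    { handle(Severity.ERROR, msg); }
}
```

That would keep the complicated-interface problem contained: callers only choose a level, they never have to implement the listener themselves.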