github / codeql

CodeQL: the libraries and queries that power security researchers around the world, as well as code scanning in GitHub Advanced Security
https://codeql.github.com
MIT License

Information exposure alert on intentional input validation exception #16867

Open slominskir opened 5 months ago

slominskir commented 5 months ago

Discussed in https://github.com/github/codeql/discussions/16845

Originally posted by **slominskir**, June 26, 2024:

Is it possible to throw an exception on user input validation failure and use `Exception.getMessage()` to pass the message on to the user, while still allowing the CodeQL scan to pass? I'm referring to a Java project. It appears a level of indirection is required so that `Exception.getMessage()` cannot be used directly. This looks like a false positive, though.

I have a generic `InvalidInputException` that my validation method throws when it finds a user-supplied parameter that is invalid. I'm not revealing any stack trace at all, just using the `Exception.getMessage()` method to carry a message to the user. CodeQL is saying:

```
Information exposure through a stack trace
```

Using `Exception.getMessage()` to carry a message actually intended for the user isn't even a stack trace. At a minimum this should be filed under something like "Information exposure through an Exception". It seems user input validation cannot easily use an exception to notify the caller of a validation failure. Bug or feature?
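For context, here is a minimal Java sketch of the pattern described above. All class, method, and parameter names are hypothetical (not taken from the reporter's project), and it assumes a `javax.servlet` web endpoint; the point is only that a fixed, developer-written validation message is returned to the user via `getMessage()`, which is the kind of sink the "Information exposure through a stack trace" alert reports.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Generic exception thrown when a user-supplied parameter fails validation. */
class InvalidInputException extends Exception {
    InvalidInputException(String message) {
        super(message);
    }
}

public class ExampleServlet extends HttpServlet {

    /** Hypothetical validation: rejects anything that is not a positive integer. */
    private void validate(String value) throws InvalidInputException {
        if (value == null || !value.matches("\\d+")) {
            throw new InvalidInputException("Parameter 'id' must be a positive integer");
        }
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        try {
            validate(req.getParameter("id"));
            resp.getWriter().println("OK");
        } catch (InvalidInputException e) {
            // The message is a fixed validation hint, not a stack trace, but writing
            // e.getMessage() to the response is the data flow the alert complains about.
            resp.getWriter().println(e.getMessage());
        }
    }
}
```

The "level of indirection" mentioned above would presumably amount to carrying the user-facing text in a separate field or constant and writing that to the response, rather than reading it back through `getMessage()`.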
jketema commented 5 months ago

Hi @slominskir,

Thanks for opening this issue. As explained in the discussion where you first raised this, resolving false positives is not a current product priority, but we acknowledge the report and will track it internally so it can be revisited in the future, or sooner if we see repeated reports of the same problem.

slominskir commented 5 months ago

Thanks for taking the time to look at this; I appreciate this product. The ability inside GitHub to dismiss alerts as false positives may explain why fixing false positives is a low product priority: as long as there are ways to silence them, assigning low priority to resolving the underlying rules makes sense. However, if the nuisance of false positives becomes too onerous, the value of the tool suffers.

jketema commented 5 months ago

> However, if the nuisance of false positives becomes too onerous, the value of the tool suffers.

We fully agree with this, and we do try to minimise the number of false positives our tool produces, especially during initial query development. Unfortunately, due to the nature of the tool, false positives cannot be avoided entirely, and sometimes, as in your case, we may simply have missed a particular coding pattern, for example because it didn't show up in our internal testing. Note that we still very much appreciate false positive reports, even if we cannot address them immediately, because in the long term they help us improve our tooling.