jtesta / ssh-audit

SSH server & client security auditing (banner, key exchange, encryption, mac, compression, compatibility, security, etc)
MIT License

Need for an option dedicated to the presentation of evaluations in terminal #247

Closed: Ricky-Tigg closed this issue 3 days ago

Ricky-Tigg commented 4 months ago

Hello. Unlike its command-line counterpart, the web front-end built on top of the command-line tool provides, along with the server/client analysis, the following strings, reproduced here as a model (for the sake of readability, one space has been added between each value and the % symbol; would it be advantageous to remove all spaces between words in the messages?).

F Score: 37/100
**Host keys: 8 of 16 passing (50 %)**
**Key exchanges: 7 of 11 passing (63 %)**
**Ciphers: 5 of 5 passing (100 %)**
**MAC: 3 of 8 passing (37 %)**

The presentation of those four evaluations (in bold) is what brings added value to the analysis, if anything does. Despite this, there is no dedicated command-line option to print them in the terminal.

As for the score value: in this particular model it matches the value obtained for the MAC evaluation, which is the only evaluation with that value, so one can deduce that the score refers to that evaluation. That deduction is no longer possible once the value obtained by the MAC evaluation is also obtained by another evaluation. This demonstrates the need for a descriptive mention attached to the score, e.g. Score: 37 % (refers to the MAC evaluation). Otherwise, since the overall result is 23/40 (57.5 %), that is the value one would logically expect to be printed, not the MAC figure.
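For illustration, here is a minimal sketch of how such a terminal summary with an explicitly labelled overall figure could be printed. This is not ssh-audit code; the category names and counts are simply taken from the model output quoted above.

```python
#!/usr/bin/env python3
# Hypothetical sketch only: print the four category evaluations plus an
# overall pass ratio. It mirrors the model output above, not ssh-audit itself.

categories = {
    "Host keys":     (8, 16),
    "Key exchanges": (7, 11),
    "Ciphers":       (5, 5),
    "MAC":           (3, 8),
}

total_passing = sum(p for p, _ in categories.values())
total_checked = sum(t for _, t in categories.values())

for name, (passing, total) in categories.items():
    # Truncate like the quoted web output (e.g. 7/11 -> 63 %).
    print(f"{name}: {passing} of {total} passing ({int(100 * passing / total)} %)")

# Label the overall figure explicitly so it cannot be confused with any
# single category (here: 23 of 40, i.e. 57.5 %).
print(f"Overall: {total_passing} of {total_checked} passing "
      f"({100 * total_passing / total_checked:.1f} %)")
```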

jtesta commented 3 days ago

Users of the web application are a different audience from users of the command-line tool. It made sense to include more of an overall analysis in the web application, since that audience is less technical. I don't see much value in including the same information in the command-line tool.

As for the scoring, the overall 37/100 is derived using both the number of failures and the quality of the failures, whereas the individual scores for host keys, key exchanges, etc., are based only on the number of failures.
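As an illustration of that distinction (a sketch only, with invented findings and severity weights; it does not reproduce ssh-audit's actual scoring algorithm), a count-based percentage treats every failure equally, whereas a severity-weighted score penalises serious failures more heavily:

```python
# Hypothetical sketch contrasting the two scoring approaches described above.
# The findings and severity weights are invented for illustration and do not
# reflect ssh-audit's real algorithm or data.

findings = [
    {"name": "ssh-rsa host key",       "passed": False, "severity": 3},  # serious
    {"name": "diffie-hellman-group14", "passed": False, "severity": 1},  # minor
    {"name": "curve25519-sha256",      "passed": True,  "severity": 0},
    {"name": "hmac-sha2-256-etm",      "passed": True,  "severity": 0},
]

# Count-based: every failure counts the same (like the per-category figures).
passing = sum(f["passed"] for f in findings)
count_score = 100 * passing // len(findings)

# Severity-weighted: each failure subtracts points according to how bad it is
# (roughly the idea behind an overall figure like 37/100).
penalty = sum(f["severity"] * 10 for f in findings if not f["passed"])
weighted_score = max(0, 100 - penalty)

print(f"Count-based score:       {count_score}/100")     # 50/100
print(f"Severity-weighted score: {weighted_score}/100")  # 60/100
```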