@carjug @alexsoble I put a table here so we can collect our thoughts. Feel free to add more rows if you think there are any we need that I didn't include.
Based on TTS recommendations from Tools and Project Setup
@amymok Thanks so much for getting this started!
Put some initial thoughts in the table, but none of these are too strongly held. Also added separate rows for "Security (dependency scanning)" and "Security (static analysis)" since these can be different tools.
Notes:
Test Coverage + Static Analysis
I think both static analysis and test coverage features fall under Code Climate Quality, which is free for open source: https://codeclimate.com/quality/pricing/
I don't know firsthand whether Code Climate Quality is particularly strong for Python, so we'd want to check on that. We'd also want to double-check the compliance on it, but I do see other 18F/TTS projects using it, and it's recommended under Tools.
Security (dependency scanning)
Here's a link to OWASP Zap if we're interested in exploring for security automation: https://www.zaproxy.org/
Oooooooh yes good catch @carjug! OWASP Zap!
I would kind of put it in https://github.com/18F/OPRE-Unicorn/issues/36 though, "Decide on security analysis tools". The dependency scanning decision should probably go there too.
Static code analysis for security is both code analysis and security analysis, so could go in either category/Issue.
Will wrap up during Eng Co-work 7/14
I propose moving these two rows over to Issue #36, Decide on security analysis tools — does that sound right @carjug @amymok?
Yes that sounds right! 👍
Moved over the dependency scanning and static analysis rows to #36, so this is smaller now -- just linting and code coverage tools.
Decision: backend linter is Flake8. We'll delay the FE linting decision until we actually implement JS on the product. (A minimal Flake8 config sketch is below these notes.)
Test coverage: we're going to do some Slack digging about who has used what tools on their projects and if they have examples we can look at.
note: let's use as few tools as necessary, and only add them when they're useful to us :) so as not to overcomplicate our builds/tests from the start.
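For reference, a minimal Flake8 configuration could look something like this sketch. The specific settings here (line length, ignored codes, excluded paths) are just illustrative assumptions on my part, not decisions we've made:

```ini
# .flake8 (or a [flake8] section in setup.cfg) -- illustrative only;
# the line length, ignored codes, and excluded paths are assumptions.
[flake8]
max-line-length = 88
extend-ignore = E203
exclude = .git,__pycache__,.venv
```

Running `flake8 .` from the repo root would then lint everything with those settings, which keeps the output as plain text in the build log.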
I did some digging on Slack about this and learned useful things! It even led to an Engineering Practices Guide update PR! https://github.com/18F/development-guide/pull/273 — CodeCov is out, CodeQL is in ...
tl;dr Since we want to move development over to our partners' infra as soon as possible, if we want to use a third-party tool to visualize things like test coverage, we'd need to make sure it's approved/allowed by our partners' security compliance teams.
Code Climate Quality has a GSA ATO, but that doesn't mean it would be accepted at another agency.
Another option would be to not visualize our test coverage metrics at all; we could just have plain-text results for test coverage, like we'll have for code linting. That would meet @carjug's "as few tools as necessary" preference, which I think is a really good one!
Does that mean this is ready for ADR writing time? :)
@amymok Hmmmmm, what would that ADR say?
My personal thought at this moment is that we should not choose or use any third-party external services to help with code analysis. It seems like we have consensus on Flake8 for Python, and we could use something like Coverage.py to measure code coverage (quick sketch below): https://coverage.readthedocs.io/en/coverage-5.5/.
We could write an ADR for Flake8 and Coverage.py. But I think choosing specific Python libraries/linters might not rise to the level of writing an ADR, these aren't big-picture decisions that influence our overall system architecture.
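If we go the plain-text route, a minimal Coverage.py invocation might look like the following. I'm assuming pytest as the test runner here, which is an assumption on my part, not something we've decided:

```console
$ pip install coverage pytest
$ coverage run -m pytest   # run the test suite under coverage measurement
$ coverage report -m       # plain-text summary; add --fail-under=N to enforce a threshold
```

That gives us coverage results in the build log with no external service involved.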
Oh okay, I may have misunderstood the conversation earlier; I thought we were grouping them to write an ADR. Okay, that makes sense: they're not the overall system architecture. Thanks for the clarification.
@amymok No worries! Yeah, I could go either way on this one but I think we should save ADRs for big language // framework // database decisions, or anything that involves using a third-party external service or data source.
So given that the decision has been made and agreed upon, what do we think about moving this to done, @amymok @alexsoble?
I'm good with that @carjug!
Sounds good to me! Moving it~
We want to decide on code analysis tools that we can run on our code prior to merging and/or run periodically. These tools would run automatically when a PR is created and merged.
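To make that concrete, here's a rough sketch of what an automated check could look like if we end up running Flake8 and Coverage.py via GitHub Actions. The file name, Python version, and branch name are all assumptions, not decisions:

```yaml
# .github/workflows/checks.yml -- a sketch, not a decision; names and versions are assumptions
name: Code checks
on:
  pull_request:
  push:
    branches: [main]

jobs:
  lint-and-coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.9"
      - run: pip install flake8 coverage pytest
      - run: flake8 .                  # linting
      - run: coverage run -m pytest    # tests under coverage measurement
      - run: coverage report -m        # plain-text coverage results in the job log
```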