HHS / OPRE-OPS

ACF's OPRE OPS product. Code name Unicorn.

Decide on code analysis tools #35

Closed carjug closed 3 years ago

carjug commented 3 years ago

We want to decide on code analysis tools that we can run on our code prior to merging and/or run periodically. These tools would run automatically when a PR is created or merged.

| Type of Check | Alex | Carly | Amy |
| --- | --- | --- | --- |
| Linter | Python: Flake8 | Python: Flake8 | Python: Flake8; JS: eslint? |
| Test Coverage | Code Climate Quality? | No opinion | CodeCov |
amymok commented 3 years ago

@carjug @alexsoble I put a table here so we can put our thoughts in. Feel free to add more if you think there are more we need that I didn't put here.

amymok commented 3 years ago

Based on TTS recommendations from Tools and Project Setup

alexsoble commented 3 years ago

@amymok Thanks so much for getting this started!

alexsoble commented 3 years ago

Put some initial thoughts in the table, but none of these are too strongly held. Also added separate rows for "Security (dependency scanning)" and "Security (static analysis)" since these can be different tools.

Notes:

Test Coverage + Static Analysis

I think both static analysis and test coverage features fall under Code Climate Quality, which is free for open source: https://codeclimate.com/quality/pricing/

I don't know firsthand whether Code Climate Quality is particularly strong for Python; I'd want to check on that. I'd also want to double-check its compliance status, but I do see other 18F/TTS projects using it, and it's recommended under Tools.

Security (dependency scanning)

carjug commented 3 years ago

Here's a link to OWASP Zap if we're interested in exploring for security automation: https://www.zaproxy.org/

alexsoble commented 3 years ago

Oooooooh yes good catch @carjug! OWASP Zap!

alexsoble commented 3 years ago

I would kind of put it in https://github.com/18F/OPRE-Unicorn/issues/36 though, "Decide on security analysis tools". The dependency scanning decision should probably go there too.

Static code analysis for security is both code analysis and security analysis, so could go in either category/Issue.

ninamak commented 3 years ago

Will wrap up during Eng Co-work 7/14

alexsoble commented 3 years ago

I propose moving these two rows over to Issue #36, Decide on security analysis tools — does that sound right @carjug @amymok?

amymok commented 3 years ago

Yes that sounds right! 👍

alexsoble commented 3 years ago

Moved over the dependency scanning and static analysis rows to #36, so this is smaller now -- just linting and code coverage tools.

carjug commented 3 years ago

Decision: the backend linter will be Flake8. We'll delay the FE linting decision until we actually implement JS on the product.
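For reference, a minimal Flake8 configuration could look like the sketch below. The filename, line-length limit, and exclude list are illustrative assumptions, not project decisions:

```ini
; .flake8: hypothetical starter config; all values are illustrative
[flake8]
max-line-length = 100
exclude = .git,__pycache__,build,dist
```

Flake8 also reads this section from `setup.cfg` or `tox.ini`, so it could live alongside other tool config instead of in its own file.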

Test coverage: we're going to do some Slack digging about who has used what tools on their projects and if they have examples we can look at.

note: let's use as few tools as necessary, and only add them when they're useful to us :) so as not to overcomplicate our builds/tests from the start.

alexsoble commented 3 years ago

I did some digging around on this on Slack and learned useful things! Even led to an Engineering Practices Guide update PR! https://github.com/18F/development-guide/pull/273 — CodeCov is out, CodeQL is in ...

alexsoble commented 3 years ago

tl;dr Since we want to move development over to our partners' infra as soon as possible, if we want to use a third-party tool to visualize things like test coverage, we'd need to make sure it's approved/allowed by our partners' security compliance teams.

Code Climate Quality has a GSA ATO, but that doesn't mean it would be accepted at another agency.

Another option would be to not visualize our test coverage metrics at all; we could just have plain-text results for test coverage, like we'll have for code linting. That would meet @carjug's "as few tools as necessary" preference, which I think is a really good one!

amymok commented 3 years ago

Does that mean this is ready for ADR writing time? :)

alexsoble commented 3 years ago

@amymok Hmmmmm, what would that ADR say?

My personal thought at this moment is that we should not choose or use any third-party external services to help with code analysis. It seems like we have consensus on Flake8 for Python, and we could use something like Coverage.py to measure code coverage: https://coverage.readthedocs.io/en/coverage-5.5/.

We could write an ADR for Flake8 and Coverage.py. But I think choosing specific Python libraries/linters might not rise to the level of writing an ADR; these aren't big-picture decisions that influence our overall system architecture.
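To make the Flake8 + Coverage.py combination concrete, a CI sketch along these lines might look like the following. This assumes GitHub Actions and pytest as the test runner; the file name, job name, and Python version are all assumptions, not decisions from this thread:

```yaml
# .github/workflows/checks.yml: hypothetical sketch, not a committed workflow
name: code-checks
on: [pull_request]

jobs:
  lint-and-coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.9"
      - run: pip install flake8 coverage pytest
      # Linting: plain-text output in the job log, no third-party service
      - run: flake8 .
      # Coverage: plain-text report, again with no external visualization tool
      - run: coverage run -m pytest
      - run: coverage report -m
```

Both checks fail the build on a nonzero exit code, which keeps the "as few tools as necessary" approach: results live in the job log rather than in an external dashboard.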

amymok commented 3 years ago

Oh okay, I may have misunderstood the earlier conversation; I thought we were grouping them to write an ADR. That makes sense, right, they're not overall system architecture. Thanks for the clarification.

alexsoble commented 3 years ago

@amymok No worries! Yeah, I could go either way on this one but I think we should save ADRs for big language // framework // database decisions, or anything that involves using a third-party external service or data source.

carjug commented 3 years ago

So given that the decision has been made and agreed upon, what do we think about moving this to done, @amymok @alexsoble?

alexsoble commented 3 years ago

I'm good with that @carjug!

amymok commented 3 years ago

Sounds good to me! Moving it~