Open CalvinRodo opened 6 years ago
👍 The only one I think might cause hurdles is "Signed Commits", since that might be beyond casual contributors.
@nschonni I agree.
This control is mainly for the comfort of my organization. But I may change it.
http://git.661346.n2.nabble.com/GPG-signing-for-git-commit-td2582986.html#a2583316
That is an interesting read on Linus Torvalds' opinion on signing commits.
I think it would be better if it was changed to signing tags.
So something like:
Releases should be tagged and signed by a maintainer. This should ensure the source code is not modified after the fact.
That sounds like a good practice. Although I usually tell people that tags should be immutable, some people still want to delete or force-push them.
Maybe GoC X.0 plan will have the next Entrust replacement work as a Git signing key 😜
Yeah, that's an issue for sure. The reason for signing is that I've had comments from folks on our IT security teams about the risk of hosting source code on a platform we don't control. There are concerns that a malicious actor could modify our code without our knowledge.
The wording for the peer review seems too harsh. The point of code review is to ensure code quality, not to lay blame for bugs.
In the latest version of the requirements, we're removing the security aspect, since existing policy instruments already cover it, and will instead try to address it and provide guidance through the playbook.
The policy instrument (the current document) should set out the rules, whereas the Playbook should provide all the required guidance and clarification for nuances, so as not to create redundancies.
That's the intent so far. Happy to hear suggestions!
@LaurentGoderre good point, the goal isn't to spread blame but to spread responsibility: if someone is gone and there is a question about some code, the reviewer should be able to answer it.
Yeah, that makes sense.
And we're actually going back and adding sections on Security, as some of the specific topics found in the current discussion are not necessarily covered.
There should also be a formal process for periodic reviews, just to ensure the quality of commits. SCA is nice, but having an external review by a security assessor, or someone trained by one, is likely a good idea.
@keithdouglas
So the idea would be to review some commits or to review the project itself?
I don't think external reviews are a horrible idea, but I am curious as to what they would be reviewing for.
Do you have examples of things they could look for that, say, an automated process couldn't catch?
@CalvinRodo: Both commits and the overall stance of the project make sense to me. The idea would be to do little bits so the big picture is easier later.
As for external reviews: psychologists talk about what is called "confirmation bias". This is the very human weakness of only seeking evidence in something's favour rather than also looking for disconfirming evidence. The role of a security assessor, in any context, the way I see it, is to help avoid confirmation bias - and that's why they are at arm's length.
As for examples: it depends on the tool chain, but detecting vulnerabilities is an undecidable process (in the sense of computability theory), so having lots of different "ways to think about it" helps. Specific examples where the tools tend to do badly are at boundaries between systems or processes: things like weak or broken authentication, possible race conditions, ...
I think periodic audits of code quality by an external party are fine, just as long as we don't make them a mandatory part of a release.
I can't think of a better way to slow something down than to bring in someone with little to no knowledge of the business or the context of the work to review every single line of code or commit.
We've thought a lot more since my earlier remarks and are working internally on a more Agile version of the SA&A process. Note that regardless, the security controls for the project, etc., are not really up to the dev teams at all - they can certainly go "above and beyond", but by the letter of the law (so to speak) IT security has to decide what is necessary, based on what business needs are present. We're working on a way to have these needs partially determined up front so they can be added to backlog items immediately.
@keithdouglas security can't block development practices that are universally recognized in the name of security. ITSec should focus on helping dev teams improve security instead of acting as a gatekeeper, especially since security is an ongoing effort, not something that should only happen at checkpoints.
@LaurentGoderre, that's precisely why we want a version of the "business needs for security" to be started as soon as an Idea document or any other little piece of governance for a project is available - thus reducing what is done at checkpoints.
As for "universally recognized", I agree, but provided it is done relative to the security stance of what is being developed. And for that one needs a risk assessment (and a background context) and hence also a security assessor to determine these.
I've been documenting some Security Controls for Open Source Code, figured I'd share it here since it might be useful for people in other organizations.
These are the controls that I'm putting in place for my project and may not be needed for everyone's work.
Signed Commits
Only signed commits can be merged into the master branch.
Peer Code Review
All pull requests must be reviewed by 2 team members (1 senior member is mandatory). If you review code, you are as responsible for that code as the person who wrote it.
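On GitHub, part of this can be enforced mechanically: branch protection can require 2 approving reviews, and a CODEOWNERS file can route every change to a senior group. The org and team names below are hypothetical:

```
# .github/CODEOWNERS (hypothetical names; GitHub-specific convention)
# Every path requires an approval from the senior-devs team:
*    @example-org/senior-devs
```

Combined with the "Require review from Code Owners" branch-protection setting, this makes the senior review structurally unavoidable rather than a matter of habit.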
Static Code Analysis
All code must be run through a static analysis tool. This should run as a pre-commit hook or on pull requests to master.
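A hedged sketch of the pre-commit-hook option; `shellcheck` is a stand-in for whatever analyser fits your stack, and the repository and file are throwaways created just for the demo:

```shell
# Sketch: wire a static analyser into .git/hooks/pre-commit.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
cat > .git/hooks/pre-commit <<'HOOK'
#!/bin/sh
# Analyse only the staged shell scripts; do nothing when there are none.
staged=$(git diff --cached --name-only --diff-filter=ACM | grep '\.sh$' || true)
[ -z "$staged" ] && exit 0
if command -v shellcheck >/dev/null 2>&1; then
    # Unquoted on purpose: one argument per staged file.
    shellcheck $staged || { echo "static analysis failed; commit blocked" >&2; exit 1; }
fi
HOOK
chmod +x .git/hooks/pre-commit
printf '#!/bin/sh\necho hello\n' > ok.sh
git add ok.sh
git commit -qm "add ok.sh"    # the hook runs just before the commit lands
```

Hooks are per-clone and easy to skip (`git commit --no-verify`), which is why running the same analyser again on pull requests to master is the stronger of the two options.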
Dependency Scanning
All dependencies should be scanned using a dependency scanning tool, either as a pre-commit/push hook or as a webhook on a pull request.
Dependency Scanning tools
- snyk.io (free for open source software)
- OWASP Dependency-Check (Java and .NET only)
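To make the pull-request variant concrete, here is a hedged sketch of a CI step in GitHub Actions syntax, assuming a Node.js project; `npm audit` stands in for Snyk or OWASP Dependency-Check:

```yaml
# Hypothetical CI job: fail the pull request on known-vulnerable dependencies.
name: dependency-scan
on: pull_request
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm audit --audit-level=high   # or: snyk test / dependency-check
```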
Secret and Credential Policy
You must store secret keys and credentials separately, for example with a secret management system, and inject them into the code. This protects the keys from being accessed by unauthorised staff, and makes it easy to provision different keys for different environments and rotate keys if needed.
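A minimal sketch of the injection idea, with the secret set inline only so the example is self-contained; in practice a secret manager or the deployment platform would populate the environment:

```shell
# Sketch: code reads credentials from the environment, never from the repo.
export DB_PASSWORD="s3cret"                    # stand-in for an injected secret
: "${DB_PASSWORD:?DB_PASSWORD must be set}"    # fail fast if injection failed
connect_db() {
    # A real client would use the value directly; never echo or log it.
    echo "connecting as app user (password length: ${#DB_PASSWORD})"
}
connect_db
```

Because the value only lives in the environment, rotating it or swapping per-environment credentials needs no code change.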
2 Factor Authentication (2FA)
2FA is mandatory for members of the development team if it is supported by the source control platform.
Security Audits
The project will be routinely audited for any unallowed data, and run through several credential detection tools, such as GitRob and TruffleHog.
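For illustration, the crudest possible version of such a sweep with plain grep; GitRob and TruffleHog go far beyond this (entropy analysis, full history scans), and the sample file is fabricated (the key is AWS's documented example value):

```shell
# Sketch: scan a tree for strings shaped like AWS access key IDs.
scan_dir=$(mktemp -d)
printf 'aws_key = "AKIAIOSFODNN7EXAMPLE"\n' > "$scan_dir/config.py"
matches=$(grep -rEl 'AKIA[0-9A-Z]{16}' "$scan_dir" || true)
if [ -n "$matches" ]; then
    echo "possible AWS access key in: $matches"
fi
```

A real audit scans the full git history, not just the working tree, since a deleted secret is still retrievable from old commits.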
Security Disclosure Policy
A security disclosure policy is needed to outline the steps to take if a vulnerability is found in our source code.
Accidental Credential Disclosure Policy
Come up with a procedure to follow when we accidentally disclose a secret of some kind.
Example
- Assume the key is compromised
- Revoke credentials if possible
- Generate new keys
- Determine the potential impact of the disclosure
- If Protected B or above information could be disclosed, follow your organization's guidelines on an accidental privacy disclosure
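As a tiny sketch of the "generate new keys" step (paths and key type are arbitrary; revocation itself depends on the credential's provider or PKI):

```shell
# Sketch: create the replacement key before revoking the compromised one,
# so there is no window with no valid credential at all.
set -e
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -q -f "$keydir/replacement_key"
echo "new public key fingerprint:"
ssh-keygen -lf "$keydir/replacement_key.pub"
```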
References
Title: 10 GitHub Security Best Practices
Link: https://snyk.io/blog/ten-git-hub-security-best-practices/
Title: Making source code open and reusable
Link: https://www.gov.uk/service-manual/technology/making-source-code-open-and-reusable
Title: Security considerations when coding in the open
Link: https://www.gov.uk/government/publications/open-source-guidance/security-considerations-when-coding-in-the-open#deal-with-security-vulnerabilities
Title: Tensorflow SECURITY.md File
Link: https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md
Title: Apache Storm SECURITY.md file
Link: https://github.com/apache/storm/blob/master/SECURITY.md