lf-edge / edge-home-orchestration-go

Home Edge Project in LF Edge - Edge Orchestration for home edge devices, enabling smart home use cases.
https://www.lfedge.org/projects/homeedge/
Apache License 2.0

[Consideration] Principle of adopting analysis tools #199

Closed · MoonkiHong closed this issue 2 years ago

MoonkiHong commented 3 years ago

Originally suggested by @mgjeong (https://github.com/lf-edge/edge-home-orchestration-go/pull/193#issuecomment-742469991)

We need to establish a principle for adopting analysis tools for security vulnerabilities, code quality, and so on, with the consent of all our community developers. One good example is LGTM, which is currently adopted for validating security vulnerabilities in our project.

Once we have established the principle, we will decide whether to keep our currently adopted tools or to switch to alternatives (including for LGTM).

tdrozdovsky commented 3 years ago

In my opinion, good code analysis systems will help us improve the quality of our product. Before adding any system, we can raise an issue and discuss it. I think a good reference point would be to adopt the practices described by the OpenSSF. I also suggest analyzing the following checklist taken from that organization:

The following checks are all run against the target project:

| Name | Description |
| --- | --- |
| Security-MD | Does the project contain a security policy? |
| Contributors | Does the project have contributors from at least two different organizations? |
| Frozen-Deps | Does the project declare and freeze dependencies? |
| Signed-Releases | Does the project cryptographically sign releases? |
| Signed-Tags | Does the project cryptographically sign release tags? |
| CI-Tests | Does the project run tests in CI, e.g. GitHub Actions, Prow? |
| Code-Review | Does the project require code review before code is merged? |
| CII-Best-Practices | Does the project have a CII Best Practices Badge? |
| Pull-Requests | Does the project use Pull Requests for all code changes? |
| Fuzzing | Does the project use OSS-Fuzz? |
| SAST | Does the project use static code analysis tools, e.g. CodeQL, SonarCloud? |
| Active | Did the project get any commits and releases in the last 90 days? |
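For reference, these checks can also be run automatically in CI. A minimal sketch of a GitHub Actions workflow using the OSSF Scorecard action might look like the following (the file name, branch name, and action versions here are illustrative assumptions, not a verified configuration):

```yaml
# .github/workflows/scorecard.yml -- illustrative sketch only
name: Scorecard analysis
on:
  push:
    branches: [ master ]    # branch name is an assumption
  schedule:
    - cron: '30 2 * * 1'    # weekly re-check

permissions: read-all

jobs:
  analysis:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository so Scorecard can inspect it
      - uses: actions/checkout@v2
      # Run the OSSF Scorecard checks and emit SARIF results
      - uses: ossf/scorecard-action@v2
        with:
          results_file: results.sarif
          results_format: sarif
      # Keep the results as a build artifact for review
      - uses: actions/upload-artifact@v2
        with:
          name: scorecard-results
          path: results.sarif
```

Running the checks on a schedule (not only on push) helps catch checks like Active and Frozen-Deps that can regress without any code change.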

Regarding the analysis systems already supported in our project: the LGTM system (added by @MoonkiHong) has already shown good results and helps us write more competitive code, and CodeQL (added by @t25kim) helps track quality at the pull request stage.

@MoonkiHong @t25kim thank you very much for your contributions to improving our project.
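As a point of reference for how CodeQL tracks quality at the pull request stage, a workflow along these lines triggers analysis on every PR (a minimal sketch; the file name, branch name, and action versions are assumptions):

```yaml
# .github/workflows/codeql.yml -- illustrative sketch only
name: CodeQL
on:
  pull_request:
    branches: [ master ]    # branch name is an assumption

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      # Check out the code under review
      - uses: actions/checkout@v2
      # Initialize CodeQL for the project's language (Go)
      - uses: github/codeql-action/init@v1
        with:
          languages: go
      # Build the project so CodeQL can trace compilation
      - uses: github/codeql-action/autobuild@v1
      # Run the queries and upload alerts to the PR
      - uses: github/codeql-action/analyze@v1
```

Because the analysis runs on `pull_request`, alerts surface on the PR itself before merge, which matches the review-time quality gate described above.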

tiokim commented 3 years ago

@tdrozdovsky Thank you for the valuable idea.

> In my opinion, good code analysis systems will help us improve the quality of our product. Before adding any system, we can raise the issue and discuss it.

I fully agree with your idea of discussing a tool's usability after opening an issue. I think it would be nice to apply the tool during a PoC period and decide whether it should be adopted by checking its feasibility.

tiokim commented 3 years ago

The following is the result of running the OpenSSF Scorecard against our project.

RESULTS
-------
Active: Pass 10
CI-Tests: Pass 10
CII-Best-Practices: Pass 10
Code-Review: Pass 10
Contributors: Pass 10
Frozen-Deps: Fail 5
Fuzzing: Fail 10
Pull-Requests: Pass 9
SAST: Fail 9
Security-Policy: Fail 10
Signed-Releases: Fail 10
Signed-Tags: Fail 0

MoonkiHong commented 3 years ago

> @tdrozdovsky Thank you for the valuable idea.
>
> > In my opinion, good code analysis systems will help us improve the quality of our product. Before adding any system, we can raise the issue and discuss it.
>
> I fully agree with your idea of discussing some tools usability after opening an issue. I think it would be nice to apply the tool during the PoC(?) period and see if it should be adopted by checking the feasibility.

@t25kim Fully agree with you. Thank you for your suggestion!!!

> The following is the result of scorecard at OpenSSF.
>
> RESULTS
> -------
> Active: Pass 10
> CI-Tests: Pass 10
> CII-Best-Practices: Pass 10
> Code-Review: Pass 10
> Contributors: Pass 10
> Frozen-Deps: Fail 5
> Fuzzing: Fail 10
> Pull-Requests: Pass 9
> SAST: Fail 9
> Security-Policy: Fail 10
> Signed-Releases: Fail 10
> Signed-Tags: Fail 0

@t25kim This is fantastic! Thank you for your proactive analysis through scorecard from OpenSSF.

tdrozdovsky commented 3 years ago

The checklist that I offered above is just an overview of the Scorecard implementation, an example of whose results was provided by @t25kim (thank you). Therefore, I understand that you (@MoonkiHong, @t25kim) support the direction of using Scorecard in our project. Now let's wait for the opinions of the other development participants.

P.S. I hope that if the other maintainers support this idea, we will implement it step by step together. Thank you again.