thoth-station / core

Using Artificial Intelligence to analyse and recommend Software Stacks for Artificial Intelligence applications.
https://thoth-station.github.io/
GNU General Public License v3.0

[EPIC] [MVP] Improvements to Thoth advises output #434

Open mayaCostantini opened 1 year ago

mayaCostantini commented 1 year ago

Problem statement

As a Python Developer, I would like to have concise information about the quality of my software stack and all its transitive dependencies, so that I get some absolute metrics.

These metrics would be aggregated and compared to metrics for packages present in Thoth's database to provide a global quality metric for a given software stack, possibly for a specific criterion (maintenance, code quality...), in the form of a percentage or a letter score (A, B, C...).

We consider the metrics derived from direct and transitive dependencies to be equally important, so both types of dependencies will carry the same weight in the aggregation.
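A minimal sketch of that equal-weight aggregation, assuming per-package quality scores have already been normalized to the range [0, 1] (the package names and values below are made up):

```python
from statistics import mean


def stack_score(package_metrics: dict[str, float]) -> float:
    """Aggregate per-package scores into a single stack score.

    Direct and transitive dependencies are weighted equally,
    so the stack score is a plain unweighted mean.
    """
    return mean(package_metrics.values())


# Hypothetical normalized scores for direct and transitive dependencies.
metrics = {
    "flask": 0.92,      # direct dependency
    "werkzeug": 0.88,   # transitive dependency
    "jinja2": 0.81,     # transitive dependency
}
print(f"stack score: {stack_score(metrics):.2f}")
```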

Proposal description

  1. Create an ADR w.r.t. implementing the service as "a bot", e.g. a GitHub App, Action, ...?
  2. PoC: implement an experimental thamos flag on the advise command to give users insights about the maintenance of their packages.
  3. Compute metrics for packages present in Thoth's database that will serve as a basis for a global software stack quality score (elaborated below).

Taking the example of OSSF Scorecards, we already aggregate this information in prescriptions, which are used directly by the adviser. However, the aggregation logic in prescriptions-refresh-job only updates prescriptions for packages already present in the repository. We could either aggregate Scorecards data for more packages using the OSSF BigQuery dataset, or have our own tool compute Scorecards metrics on a new package release, which could be integrated directly into package-update-job, for instance. This would most likely consist of a simple script querying the GitHub API and computing the metrics on the project's latest release commit, as in the sketch below.
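A rough sketch of such a script, assuming the scorecard CLI is on the PATH with a GITHUB_AUTH_TOKEN exported, and that its --commit flag pins the analysis to a given SHA (worth verifying against the CLI version in use):

```python
import json
import subprocess

import requests


def latest_release_commit(owner: str, repo: str) -> str:
    """Resolve the commit SHA of a project's latest GitHub release."""
    base = f"https://api.github.com/repos/{owner}/{repo}"
    tag = requests.get(f"{base}/releases/latest", timeout=30).json()["tag_name"]
    # The commits endpoint accepts a tag name as a ref and returns its SHA.
    return requests.get(f"{base}/commits/{tag}", timeout=30).json()["sha"]


def scorecard_for_release(owner: str, repo: str) -> dict:
    """Run the OSSF scorecard CLI pinned to the latest release commit."""
    sha = latest_release_commit(owner, repo)
    out = subprocess.run(
        [
            "scorecard",
            f"--repo=github.com/{owner}/{repo}",
            f"--commit={sha}",
            "--format=json",
        ],
        capture_output=True,
        check=True,
        text=True,
    )
    return json.loads(out.stdout)


if __name__ == "__main__":
    print(scorecard_for_release("psf", "requests"))
```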

  4. Schedule a new job to compute metrics on the aggregated data.
  5. Implement the global scoring logic.

For example, if a software stack is in the 95th percentile of packages with the best development practices (CI/CD, testing...), score it as an "A" for this category, then compute a global score from the individual category scores.
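One possible shape for that scoring logic, with purely illustrative percentile thresholds and grade values:

```python
from bisect import bisect_right


def category_grade(score: float, population: list[float]) -> str:
    """Grade one category score against the package population.

    A score at or above the 95th percentile gets an "A", and so on;
    the thresholds are placeholders, not a decided policy.
    """
    ranked = sorted(population)
    percentile = bisect_right(ranked, score) / len(ranked) * 100
    for threshold, grade in ((95, "A"), (75, "B"), (50, "C")):
        if percentile >= threshold:
            return grade
    return "D"


def global_grade(category_grades: dict[str, str]) -> str:
    """Combine category grades into a global grade via a simple mean."""
    values = {"A": 4, "B": 3, "C": 2, "D": 1}
    avg = sum(values[g] for g in category_grades.values()) / len(category_grades)
    # Return the letter grade whose numeric value is closest to the mean.
    return min(values, key=lambda g: abs(values[g] - avg))


grades = {"maintenance": "A", "code quality": "B", "CI/CD": "A"}
print(global_grade(grades))  # -> "A"
```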

Additional context

Actionable items

If implemented, these improvements will most likely be a way for maintainers of a project to show their users that they use a trusted software stack. AFAICS, this would not provide any actionable feedback to developers about their dependencies.

Acceptance Criteria

To define.

sesheta commented 1 year ago

@mayaCostantini: This issue is currently awaiting triage. If a refinement session determines this is a relevant issue, it will accept the issue by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.
mayaCostantini commented 1 year ago

/sig user-experience
/priority important-soon

mayaCostantini commented 1 year ago

One of the requirements for computing software stack quality scores based on OSSF Scorecards would be to have Scorecards data linked to each project's latest release instead of the repository's head commit SHA. This feature request has already been proposed on the scorecards project side.

What about helping them implement this feature and improving the scorecards cronjob directly, instead of computing this data on our side?

cc @goern

goern commented 1 year ago

This sounds reasonable.

Nevertheless, would we use the data via BigQuery?

mayaCostantini commented 1 year ago

> This sounds reasonable.
>
> Nevertheless, would we use the data via BigQuery?

Yes, but the information will already be computed in the dataset, so we will not need to associate the head commit SHA with the release ourselves.
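For reference, reading that precomputed data could look roughly like this; the dataset/table name and schema are assumptions based on the public OSSF Scorecard dataset and should be double-checked:

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Assumed public table name; verify the exact project/dataset/table.
TABLE = "openssf.scorecardcron.scorecard-v2_latest"


def fetch_scorecard(repo_url: str) -> list[dict]:
    """Fetch the latest Scorecard results for one repository from BigQuery."""
    client = bigquery.Client()
    query = f"""
        SELECT repo.name AS repo, score, checks
        FROM `{TABLE}`
        WHERE repo.name = @repo
    """
    job = client.query(
        query,
        job_config=bigquery.QueryJobConfig(
            query_parameters=[
                bigquery.ScalarQueryParameter("repo", "STRING", repo_url)
            ]
        ),
    )
    return [dict(row) for row in job.result()]


print(fetch_scorecard("github.com/thoth-station/core"))
```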