kyma-project / test-infra

Test infrastructure for the Kyma project.
https://status.build.kyma-project.io/
Apache License 2.0

Dedicated oci-image-builder task to print built images #11224

Open dekiel opened 2 months ago

dekiel commented 2 months ago

Description

The image-builder binary sets a GitHub output with the list of built images. The list is constructed by applying a regex to log lines fetched from the ADO pipeline execution. The regex expects to find an image URI in a string that starts with '--image-to-sign'. This string is present in the logs because we use the image-builder flag 'image-to-sign'.
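The extraction described above can be sketched as follows. This is a minimal illustration, not the actual image-builder code: the exact regex, log format, and function name (`extractImages`) are assumptions made for the example.

```go
package main

import (
	"fmt"
	"regexp"
)

// extractImages scans raw ADO log lines for the '--image-to-sign' flag
// and collects the image URIs that follow it. The pattern is an
// illustrative assumption; the real regex lives in image-builder.
func extractImages(logLines []string) []string {
	re := regexp.MustCompile(`--image-to-sign[= ](\S+)`)
	var images []string
	for _, line := range logLines {
		if m := re.FindStringSubmatch(line); m != nil {
			images = append(images, m[1])
		}
	}
	return images
}

func main() {
	logs := []string{
		"some unrelated log line",
		"running signer --image-to-sign registry.example.com/app:1.2.3",
	}
	fmt.Println(extractImages(logs))
}
```

The fragility the issue describes is visible here: if the pipeline stops emitting the '--image-to-sign' string (for example, because sign-only mode changes), the regex silently matches nothing and the GitHub output ends up empty.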

The oci-image-builder pipeline should have a dedicated step that produces the log entry used to build the built-images output. Having a dedicated step is a way to state explicitly that the pipeline provides this data. The purpose of the task will be clear and well understood, making it less likely that someone accidentally breaks the reusable workflow output.

Using an image-builder flag name as the string to match creates an implicit dependency between setting the GitHub output and the sign-only mode in the tool. This dependency is not documented and can easily be overlooked. A change in sign-only mode or in the oci-image-builder pipeline may leave the image-builder binary unable to find the expected string.

Acceptance Criteria

KacperMalachowski commented 2 months ago

I think any way of relying on ADO logs is inconvenient. We should define a dedicated format and provide e.g. an artifact in that format that's available through a standard API call.

dekiel commented 1 month ago

@KacperMalachowski what is inconvenient about using ADO logs? We always get these logs in the client, so why not use them? This approach has been working fine until now. Why do we need to add another component for storing artifacts and calling an API, and increase complexity by doing so?

KacperMalachowski commented 1 month ago

@dekiel Artifact storage can be handled by Azure directly, with a dedicated function that already exists in the ADO client package provided by Microsoft. A retry mechanism is also already well implemented on our side.

With logs we need to use a regex to find the section containing the data we need, including some mechanism to make sure it's not duplicated, etc. With an artifact we can simply create a file in the pipeline, upload it, and then download it when we need it. It can also be in any format.