Closed fridex closed 2 years ago
/triage accepted
/assign @harshad16
/priority critical-urgent
The issue here might be the deadline at the workflow level or at the graph-sync task level (both currently use default values). The goal is to identify whether the graph-sync task is killed by the workflow deadline or by the task deadline, and increase the corresponding one.
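As a sketch of the two levels in question (assuming the workflow is an Argo Workflow; field names follow Argo's spec, and the image name and values here are illustrative, not taken from the actual template):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: graph-sync-
spec:
  # Workflow-level deadline: once this many seconds elapse, the whole
  # workflow is terminated, including any running graph-sync task.
  activeDeadlineSeconds: 3000
  entrypoint: graph-sync
  templates:
    - name: graph-sync
      # Task (template)-level deadline: terminates only this task's pod.
      # If this fires first, only the graph-sync step fails.
      activeDeadlineSeconds: 3000
      container:
        image: quay.io/example/graph-sync-job:latest  # illustrative image
```

Whichever of the two deadlines fires first is the one that kills the task, so checking which level produced the termination event tells us which value to raise.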
/sig knowledge-graph
The package extraction for the mentioned image was tested. The s2i-pytorch image was successfully extracted and its results synced to the database of the stage cluster. Results from Thoth container images: https://stage.thoth-station.ninja/api/v1/container-images?page=0&per_page=25&os_name=ubi
Result from a successful graph-sync run of the package-extract:
It is now verified on the prod cluster as well: https://khemenu.thoth-station.ninja/api/v1/container-images?page=0&per_page=25
The graph-sync active deadline is set to 50 minutes, and the graph sync for such container images seems to take about 25 minutes: https://github.com/thoth-station/thoth-application/blob/5bba76b4ac23f677fc4aa61eb13cd60954ac14db/graph-sync/base/openshift-templates/graph-sync.yaml#L41
As no changes were made on our side, I think the earlier failures were due to an issue on the database side, which appears to be fixed by the updated storage module and database schema. Please let me know if we should look at other images.
Closing this issue as the image is now being successfully analysed. Thanks everyone for the effort :100:
Describe the bug
Some container image analysis results fail to sync. The reason is an exceeded deadline, as the computed results for these container images are quite large.
To Reproduce
Steps to reproduce the behavior: analyse the container image
quay.io/thoth-station/s2i-pytorch-notebook:v0.0.2
and sync the results to the database.

Expected behavior
Graph sync should succeed.