petermetz opened 2 weeks ago
@petermetz Just from looking at the title, I think it failed because the GitHub API was rate limited, leaving the variable commitMessagesMetadata null (so forEach is not defined on it). But I will have a deeper look and we will fix it ASAP. (The skip should not happen under any circumstances.)
CC @zondervancalvez
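The failure mode described above (calling forEach on a null value when the API request fails) could be guarded against with an explicit check. A minimal TypeScript sketch; the names `checkCommits` and `CommitMetadata` are illustrative, since the actual workflow code isn't shown here:

```typescript
// Hypothetical sketch: if the GitHub API call is rate limited, the
// response may be an error object rather than a commit list, leaving
// the metadata variable null/undefined.
interface CommitMetadata {
  message: string;
}

function checkCommits(commitMessagesMetadata: CommitMetadata[] | null): void {
  if (!Array.isArray(commitMessagesMetadata)) {
    // Fail loudly instead of skipping: a null response means the API call
    // itself failed (e.g. rate limiting), not that there are no commits.
    throw new Error(
      "commitMessagesMetadata is not an array - GitHub API call likely failed",
    );
  }
  commitMessagesMetadata.forEach((c) => {
    console.log(c.message);
  });
}
```

The key design point is that an API failure surfaces as a hard error rather than silently skipping the check.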
I ran it again @petermetz and it seems to work fine now. It was most likely either a rate limit issue or an intermittent GitHub API endpoint issue. (But yes, we will update the code so that it doesn't skip anyway.) https://github.com/hyperledger-cacti/cacti/actions/runs/11657574657/job/32455943052?pr=3604
@jagpreetsinghsasan If it's intermittently failing due to rate limits then we either have to call it quits on this (which is acceptable to me, at least we tried) or refactor it in a way that it does not depend on the GitHub API (see my earlier idea about using the local git installation to determine the commit metadata).
Reasoning/general thoughts: If the check is not stable/reliable due to rate limits, then it's going to block PRs by mistake (false negatives), people will get confused about what to do or how to fix it, and we'll spend a lot of time explaining it to them and sending messages back and forth even for simple contributions. As the number of pull requests increases with community growth, the problem will only get worse. And if we decide to use the GitHub API for other jobs as well, it will be worse still: the GitHub API bottleneck, even if it is only an occasional problem right now, will hurt more over the longer term.
@petermetz As you said earlier, we are now moving ahead by checking the commits locally (after actions/checkout is executed). This way, we can remove the dependency on the API entirely. Also, since the rate limit is 1000/hr as specified here, it was most likely an intermittent issue on GitHub's side (which is also no longer relevant, as @zondervancalvez is now modifying the code to use the local commits instead).
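A sketch of the local-git approach, under assumptions: the commit range `origin/main..HEAD` and the function names are illustrative, not the actual fix being worked on.

```typescript
import { execSync } from "node:child_process";

// Split raw `git log --format=%s` output into one subject line per commit.
function parseCommitSubjects(raw: string): string[] {
  return raw.split("\n").filter((line) => line.length > 0);
}

// After actions/checkout has fetched the PR branch, commit messages can be
// read from the local clone instead of the GitHub API, so rate limits no
// longer apply. The default range is an assumption about the base branch.
function getLocalCommitMessages(range = "origin/main..HEAD"): string[] {
  const raw = execSync(`git log --format=%s ${range}`, { encoding: "utf8" });
  return parseCommitSubjects(raw);
}
```

Note that actions/checkout fetches a shallow clone by default (fetch-depth: 1), so the workflow would also need a deep enough fetch-depth for the commit range to be present locally.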
The new CI skip-check job broke the CI completely. It crashes by default and then skips all the tests by default, which allows merging pull requests without ANY test coverage.
@jagpreetsinghsasan Let's prioritize this as an urgent one please. The easiest and quickest fix I see for now is to just roll back the pull request entirely and put the task back in the backlog for the team.
https://github.com/hyperledger-cacti/cacti/actions/runs/11657574657/job/32455341799?pr=3604