Closed. doudou closed this pull request 2 years ago.
Could you please explain what you are trying to improve in the end-user experience and/or your use case? I think it's reasonable to choose to always build the head commit but, in the context of the daemon, always building the merge commit sounds more like a bug to me since that might not be available or may not be what the user expects. I also don't understand why you override GitHub's mergeable status... I think you may build an outdated commit if a new push has conflicts with the base branch (in which case mergeable will be false and merge_commit_sha will point to the previous merge commit sha, if any).
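To make the interplay between mergeable and merge_commit_sha concrete, here is a minimal sketch of how a client might use the two together. It is octokit-based; the helper name, polling loop and timings are illustrative, not the daemon's actual code.

require "octokit"

# Hypothetical helper: only trust merge_commit_sha once GitHub has finished
# computing mergeability for the current head of the pull request.
def merge_commit_for(client, repo, number, attempts: 10)
  attempts.times do
    pr = client.pull_request(repo, number) # this GET forces GitHub to (re)compute mergeability
    return nil if pr.mergeable == false    # head conflicts with the base branch; the sha may be stale
    return pr.merge_commit_sha if pr.mergeable
    sleep 1                                # mergeable is nil while GitHub is still computing it
  end
  nil
end

# e.g. merge_commit_for(Octokit::Client.new(access_token: ENV["GITHUB_TOKEN"]), "owner/repo", 42)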
You can find an explanation of all of this in the two commit messages. If you prefer, I can copy/paste both into the pull request message, but since there are only two, I thought I could leave it this way.
> You can find an explanation of all of this in the two commit messages
I missed that, sorry. I get it now.
@g-arjones it turned out that there were other issues related to merge commits. I pushed significant changes to fix them and updated the PR message to reflect what happened. Could you please have a look?
@g-arjones another round. Turns out this whole thing is a rabbit hole.
GitHub's rate limit is roughly 80 queries per minute. Our daemon does trigger it, not often, but regularly. The new commits add a mergeability cache, re-enable the HTTP cache for all endpoints except the pull request one, and refactor dependency handling so that it does not issue any client query.
The latter is not strictly needed now that caching is re-enabled, but since I implemented it I would like to keep it. I think it makes things a lot clearer (and it is a small step towards new functionality I was hoping to implement later regarding how dependent builds are handled/represented).
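For illustration, a mergeability cache along those lines can be as small as memoising the last answer per pull request and re-querying only after a TTL. The class name, TTL and key format below are hypothetical, not the daemon's actual code.

# Hypothetical sketch of a mergeability cache keyed by "owner/repo#number".
class MergeabilityCache
  Entry = Struct.new(:value, :fetched_at)

  def initialize(ttl: 60)
    @ttl = ttl          # seconds before a cached answer is considered stale
    @entries = {}
  end

  # Returns the cached mergeability when fresh; otherwise calls the block
  # (expected to hit the GitHub API) and stores its result.
  def fetch(key)
    entry = @entries[key]
    return entry.value if entry && (Time.now - entry.fetched_at) < @ttl

    value = yield
    @entries[key] = Entry.new(value, Time.now)
    value
  end
end

# e.g. cache.fetch("owner/repo#42") { client.pull_request("owner/repo", 42).mergeable }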
I'm going to have a deeper look into this later, but I'm not sure about the new handling of pull request dependencies (OverridesRetriever). Plus, with cache enabled, I don't think the change has any meaningful impact on performance.
Funny, because I'm going in a completely opposite direction.
I want a central repository of all the packages, repositories, pull requests and their dependencies, as a graph, which would enable three major features.
To me, this is also a MUCH better design. It's a lot simpler.
And I don't think it's a bad design for what you want, actually.
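Purely as an illustration of that idea, such a graph could start out as simple as the sketch below; the names are hypothetical and not taken from any existing code.

# Hypothetical sketch of a central graph mixing packages, repositories and
# pull requests as nodes, with dependency edges between them.
Node = Struct.new(:kind, :id) # kind is e.g. :package, :repository or :pull_request

class DependencyGraph
  def initialize
    @edges = Hash.new { |hash, key| hash[key] = [] }
  end

  def add_dependency(from, to)
    @edges[from] << to
  end

  def dependencies_of(node)
    @edges[node]
  end
end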
@g-arjones if you still do not want the dependency refactoring, I'll just move it to another PR.
> if you still do not want the dependency refactoring, I'll just move it to another PR
No, I had another look and it's fine. It's just that my goal was to move dependency resolution to another plugin and generate overrides during the build, which would, among other things, allow testing PRs on the buildconf and testing new packages. Having to fetch pull requests from all packages in that context doesn't make sense, in my opinion.
But, in the context of the daemon, I think the new design looks good. Sorry for all the noise.
it "handles cycles in the pull requests dependencies" do
pr0 = PullRequest.new("0", {})
pr1 = PullRequest.new("1", {})
pr2 = PullRequest.new("2", {})
pr0.dependencies = [pr1, pr2]
pr1.dependencies = [pr2]
pr2.dependencies = [pr0]
assert_equal Set[pr1, pr2], pr0.recursive_dependencies.to_set
assert_equal Set[pr0, pr2], pr1.recursive_dependencies.to_set
assert_equal Set[pr0, pr1], pr2.recursive_dependencies.to_set
end
This should pass, right?
> This should pass, right?
It does, doesn't it?
> It does, doesn't it?
No.
??? The tests do pass... both locally and here. I don't get it.
> ??? The tests do pass... both locally and here. I don't get it.
I mean "I don't get what you're saying". The failure I did have looked like a very weird GitHub Actions-specific failure in atomic_write.
> ??? The tests do pass... both locally and here. I don't get it.
I mean the exact test case that I sent, where the PullRequest URLs are different.
it "handles cycles in the pull requests dependencies" do
pr0 = PullRequest.new("0", {})
pr1 = PullRequest.new("1", {})
pr2 = PullRequest.new("2", {})
pr0.dependencies = [pr1, pr2]
pr1.dependencies = [pr2]
pr2.dependencies = [pr0]
assert_equal Set[pr1, pr2], pr0.recursive_dependencies.to_set
assert_equal Set[pr0, pr2], pr1.recursive_dependencies.to_set
assert_equal Set[pr0, pr1], pr2.recursive_dependencies.to_set
end
Ah... right... I did not notice it was different from the actual test case.
Anyway, fixed. Thanks for catching this!
Now I see the recursion. I was really puzzled about how this thing would work if you were never calling #dependencies on the dependencies... :+1:
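For reference, a cycle-tolerant traversal of that kind typically threads a visited set through the recursion. The sketch below is only consistent with the test above, not the PR's actual implementation; the constructor arguments are assumed to be a URL and an attributes hash.

require "set"

class PullRequest
  attr_accessor :dependencies

  def initialize(url, attributes)
    @url = url
    @attributes = attributes
    @dependencies = []
  end

  # Walks the dependency graph depth-first, skipping pull requests that were
  # already visited so that cycles terminate, and excludes the receiver itself.
  def recursive_dependencies(visited = Set.new)
    dependencies.each do |pr|
      next if visited.include?(pr)

      visited << pr
      pr.recursive_dependencies(visited)
    end
    visited - [self]
  end
end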
Looks good to me.
After the refactoring for GitLab support, GitHub's support for using merge branches was broken. The issue is that the GitHub API does not return the mergeable flag in its pull request list endpoint, only in the per-pull-request GET endpoint.
I started fixing this by using the merge_commit_sha field, which does exist in the listing endpoint. It turns out that this is not enough, as GitHub does not re-compute the merge commit in some cases; we have to actually hit the pull request GET endpoint to force the update.
At that point, I ended up having issues with faraday-http-cache. The GET endpoint was being cached, which meant that we were not triggering the merge commit computation and were waiting forever for it to be updated. I disabled caching for this endpoint by using a separate octokit client for it. Originally, I disabled caching for the whole daemon, but that would trigger rate limits regularly.
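A minimal sketch of that split, assuming the usual octokit + faraday-http-cache setup; the variable names and the two-client arrangement are illustrative, not the daemon's actual code.

require "octokit"
require "faraday-http-cache"

# Cached client: used for list endpoints, so repeated polling does not burn
# through the GitHub rate limit.
cached_stack = Faraday::RackBuilder.new do |builder|
  builder.use Faraday::HttpCache, serializer: Marshal, shared_cache: false
  builder.use Octokit::Response::RaiseError
  builder.adapter Faraday.default_adapter
end
cached_client = Octokit::Client.new(
  access_token: ENV["GITHUB_TOKEN"],
  middleware: cached_stack
)

# Uncached client: used only for the per-pull-request GET endpoint, so each
# call actually reaches GitHub and forces the merge commit to be recomputed.
plain_client = Octokit::Client.new(access_token: ENV["GITHUB_TOKEN"])

pull_requests = cached_client.pull_requests("owner/repo", state: "open")
fresh = plain_client.pull_request("owner/repo", pull_requests.first.number)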