go-gitea / gitea

Git with a cup of tea! Painless self-hosted all-in-one software development service, including Git hosting, code review, team collaboration, package registry and CI/CD

Tag caching to database should be done in async persistent queue #4695

Open ts-toh opened 6 years ago

ts-toh commented 6 years ago

Description

I apologize if this is documented somewhere, I wasn't able to find it.

I really don't need every single tag to be available as a tarball, and some tags aren't even there to indicate releases. By default, Gitea cannot process a repo that has too many tags, and even when I raise the resource limits, a git push --tags hangs and it can take a very long time to generate releases from the tags. For that reason, I would love an easy way to disable processing of tags into releases for a repo.

I have a work repo that has over 1700 tags. I was not able to push the tags to my Gitea repo because Gitea would run out of resources trying to process them. I was eventually able to get it to work by uncommenting LimitMEMLOCK and LimitNOFILE in /etc/systemd/system/gitea.service. However, after I pushed the tags, it took about 20 minutes to generate releases for all of them.
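
For reference, the same limits can also be raised with a systemd drop-in instead of editing the unit file directly; a minimal sketch, assuming the stock contrib unit (the values mirror its commented-out suggestions, so adjust to your environment):

# /etc/systemd/system/gitea.service.d/override.conf (created with: systemctl edit gitea)
[Service]
LimitMEMLOCK=infinity
LimitNOFILE=65535

then restart the service with systemctl restart gitea.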

This is what the log showed, and it appeared to be doing this for each tag:

Aug 13 12:45:04 dev-server gitea[21281]: [Macaron] 2018-08-12 12:45:04: Started POST /api/internal/push/update for 127.0.0.1
Aug 13 12:45:05 dev-server gitea[21281]: [Macaron] 2018-08-12 12:45:05: Completed POST /api/internal/push/update 202 Accepted in 794.444925ms

I have several more repos that have even more tags, so I just copied the bare repos into /var/lib/gitea/gitea-repositories/myaccount/ and then copied the Gitea hooks in.

I tried overriding the receive hook through the web interface, but maybe I was not doing it properly.

I saw that there was a post-receive hook in the Gitea bare repos that is invoked with:

"/usr/local/bin/gitea" hook --config='/etc/gitea/app.ini' post-receive

I tried digging in the source to find where the post-receive hooks are invoked, but I lack the time and knowledge to do so.

I think this may be a common enough issue that it is worth at least exposing a setting for it in the app.ini file.

Again, I'm sorry if there is an easy fix for this, but I'm sure other users must be running into the same issue, and I almost gave up on using Gitea. It was only because I saw in the log file that Gitea had hit resource limits that I thought to increase them with systemd, and even with that it took a very long time to process each tag.

Perhaps, if there is no way to disable releases, tag processing could be handled asynchronously so it does not leave a git push hanging, and Gitea could slowly generate the releases in the background? I have also noticed that GitHub does not seem to have this issue.

If there is any easy fix, like creating or disabling a hook, I think it would be good to just have a record of what to do.

Thank you

...


lafriks commented 6 years ago

Yes, this would probably be better done async, but we would need to do it in some kind of persistent queue.
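
As a rough sketch of that idea in Go (hypothetical, not Gitea's actual code): the push hook would only enqueue a lightweight task per tag, and a background worker would drain the queue and create the releases, so git push --tags returns as soon as the tasks are enqueued. In a real implementation the queue would be backed by a database table so pending work survives a restart; the in-memory queue below only stands in for that.

package main

import (
    "fmt"
    "sync"
    "time"
)

// TagSyncTask is the unit of work the queue persists. A real implementation
// would store these rows in the database; a mutex-guarded slice stands in here.
type TagSyncTask struct {
    RepoID  int64
    TagName string
}

type PersistentQueue struct {
    mu    sync.Mutex
    tasks []TagSyncTask
}

// Push enqueues a task. In this scheme it is the only work the post-receive
// hook does per tag, so the push stays fast regardless of tag count.
func (q *PersistentQueue) Push(t TagSyncTask) {
    q.mu.Lock()
    defer q.mu.Unlock()
    q.tasks = append(q.tasks, t)
}

// Pop removes and returns the oldest task, or false if the queue is empty.
func (q *PersistentQueue) Pop() (TagSyncTask, bool) {
    q.mu.Lock()
    defer q.mu.Unlock()
    if len(q.tasks) == 0 {
        return TagSyncTask{}, false
    }
    t := q.tasks[0]
    q.tasks = q.tasks[1:]
    return t, true
}

// worker drains the queue in the background, creating one release per tag.
func worker(q *PersistentQueue, createRelease func(TagSyncTask) error, stop <-chan struct{}) {
    for {
        select {
        case <-stop:
            return
        default:
        }
        task, ok := q.Pop()
        if !ok {
            time.Sleep(time.Second) // queue is empty; wait for more work
            continue
        }
        if err := createRelease(task); err != nil {
            fmt.Printf("failed to sync tag %s: %v\n", task.TagName, err)
        }
    }
}

func main() {
    q := &PersistentQueue{}
    stop := make(chan struct{})

    // Stand-in for the expensive release creation Gitea does per tag.
    createRelease := func(t TagSyncTask) error {
        fmt.Printf("creating release for repo %d, tag %s\n", t.RepoID, t.TagName)
        return nil
    }
    go worker(q, createRelease, stop)

    // Simulate a push of many tags: the "hook" returns as soon as enqueuing is done.
    for i := 0; i < 1700; i++ {
        q.Push(TagSyncTask{RepoID: 1, TagName: fmt.Sprintf("v0.0.%d", i)})
    }

    time.Sleep(2 * time.Second) // give the worker a moment to run for the demo
    close(stop)
}

The sketch glosses over retries and ordering, but the key point is that the expensive release creation is decoupled from the push, which is essentially what a persistent queue would give Gitea here.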

ts-toh commented 6 years ago

@lafriks is there a way that generating releases from tags can be completely disabled for certain repos?

As of right now, repos that have too many tags cannot be pushed at all without making changes to the service file or sysctl settings.

I was able to import my larger repos by manually copying them in, and I was also able to generate the releases from the admin menu.

Could someone point me in the right direction as to where Gitea's internal post-receive hook that processes tags is implemented, and how I might disable it?

A user with too many tags in their repo likely would not even be able to get to the point where they could work around this issue.

Should I create a separate issue for this?

lafriks commented 6 years ago

No need for a separate issue; hooks cannot be disabled, as that would break Gitea functionality.

stale[bot] commented 5 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs during the next 2 weeks. Thank you for your contributions.

stale[bot] commented 5 years ago

This issue has been automatically closed because of inactivity. You can re-open it if needed.