go-gitea / gitea

Git with a cup of tea! Painless self-hosted all-in-one software development service, including Git hosting, code review, team collaboration, package registry and CI/CD
https://gitea.com
MIT License

[Proposal][Discuss] Gitea Cluster #13791

Open lunny opened 3 years ago

lunny commented 3 years ago

How does a Gitea deployment scale? A Gitea cluster should resolve part of that.

Currently, when running several Gitea instances that share a database and git storage, there are still some things that need to be resolved.

comment by @wxiaoguang

6543 commented 3 years ago

On process.Manager creation, start heartbeatFunc():

func heartbeatFunc() {
    for {
        // refresh this instance's heartbeat
        x.Where(guid == getGUID()).Update(&Heartbeat{beat: unixtime.Now()})

        // find instances whose heartbeat timed out and that nobody has started to recover yet
        for _, crash := range x.Select(guids with timeout && recoverGUID == "") {
            x.Where(crash.guid).Update(recoverGUID = getGUID())
            // make sure no other instance has taken the recovery step
            if !x.Exist(guid == crash.guid, recoverGUID == getGUID()) {
                continue // another instance won the race for this crashed GUID
            }
            // now reset all tasks owned by crash.guid
        }

        sleep 20sec
    }
}



The modules/tasks task will need to be refactored to have an easy interface:
task.Signal(task.CANCEL, guid, pid) <- if guid does not belong to the running instance, send it to the specific one ...
task.Run(t *task)
...
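
A rough sketch of what that interface could look like; the names below (SignalType, Task, Signal) are illustrative, not existing Gitea API:

```go
package task

import "context"

// SignalType identifies control signals that can be sent to a running task.
type SignalType int

const (
	CANCEL SignalType = iota
)

// Task is a unit of work owned by one process.Manager instance, identified by its GUID.
type Task interface {
	Run(ctx context.Context) error
}

// Signal delivers sig to the task identified by (guid, pid). If guid does not
// belong to the local instance, the signal would be forwarded to the owning
// instance over the cluster communication channel (not implemented in this sketch).
func Signal(sig SignalType, guid string, pid int64) error {
	// routing / forwarding logic would go here
	return nil
}
```
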
lafriks commented 3 years ago

Some kind of git storage layer would be needed imho (something like gitlab has)

6543 commented 3 years ago

I would focus on tasks since git data via shared storage works quite well at the moment

lunny commented 3 years ago

> I would focus on tasks since git data via shared storage works quite well at the moment

It does, but in fact it's expensive. So a distributed git data storage layer will still be a necessary feature of Gitea in the future.

Codeberg-org commented 3 years ago

> I would focus on tasks since git data via shared storage works quite well at the moment

+1

Safe distributed/concurrent gitea is surely the highest priority from a user point of view, as off-the-shelf options for distributed SQL databases and distributed file systems are readily available.

6543 commented 3 years ago

Roadmap:

  1. master election
  2. log & communication for processManager communication
  3. tasks

master election

done by the DBMS: whichever instance gets its SQL select-update query in first wins
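
For illustration, a minimal Go sketch of such DBMS-based election, assuming a hypothetical `cluster_leader` table (`guid`, `expires`); whichever instance's conditional UPDATE affects the row holds the lease until it expires:

```go
package cluster

import (
	"database/sql"
	"time"
)

// tryBecomeLeader runs a single conditional UPDATE; only one instance can win.
// It re-elects the current leader or takes over an expired lease.
func tryBecomeLeader(db *sql.DB, myGUID string, ttl time.Duration) (bool, error) {
	now := time.Now().Unix()
	res, err := db.Exec(
		`UPDATE cluster_leader
		    SET guid = ?, expires = ?
		  WHERE id = 1 AND (guid = ? OR expires < ?)`,
		myGUID, now+int64(ttl.Seconds()), myGUID, now)
	if err != nil {
		return false, err
	}
	affected, err := res.RowsAffected()
	return affected == 1, err
}
```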

~7msg types

msg communication

some sort of https://nats.io/, https://activemq.apache.org/cross-language-clients, ... over DB, Redis, ... ?
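
A minimal sketch of what such a message-channel abstraction could look like so the backend (NATS, ActiveMQ, Redis, a DB table, ...) stays swappable; the `Broker` name is illustrative, not existing Gitea API:

```go
package msgcom

// Broker abstracts the inter-instance message channel.
type Broker interface {
	// Publish sends payload to every instance subscribed to topic.
	Publish(topic string, payload []byte) error
	// Subscribe registers handler for topic and returns a function that cancels the subscription.
	Subscribe(topic string, handler func(payload []byte)) (cancel func(), err error)
}
```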

sidenotes

gary-mazz commented 3 years ago

Interesting discussion. I think this started back in 2017 with #2959.

There needs to be recognition of two cluster use cases, Load Balancing and High Availability (HA), with two types of location configuration: local and remote.

The more distant the cluster participants, the more data handling shifts from synchronous (near real-time) to delayed, creating a spectrum of data synchronization quality levels from highly consistent to eventually consistent.

The technologies picked should be able to operate at a distance as well as on local premises without reconfiguration. Secure communications via tunneling and certificate-based authentication between nodes should also be considered.

The "tricky part" is figuring out where to put the replication. Since gitea supports multiple databases, and each employs different and incompatible replication mechanisms, a formalized middle-ware layer is likely required to replicate data. The mid-layer replications also allows different db backend configuration (eg postgresql and Mysql) to provide transparent replication.

Replication will need some type of lockout strategy for check-in/check-out and zip operations during replication activity. The options are:

  1. Lock out client access until replication activity is completed.
  2. Lock out replication updates (cache operations) until client activity is complete.
  3. Fail client operations if replication updates touch files involved in client operations.
  4. Delay/pause client operations until replication activities and status checks are complete (for remote site failover and load balancing).

With remote site load balancing, it is possible to have check-in collisions causing inconsistencies. The use cases that cause these conditions are:

  1. System clocks fall out of sync between servers (both local and remote locations).
  2. Remote site load balancing loses its replication network connection(s) (two-headed monster).
  3. Normal networking and load delays cause race conditions between servers (occurs in both local and remote configurations).

I hope this helps some of your design decisions

PS: don't forget config file change pushes.

lafriks commented 3 years ago

We would probably also need some kind of git repository access layer so that repositories could be distributed across the cluster with local storage.

imacks commented 3 years ago

Just want to contribute my own experience using Gitea for the last couple of years.

Our first attempt was to run dockerized Gitea in kube, with storage back end provided by NFS. We rely on kube healthcheck to restart an unresponsive Gitea instance, which can run on any tainted host managed by kube. This solves the reliability issue somewhat, though there will be a period of unavailability while the container restarts.

Our v2 setup swaps out NFS for ceph CSI in kube. R/W performance improves dramatically. We also use S3 compat layer in ceph to store LFS data.

My most pressing desire for v3 is HA. We can be less ambitious and work on single local cluster first. There can be a dedicated pod for running cron tasks, so Gitea can concentrate on doing git and webserver stuff. We can also use s3 for storage exclusively for its sync capabilities.

viceice commented 1 year ago

> Just want to contribute my own experience using Gitea for the last couple of years. […]

Do you have any hints for moving from NFS to Ceph CSI? I'd like to test out the performance. I already use S3 (MinIO) for all other Gitea storage.

piamo commented 1 year ago

> Just want to contribute my own experience using Gitea for the last couple of years. […]

Will there be concurrency problems when using Ceph CSI, since there is no file lock protection?

imacks commented 1 year ago

@piamo no. Only a single instance of gitea runs at any one time, so no locking is necessary. The appropriate ceph volume is auto mounted on whichever host the gitea container runs on. So yeah my setup is not HA, just resilient to host failure.

piamo commented 1 year ago

> @piamo no. Only a single instance of gitea runs at any one time, so no locking is necessary. The appropriate ceph volume is auto mounted on whichever host the gitea container runs on. So yeah my setup is not HA, just resilient to host failure.

@imacks But if two or more concurrent requests try to change the same repo, a lock is still necessary.

harryzcy commented 1 year ago

I think one immediate step for Gitea would be to allow limiting instances to read-only operations and disabling cron, to somewhat achieve high availability. Many parts can already be deployed in an HA way:

What we need right now is to allow disabling cron jobs; then Gitea can be deployed in a cluster with ReadWriteMany storage for git objects. To support ReadWriteOnce storage, the files need to be replicated by Gitea instead of by the storage provider. Then Gitea must have a read-only mode, and those replicas need to pull changes from the master instance. In this case, read-only operations should be identifiable so that a load balancer can route traffic properly.
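
As an illustration of the read-only replica idea (this middleware and its flag are assumptions, not existing Gitea code), a replica could reject state-changing requests at the HTTP layer so a load balancer can route GET/HEAD traffic to it safely:

```go
package replica

import "net/http"

// ReadOnlyMiddleware rejects write requests on a read-only replica,
// so only GET/HEAD traffic is served here and writes go to the primary.
func ReadOnlyMiddleware(readOnly bool, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if readOnly && r.Method != http.MethodGet && r.Method != http.MethodHead {
			http.Error(w, "this replica is read-only; send writes to the primary", http.StatusServiceUnavailable)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```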

After we have done the above step, we could try to find some leader election protocol so that a replica can be promoted to master if the master is down. This would be the second step.

Only after we have done that can we start to split cron jobs across multiple instances. I think this is more complicated than the first two steps above.

pat-s commented 1 year ago

Just FYI, we have an active WIP for a Gitea-HA setup in the helm-chart going on right now: https://gitea.com/gitea/helm-chart/pulls/437

It is based on Postgres-HA, an RWX file system and redis-cluster. I think that using an RWX file system solves part of the leader-election logic with respect to tasks and communication.

The only thing that is still a true issue is the duplicated cron executions. The biggest problem would be both instances doing the same thing at the exact same moment and crashing as a result. I haven't tested it in practice yet, though.

Maybe implementing a random offset/sleep could help in the first place, to at least ensure proper functionality? Even if all jobs were still executed redundantly, it would at least allow us to make some initial progress.
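
A minimal sketch of the random-offset idea (names are illustrative); it only spreads executions out in time and does not by itself guarantee single execution:

```go
package cron

import (
	"math/rand"
	"time"
)

// RunWithJitter sleeps a random duration up to maxJitter before running job,
// so instances started at the same moment do not fire the same cron job simultaneously.
func RunWithJitter(maxJitter time.Duration, job func()) {
	time.Sleep(time.Duration(rand.Int63n(int64(maxJitter))))
	job()
}
```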

lunny commented 1 year ago

In fact there are still some locks besides cron that need to be refactored, see #22176.

wxiaoguang commented 1 year ago

pat-s commented 1 year ago

Idk what the "docker's duplicate insert bug" is here and all the other points are also somewhat unclear in terms of severity. I think we need to check and find out in the end.

And to test all of them, we need a (functional) HA cluster first to test on.

I can provide an instance for testing if needed. Are you interested @wxiaoguang @lunny? I could also give you access to the k8s namespace so you can explore the pods yourself.

On the other hand, I wonder if this could also be set up and tested using the project funds? A Terraform setup which destroys everything again after testing is not a big deal. And the helm chart logic for an HA setup is ready.

lunny commented 1 year ago

I think most problems here are obvious from the code level. Maybe we will find more when we start testing. Thank you for your idea about the testing infrastructure; when we need it, we can discuss it. But for now there are so many problems that maybe we should begin by starting some discussions or sending some PRs.

wxiaoguang commented 1 year ago

> Idk what the "docker's duplicate insert bug" is here and all the other points are also somewhat unclear in terms of severity. I think we need to check and find out in the end.

Context:

> I can provide an instance for testing if needed. Are you interested? I could also give you access to the k8s namespace so you can explore the pods yourself.

I am interested, however, I have a quite long TODO list and many new PRs:

* https://github.com/go-gitea/gitea/issues/created_by/wxiaoguang
* https://github.com/go-gitea/gitea/pulls?q=is%3Apr+author%3Awxiaoguang

So I don't think I have the bandwidth at the moment.

prskr commented 1 year ago

I didn't check everything in the code so far but I think something like https://github.com/hibiken/asynq could help with the cron issues?

For the shared repo access I was actually wondering why not try to abstract that, e.g. with an S3-compatible storage, and use something like Redlock to synchronize access to repositories. I'd even assume concurrent reads should be fine? It's only about consistency when writing to a repository (presumably)?
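
A sketch of per-repository locking with Redlock via the go-redsync library, as suggested above; the lock key naming and usage here are assumptions for illustration:

```go
package main

import (
	"fmt"

	"github.com/go-redsync/redsync/v4"
	"github.com/go-redsync/redsync/v4/redis/goredis/v9"
	goredislib "github.com/redis/go-redis/v9"
)

// withRepoLock takes a distributed lock for one repository before mutating it
// on shared storage, so concurrent writers on different nodes are serialized.
func withRepoLock(rs *redsync.Redsync, ownerRepo string, fn func() error) error {
	mutex := rs.NewMutex("gitea:repo-lock:" + ownerRepo)
	if err := mutex.Lock(); err != nil {
		return fmt.Errorf("could not acquire lock for %s: %w", ownerRepo, err)
	}
	defer mutex.Unlock()
	return fn()
}

func main() {
	client := goredislib.NewClient(&goredislib.Options{Addr: "localhost:6379"})
	rs := redsync.New(goredis.NewPool(client))
	_ = withRepoLock(rs, "owner/repo", func() error {
		// write to the repository on shared storage here
		return nil
	})
}
```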

anbraten commented 1 month ago

In #28958 I've started a distributed implementation for the internal notifier, whereby events such as "issue was deleted" would be broadcast across all nodes.
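
Not the implementation from #28958, just a rough illustration of the idea using Redis pub/sub: a node serializes an internal notifier event and broadcasts it so every other node can react (the channel name is an assumption):

```go
package notify

import (
	"context"
	"log"

	"github.com/redis/go-redis/v9"
)

// Broadcast publishes an internal event (e.g. "issue_deleted:123") to all nodes.
func Broadcast(ctx context.Context, rdb *redis.Client, event string) error {
	return rdb.Publish(ctx, "gitea:notify", event).Err()
}

// Listen receives events broadcast by other nodes and hands them to the local notifier.
func Listen(ctx context.Context, rdb *redis.Client) {
	sub := rdb.Subscribe(ctx, "gitea:notify")
	defer sub.Close()
	for msg := range sub.Channel() {
		log.Printf("received cluster event: %s", msg.Payload)
	}
}
```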