Lockgate

Lockgate is a cross-platform locking library for Go. It supports distributed locks backed by Kubernetes or an HTTP lock server, as well as conventional OS file locks.

This library is used in the werf CI/CD tool to synchronize multiple werf build and deploy processes running on a single host or on multiple hosts, using Kubernetes or local file locks.

If you have an Open Source project using lockgate, feel free to list it here via PR.

Installation

go get -u github.com/werf/lockgate

Usage

Select a locker

The main interface of the library that the user interacts with is lockgate.Locker. There are multiple locker implementations available:

File locker

This is a simple locker based on OS filesystem locks. It can be used by multiple processes on a single host filesystem.

Create a file locker as follows:

import "github.com/werf/lockgate"

...

locker, err := lockgate.NewFileLocker("/var/lock/myapp")

All cooperating processes should use the same locks directory.
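For example, here is a minimal sketch of acquiring and releasing an exclusive lock with the file locker (the lock name myresource and the 30-second timeout are arbitrary; the Acquire and Release calls work the same way for every locker implementation, see the usage example below):

package main

import (
    "fmt"
    "os"
    "time"

    "github.com/werf/lockgate"
)

func main() {
    locker, err := lockgate.NewFileLocker("/var/lock/myapp")
    if err != nil {
        fmt.Fprintf(os.Stderr, "ERROR: failed to create file locker: %s\n", err)
        os.Exit(1)
    }

    // Block for up to 30 seconds while trying to acquire an exclusive lock.
    _, lock, err := locker.Acquire("myresource", lockgate.AcquireOptions{Shared: false, Timeout: 30 * time.Second})
    if err != nil {
        fmt.Fprintf(os.Stderr, "ERROR: failed to lock myresource: %s\n", err)
        os.Exit(1)
    }

    // ... do work under the lock ...

    if err := locker.Release(lock); err != nil {
        fmt.Fprintf(os.Stderr, "ERROR: failed to unlock myresource: %s\n", err)
        os.Exit(1)
    }
}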

Kubernetes locker

This locker uses the specified Kubernetes resource as storage for lock data. All processes using this locker must have access to the same Kubernetes cluster.

This locker allows distributed locking over multiple hosts.

Create a Kubernetes locker as follows:

import (
    "github.com/werf/lockgate"
    "github.com/werf/lockgate/pkg/distributed_locker"

    "k8s.io/apimachinery/pkg/runtime/schema"
)

...

// Initialize kubeDynamicClient from https://github.com/kubernetes/client-go.
locker, err := distributed_locker.NewKubernetesLocker(
    kubeDynamicClient, schema.GroupVersionResource{
        Group:    "",
        Version:  "v1",
        Resource: "configmaps",
    }, "mycm", "myns",
)

All cooperating processes should use the same Kubernetes parameters. In this example, lock data will be stored in the mycm ConfigMap in the myns namespace.
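The kubeDynamicClient above comes from client-go's dynamic package. Here is a minimal sketch of constructing it, assuming the default kubeconfig location (when running inside a cluster, rest.InClusterConfig() could be used instead):

import (
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/tools/clientcmd"
)

...

// Load the kubeconfig from its default location (~/.kube/config).
config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
if err != nil {
    // handle error
}

// This dynamic client is passed to NewKubernetesLocker as kubeDynamicClient.
kubeDynamicClient, err := dynamic.NewForConfig(config)
if err != nil {
    // handle error
}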

HTTP locker

This locker uses the lockgate HTTP lock server to manage locks and allows distributed locking over multiple hosts.

Create an HTTP locker as follows:

import (
    "github.com/werf/lockgate"
    "github.com/werf/lockgate/pkg/distributed_locker"
)

...

locker := distributed_locker.NewHttpLocker("http://localhost:55589")

All cooperating processes should use the same URL endpoint of the lockgate HTTP lock server. In this example, a lockgate HTTP lock server should be available at the localhost:55589 address. See below for how to run such a server.

To ensure fairness for long-held locks, clients can pass a unique "Acquirer Id" along with their request to acquire the lock. Only the client that has been waiting the longest (and has continued to renew its request for the lock) is given the lock when it is released or expires.

An example use case is an external lock used to serialize multi-step deployments, as illustrated by the deploylock application used with Salesforce orgs.

backend := distributed_locker.NewHttpBackend(serverUrl)
l := distributed_locker.NewDistributedLocker(backend)

acquired, lockHandle, err := l.Acquire(lockName, lockgate.AcquireOptions{
   AcquirerId: uuid.New().String(),
   OnWaitFunc: func(lockName string, doWait func() error) error {
      done := make(chan struct{})
      ticker := time.NewTicker(3 * time.Second)
      defer ticker.Stop()
      go func() {
         for {
            fmt.Fprintf(os.Stderr, "WAITING FOR %s\n", lockName)
            select {
            case <-done:
               return
            case <-ticker.C:
            }
         }
      }()
      defer close(done)
      if err := doWait(); err != nil {
         fmt.Fprintf(os.Stderr, "WAITING FOR %s FAILED: %s\n", lockName, err)
         return err
      } else {
         fmt.Fprintf(os.Stderr, "WAITING FOR %s DONE\n", lockName)
      }
      return nil
   },
})

HoldLease can be used to renew a previously acquired lease. This is useful when you want to block while waiting for a lease and then renew the lease in the background.

backend := distributed_locker.NewHttpBackend(serverUrl)
l := distributed_locker.NewDistributedLocker(backend)
l.HoldLease(lockName, uuid)
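For instance, a rough sketch of renewing the lease in the background while the work runs. This assumes HoldLease performs a single renewal per call and may simply be re-invoked periodically; the 10-second interval, the done channel, and the acquirerId variable (the same value passed as AcquirerId on acquire) are illustrative:

done := make(chan struct{})

go func() {
    // Assumption: HoldLease renews the lease once per call, so it is re-invoked
    // periodically until the work signals completion via the done channel.
    ticker := time.NewTicker(10 * time.Second)
    defer ticker.Stop()
    for {
        select {
        case <-done:
            return
        case <-ticker.C:
            l.HoldLease(lockName, acquirerId)
        }
    }
}()

// ... do the long-running work while holding the lease ...

close(done)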

Lockgate HTTP lock server

The lockgate HTTP lock server can use in-memory storage or Kubernetes storage.

Run a lockgate HTTP lock server as follows:

import (
    "github.com/werf/lockgate"
    "github.com/werf/lockgate/pkg/distributed_locker"
    "github.com/werf/lockgate/pkg/distributed_locker/optimistic_locking_store"
)

...
store := optimistic_locking_store.NewInMemoryStore()
// OR
// store := optimistic_locking_store.NewKubernetesResourceAnnotationsStore(
//  kube.DynamicClient, schema.GroupVersionResource{
//      Group:    "",
//      Version:  "v1",
//      Resource: "configmaps",
//  }, "mycm", "myns",
//)
backend := distributed_locker.NewOptimisticLockingStorageBasedBackend(store)
distributed_locker.RunHttpBackendServer("0.0.0.0", "55589", backend)

Locker usage example

In the following example, a locker instance is created using one of the methods documented above (the user should select the required locker implementation). The rest of the sample uses the generic lockgate.Locker interface to acquire and release locks.

import (
    "fmt"
    "os"
    "time"

    "github.com/werf/lockgate"
    "github.com/werf/lockgate/pkg/distributed_locker"

    "k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
    // Create a Kubernetes based locker in ns/mynamespace cm/myconfigmap.
    // Initialize kubeDynamicClient using https://github.com/kubernetes/client-go.
    locker := distributed_locker.NewKubernetesLocker(
        kubeDynamicClient, schema.GroupVersionResource{
            Group:    "",
            Version:  "v1",
            Resource: "configmaps",
        }, "myconfigmap", "mynamespace",
    )

    // OR create a file based locker backed by the /var/locks/mylocks_service_dir directory.
    locker, err := lockgate.NewFileLocker("/var/locks/mylocks_service_dir")
    if err != nil {
        fmt.Fprintf(os.Stderr, "ERROR: failed to create file locker: %s\n", err)
        os.Exit(1)
    }

    // Case 1: simple blocking lock

    acquired, lock, err := locker.Acquire("myresource", lockgate.AcquireOptions{Shared: false, Timeout: 30 * time.Second})
    if err != nil {
        fmt.Fprintf(os.Stderr, "ERROR: failed to lock myresource: %s\n", err)
        os.Exit(1)
    }

    // ...

    if err := locker.Release(lock); err != nil {
        fmt.Fprintf(os.Stderr, "ERROR: failed to unlock myresource: %s\n", err)
        os.Exit(1)
    }

    // Case 2: WithAcquire wrapper

    if err := lockgate.WithAcquire(locker, "myresource", lockgate.AcquireOptions{Shared: false, Timeout: 30 * time.Second}, func(acquired bool) error {
        // ...
        return nil
    }); err != nil {
        fmt.Fprintf(os.Stderr, "ERROR: failed to perform an operation with locker myresource: %s\n", err)
        os.Exit(1)
    }

    // Case 3: non-blocking

    acquired, lock, err = locker.Acquire("myresource", lockgate.AcquireOptions{Shared: false, NonBlocking: true})
    if err != nil {
        fmt.Fprintf(os.Stderr, "ERROR: failed to lock myresource: %s\n", err)
        os.Exit(1)
    }

    if acquired {
        // ...

        if err := locker.Release(lock); err != nil {
            fmt.Fprintf(os.Stderr, "ERROR: failed to unlock myresource: %s\n", err)
            os.Exit(1)
        }
    } else {
        // ...
    }
}

Community

Please feel free to reach us via the project's Discussions and werf's Telegram group (there is another one in Russian as well).

You're also welcome to follow @werf_io to stay informed about all important news, articles, etc.