Go bindings to the OpenLDAP Lightning Memory-Mapped Database (LMDB).
Functionality is logically divided into several packages. Applications will usually need to import lmdb but may import other packages on an as-needed basis.
Packages in the exp/ directory are not stable and may change without warning. That said, they are generally usable if application dependencies are managed and pinned by tag/commit. Developers concerned with package stability should consult the documentation.
import "github.com/bmatsuo/lmdb-go/lmdb"
Core bindings allowing low-level access to LMDB.
import "github.com/bmatsuo/lmdb-go/lmdbscan"
A utility package for scanning database ranges. The API is inspired by bufio.Scanner and the Python cursor implementation (a short usage sketch follows this package list).
import "github.com/bmatsuo/lmdb-go/exp/lmdbpool"
A utility package which facilitates reuse of lmdb.Txn objects using a sync.Pool. Naively storing lmdb.Txn objects in a sync.Pool can be troublesome, so the lmdbpool.TxnPool type has been defined as a complete pooling solution and as a reference for applications attempting to write their own pooling implementation.
The lmdbpool package is relatively new, but it has a lot of potential utility. Once the lmdbpool API has been ironed out and the implementation hardened through use by real applications, it can be merged directly into the lmdb package for more transparent use. Please test this package and provide feedback to speed this process up.
import "github.com/bmatsuo/lmdb-go/exp/lmdbsync"
An experimental utility package that provides synchronization necessary to change an environment's map size after initialization. The package provides error handlers to automatically manage database size and retry failed transactions.
The lmdbsync package is usable, but the provided Handler implementations are unstable and may change in incompatible ways without notice. The use cases of dynamic map sizes and multiprocessing are niche, and the package requires much more development, driven by practical feedback, before the Handler API and the provided implementations can be considered stable.
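To illustrate the lmdbscan API mentioned above, here is a rough sketch of a full-database scan; env and dbi are assumed to have been opened elsewhere, and error handling is abbreviated.
err := env.View(func(txn *lmdb.Txn) error {
	scanner := lmdbscan.New(txn, dbi)
	defer scanner.Close()
	for scanner.Scan() {
		// Key and Val are only valid until the next call to Scan.
		log.Printf("k=%q v=%q", scanner.Key(), scanner.Val())
	}
	return scanner.Err()
})
if err != nil {
	log.Fatal(err)
}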
API inspired by BoltDB with automatic commit/rollback of transactions. The goal of lmdb-go is to provide idiomatic database interactions without compromising the flexibility of the C API.
NOTE: While the lmdb package tries hard to make LMDB as easy to use as possible, there are compromises, gotchas, and caveats that application developers must be aware of when relying on LMDB to store their data. All users are encouraged to fully read the documentation so they are aware of these caveats.
Where the lmdb package and its implementation decisions do not meet the needs of application developers in terms of safety or operational use, the lmdbsync package has been designed to wrap lmdb and safely fill in additional functionality. Consult the documentation for more information about the lmdbsync package.
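To make the BoltDB-style transaction management concrete, here is a minimal, self-contained sketch: the environment is configured and opened, and a write is applied inside a managed transaction that commits on a nil return and aborts on error. The path, map size, and database name are illustrative.
package main

import (
	"log"

	"github.com/bmatsuo/lmdb-go/lmdb"
)

func main() {
	// Create and configure the environment before opening it on disk.
	env, err := lmdb.NewEnv()
	if err != nil {
		log.Fatal(err)
	}
	defer env.Close()
	if err := env.SetMaxDBs(1); err != nil {
		log.Fatal(err)
	}
	if err := env.SetMapSize(1 << 30); err != nil { // illustrative 1GiB map
		log.Fatal(err)
	}
	// The path must be an existing directory.
	if err := env.Open("/tmp/exampledb", 0, 0644); err != nil {
		log.Fatal(err)
	}

	// Update wraps the function in a write transaction that is committed
	// when nil is returned and aborted when an error is returned.
	err = env.Update(func(txn *lmdb.Txn) error {
		dbi, err := txn.OpenDBI("exampledb", lmdb.Create)
		if err != nil {
			return err
		}
		return txn.Put(dbi, []byte("key"), []byte("value"), 0)
	})
	if err != nil {
		log.Fatal(err)
	}
}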
The lmdb-go project aims for complete coverage of the LMDB C API (within reason). Some notable features and optimizations that are supported:
Idiomatic subtransactions ("sub-updates") that allow the batching of updates.
Batch IO on databases utilizing the MDB_DUPSORT and MDB_DUPFIXED flags.
Reserved writes that can save in-memory copies when converting/buffering into []byte (see the sketch below).
For tracking purposes a list of unsupported features is kept in an issue.
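As a sketch of the first and last features above, the update below runs a sub-update that can fail without aborting the parent transaction, and uses a reserved write to fill a value in place; env is assumed to have been opened already, and the key names are illustrative.
err := env.Update(func(txn *lmdb.Txn) error {
	dbi, err := txn.OpenDBI("exampledb", lmdb.Create)
	if err != nil {
		return err
	}
	// Sub runs the function in a subtransaction.  If it returns an error
	// only the sub-update's writes are discarded; the parent transaction
	// remains usable.
	err = txn.Sub(func(txn *lmdb.Txn) error {
		return txn.Put(dbi, []byte("batched-key"), []byte("batched-value"), 0)
	})
	if err != nil {
		return err
	}
	// PutReserve returns a writable buffer of the requested size inside
	// the database, avoiding an intermediate []byte allocation and copy.
	buf, err := txn.PutReserve(dbi, []byte("reserved-key"), 8, 0)
	if err != nil {
		return err
	}
	copy(buf, "8-bytes!")
	return nil
})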
Applications with high performance requirements can opt-in to fast, zero-copy reads at the cost of runtime safety. Zero-copy behavior is specified at the transaction level to reduce instrumentation overhead.
err := env.View(func(txn *lmdb.Txn) error {
	// RawRead enables zero-copy behavior with some serious caveats.
	// Read the documentation carefully before using.
	txn.RawRead = true

	val, err := txn.Get(dbi, []byte("largevalue"))
	// ...
})
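When RawRead is enabled, returned slices point directly into the read-only memory map. They must not be modified and must not be retained after the transaction terminates; any data that needs to outlive the transaction must be copied first.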
Comprehensive documentation and examples are provided to demonstrate safe usage of lmdb. In addition to godoc documentation, implementations of the standard LMDB commands (mdb_stat, etc.) can be found in the cmd/ directory and some simple experimental commands can be found in the exp/cmd/ directory. Aside from providing minor utility, these programs are provided as examples of lmdb in practice.
BoltDB is a quality database with a design similar to LMDB. Both store key-value data in a file and provide ACID transactions, so the question of which database to use comes up often.
Advantages of BoltDB:
Nested databases allow for hierarchical data organization.
Far more databases can be accessed concurrently.
On operating systems that do not support sparse files, BoltDB does not use up excessive space due to a large pre-allocation of file space. The exp/lmdbsync package is intended to resolve this problem with LMDB but it is not ready.
As a pure Go package, bolt can be easily cross-compiled using the go toolchain and the GOOS/GOARCH variables.
Its simpler design and implementation in pure Go mean it is free of many caveats and gotchas which are present when using the lmdb package. For more information about caveats with the lmdb package, consult its documentation.
Advantages of LMDB:
Keys can contain multiple values using the DupSort flag (see the sketch following this list).
Updates can have sub-updates for atomic batching of changes.
Databases typically remain open for the application lifetime, which limits the number of concurrently accessible databases but minimizes the overhead of database accesses and typically produces cleaner code than an equivalent BoltDB implementation.
Significantly faster than BoltDB. The raw speed of LMDB easily surpasses BoltDB. Additionally, LMDB provides optimizations ranging from safe, feature-specific optimizations to generally unsafe, extremely situational ones. Applications are free to enable any optimizations that fit their data, access, and reliability models.
LMDB allows multiple applications to access a database simultaneously. Updates from concurrent processes are synchronized using a database lock file.
As a C library, applications in any language can interact with LMDB databases. Mission-critical Go applications can use a database while Python scripts perform analysis on the side.
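As referenced in the list above, here is a rough sketch of the DupSort flag in use: several values are stored under one key and a cursor then iterates over the duplicates. env is assumed to have been opened with a sufficient SetMaxDBs value, and the key and values are illustrative.
err := env.Update(func(txn *lmdb.Txn) error {
	dbi, err := txn.OpenDBI("dupdb", lmdb.Create|lmdb.DupSort)
	if err != nil {
		return err
	}
	for _, v := range []string{"red", "green", "blue"} {
		if err := txn.Put(dbi, []byte("colors"), []byte(v), 0); err != nil {
			return err
		}
	}
	// Iterate every value stored under the "colors" key.
	cur, err := txn.OpenCursor(dbi)
	if err != nil {
		return err
	}
	defer cur.Close()
	for k, v, err := cur.Get([]byte("colors"), nil, lmdb.SetKey); ; k, v, err = cur.Get(nil, nil, lmdb.NextDup) {
		if lmdb.IsNotFound(err) {
			return nil
		}
		if err != nil {
			return err
		}
		log.Printf("%s=%s", k, v)
	}
})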
There is no dependency on shared libraries, so most users can simply install using go get.
go get github.com/bmatsuo/lmdb-go/lmdb
On FreeBSD 10, you must explicitly set CC (otherwise it will fail with a cryptic error), for example:
CC=clang go test -v ./...
Building commands and running tests can be done with go or with make:
make bin
make test
make check
make all
On Linux, you can specify the pwritev build tag to reduce the number of syscalls required when committing a transaction. In your own package you can then do
go build -tags pwritev .
to enable the optimisation.
The go doc documentation available on godoc.org is the primary source of developer documentation for lmdb-go. It provides an overview of the API with a lot of usage examples. Where necessary, the documentation points out differences between the semantics of methods and their C counterparts.
The LMDB homepage and mailing list (archives) are the official source of documentation regarding low-level LMDB operation and internals.
Along with an API reference, LMDB provides a high-level summary of the library. While lmdb-go abstracts many of the thread and transaction details by default, the rest of the guide is still useful to compare with go doc.
The lmdb-go project makes regular releases with IDs X.Y.Z. All packages outside of the exp/ directory are considered stable and adhere to the guidelines of semantic versioning.
Experimental packages (those packages in exp/) are not required to adhere to semantic versioning. However, packages specifically declared to merely be "unstable" can be relied on more for long-term use with less concern.
The API of an unstable package may change in subtle ways between minor release versions, but deprecations will be indicated at least one release in advance and all functionality will remain available through some means.
Except where otherwise noted files in the lmdb-go project are licensed under the BSD 3-clause open source license.
The LMDB C source is licensed under the OpenLDAP Public License.
An experimental backend for github.com/hashicorp/raft forked from github.com/hashicorp/raft-mdb.
Experimental backend quad-store for github.com/google/cayley based on the BoltDB implementation.