A native Rust library for Delta Lake, with bindings to Python
Python docs · Rust docs · Report a bug · Request a feature · Roadmap
Delta Lake is an open-source storage format that runs on top of existing data lakes. Delta Lake is compatible with processing engines like Apache Spark and provides benefits such as ACID transaction guarantees, schema enforcement, and scalable data handling.
The Delta Lake project aims to unlock the power of Delta Lake for as many users and projects as possible by providing native low-level APIs aimed at developers and integrators, as well as a high-level operations API that lets you query, inspect, and operate your Delta Lake tables with ease.
| Source | Downloads | Installation Command | Docs |
|---|---|---|---|
| PyPi | | `pip install deltalake` | Docs |
| Crates.io | | `cargo add deltalake` | Docs |
The `deltalake` library aims to adopt patterns from other libraries in data processing, so getting started should look familiar.
```python
from deltalake import DeltaTable, write_deltalake
import pandas as pd

# write some data into a delta table
df = pd.DataFrame({"id": [1, 2], "value": ["foo", "boo"]})
write_deltalake("./data/delta", df)

# Load data from the delta table
dt = DeltaTable("./data/delta")
df2 = dt.to_pandas()

assert df.equals(df2)
```
The same table can also be loaded using the core Rust crate:
```rust
use deltalake::{open_table, DeltaTableError};

#[tokio::main]
async fn main() -> Result<(), DeltaTableError> {
    // open the table written in python
    let table = open_table("./data/delta").await?;
    // show all active files in the table
    let files: Vec<_> = table.get_file_uris()?.collect();
    println!("{:?}", files);
    Ok(())
}
```
You can also try out Delta Lake in Docker using the images published on DockerHub (Docker Repo).
We encourage you to reach out, and are committed to providing a welcoming community.
Libraries and frameworks that interoperate with delta-rs, in alphabetical order.
The following section outlines some core features like supported storage backends and operations that can be performed against tables. The state of implementation of features outlined in the Delta protocol is also tracked.
| Storage | Rust | Python | Comment |
|---|---|---|---|
| Local | | | |
| S3 - AWS | | | requires a lock for concurrent writes |
| S3 - MinIO | | | no lock required when using `AmazonS3ConfigKey::ConditionalPut` with `storage_options = {"conditional_put": "etag"}` |
| S3 - R2 | | | no lock required when using `AmazonS3ConfigKey::ConditionalPut` with `storage_options = {"conditional_put": "etag"}` |
| Azure Blob | | | |
| Azure ADLS Gen2 | | | |
| Microsoft OneLake | | | |
| Google Cloud Storage | | | |
| HDFS | | | |
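As a concrete sketch of the MinIO/R2 rows above, the `"conditional_put": "etag"` setting uses ETag-based conditional writes so no external lock service is needed. The bucket path and region below are placeholders, so the options dict is shown rather than executed against a real bucket:

```python
# Placeholder region and bucket (assumptions for illustration only)
storage_options = {
    "AWS_REGION": "us-east-1",    # placeholder
    "conditional_put": "etag",    # ETag-based conditional writes; no lock needed
}

# Usage against a real bucket (not executed here):
# from deltalake import DeltaTable
# dt = DeltaTable("s3://my-bucket/my-table", storage_options=storage_options)
```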
| Operation | Rust | Python | Description |
|---|---|---|---|
| Create | | | Create a new table |
| Read | | | Read data from a table |
| Vacuum | | | Remove unused files and log entries |
| Delete - partitions | | | Delete a table partition |
| Delete - predicates | | | Delete data based on a predicate |
| Optimize - compaction | | | Harmonize the size of data files |
| Optimize - Z-order | | | Place similar data into the same file |
| Merge | | | Merge a target Delta table with source data |
| FS check | | | Remove corrupted files from table |
| Reader Version | Requirement | Status |
|---|---|---|
| Version 2 | Column Mapping | |
| Version 3 | Table Features (requires reader V7) | |