
Seafowl


Home page | Docs | Benchmarks | Demo | Nightly builds | Download

✨✨✨ Seafowl now proudly powers the EDB Postgres Lakehouse ✨✨✨

Seafowl is an analytical database for modern data-driven Web applications.

Its CDN and HTTP cache-friendly query execution API lets you deliver data to your visualizations, dashboards and notebooks by running SQL straight from the user's browser.

Features

Fast analytics...

Seafowl is built around Apache DataFusion, a fast and extensible query execution framework. It uses Apache Parquet columnar storage, adhering to the Delta Lake protocol, making it perfect for analytical workloads.

For SELECT queries, Seafowl supports a large subset of the PostgreSQL dialect. If there's something missing, you can write a user-defined function for Seafowl in anything that compiles to WebAssembly.

In addition, you can write data to Seafowl by uploading a CSV or Parquet file to it over HTTP, or by running standard SQL statements like CREATE TABLE and INSERT.

...at the edge

Seafowl is designed to be deployed to modern serverless environments. It ships as a single binary, making it simple to run anywhere.

Seafowl's architecture is inspired by modern cloud data warehouses like Snowflake or BigQuery, letting you separate storage and compute. You can store Seafowl data in an object store like S3 or MinIO and scale to zero. Or, you can build a self-contained Docker image with Seafowl and your data, letting you deploy your data to any platform that supports Docker.
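As a rough sketch of what the object-store setup can look like, the following runs the Seafowl binary against a local MinIO server using the same double-underscore environment-variable overrides as the quickstart below. The SEAFOWL__OBJECT_STORE__* key names and values here are assumptions to check against the configuration reference:

# Illustrative only: these variables mirror an [object_store] section in seafowl.toml;
# verify the exact key names in the Seafowl configuration docs.
# Assumes a MinIO server is already running on localhost:9000 with a "seafowl" bucket.
SEAFOWL__OBJECT_STORE__TYPE=s3 \
SEAFOWL__OBJECT_STORE__ENDPOINT=http://localhost:9000 \
SEAFOWL__OBJECT_STORE__BUCKET=seafowl \
SEAFOWL__OBJECT_STORE__ACCESS_KEY_ID=minioadmin \
SEAFOWL__OBJECT_STORE__SECRET_ACCESS_KEY=minioadmin \
./seafowl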

Seafowl's query execution API follows HTTP cache semantics. This means you can put Seafowl behind a CDN like Cloudflare or a cache like Varnish and have query results cached and delivered to your users in milliseconds. Even without a cache, you can get the benefits of caching query results in your user's browser.
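As an example, here is a sketch of the cache-friendly GET flavour of the API: the query text is hashed into the URL and also sent in a header, so identical queries map to identical URLs that a CDN or browser cache can serve. The /q/[hash] scheme and the X-Seafowl-Query header follow the Seafowl docs, but treat the details below as an approximation to verify there:

# Hash-addressed GET request: the same query always yields the same URL,
# so repeated queries can be answered by an intermediate cache.
QUERY="SELECT COUNT(*) AS trips FROM tripdata"
HASH=$(printf '%s' "$QUERY" | sha256sum | cut -d ' ' -f 1)
curl -i -H "X-Seafowl-Query: $QUERY" "localhost:8080/q/$HASH"

Queries containing newlines or other characters that are not valid in an HTTP header generally need to be URL-encoded before being placed in X-Seafowl-Query.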

Quickstart

Start Seafowl:

docker run --rm -p 8080:8080 \
    -e SEAFOWL__FRONTEND__HTTP__WRITE_ACCESS=any \
    splitgraph/seafowl:nightly

Or download it from the releases page and run it without Docker:

SEAFOWL__FRONTEND__HTTP__WRITE_ACCESS=any ./seafowl
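In either case, you can verify the server is up by POSTing a trivial query to the same /q endpoint used in the rest of this quickstart:

curl -i -H "Content-Type: application/json" localhost:8080/q \
  -d '{"query": "SELECT 1"}'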

Add a Parquet dataset from HTTP:

curl -i -H "Content-Type: application/json" localhost:8080/q -d@- <<EOF
{"query": "CREATE EXTERNAL TABLE tripdata \
STORED AS PARQUET \
LOCATION 'https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2022-01.parquet'; \
CREATE TABLE tripdata AS SELECT * FROM staging.tripdata;"}
EOF

Run a query:

curl -i -H "Content-Type: application/json" localhost:8080/q \
  -d@- <<EOF
{"query": "SELECT \
    EXTRACT(hour FROM tpep_dropoff_datetime) AS hour, \
    COUNT(*) AS trips, \
    SUM(total_amount) AS total_amount, \
    AVG(tip_amount / total_amount) AS tip_fraction \
  FROM tripdata \
  WHERE total_amount != 0 \
  GROUP BY 1 \
  ORDER BY 4 DESC"}
EOF

{"hour":21,"trips":109685,"total_amount":2163599.240000029,"tip_fraction":0.12642660660636984}
{"hour":22,"trips":107252,"total_amount":2154126.55000003,"tip_fraction":0.12631676747865359}
{"hour":19,"trips":159241,"total_amount":3054993.040000063,"tip_fraction":0.1252992155287979}
{"hour":18,"trips":183020,"total_amount":3551738.5100000845,"tip_fraction":0.1248666037263193}
{"hour":20,"trips":122613,"total_amount":2402858.8600000343,"tip_fraction":0.12414978866883832}
{"hour":1,"trips":45485,"total_amount":940333.4000000034,"tip_fraction":0.12336981088023881}
...

CLI

Seafowl also provides a CLI for frictionless prototyping, troubleshooting and testing of its core features:

$ ./seafowl --cli -c /path/to/seafowl.toml
default> CREATE TABLE t
AS VALUES
(1, 'one'),
(2, 'two');
Time: 0.021s
default> SELECT * FROM t;
+---------+---------+
| column1 | column2 |
+---------+---------+
| 1       | one     |
| 2       | two     |
+---------+---------+
Time: 0.009s
default> \d t
+---------------+--------------+------------+-------------+-----------+-------------+
| table_catalog | table_schema | table_name | column_name | data_type | is_nullable |
+---------------+--------------+------------+-------------+-----------+-------------+
| default       | public       | t          | column1     | Int64     | YES         |
| default       | public       | t          | column2     | Utf8      | YES         |
+---------------+--------------+------------+-------------+-----------+-------------+
Time: 0.005s
default> \q
$

It does so by bypassing Seafowl's primary HTTP interface, which requires properly formatted HTTP requests, authentication and a working network setup, and can be too tedious for a quick manual interactive session.

Documentation

See the documentation for more guides and examples, including a longer tutorial.

Tests

Please consult the dedicated README for more info on how to run the Seafowl test suite locally.

Pre-built binaries and Docker images

We do not yet provide full build instructions, but we do produce binaries and Docker images as prebuilt artifacts.

Release builds

You can find release binaries on our releases page.

Nightly builds

We produce nightly binaries after every merge to main. You can find them in GitHub Actions artifacts (only if you're logged in, see this issue) or via nightly.link.

Docker images

We produce Docker images on every merge to main.

Long-term feature roadmap

There are many features we're planning for Seafowl. Where appropriate, we'll also aim to upstream these changes into DataFusion itself.

Support for JSON functions and storage

We're planning to add a JSON datatype to Seafowl, as well as a suite of functions for manipulating and accessing JSON data, similar to the JSON functions supported by PostgreSQL.
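For illustration, these are PostgreSQL statements showing the kind of JSON support being referenced, not something Seafowl runs today:

-- PostgreSQL JSON access and manipulation (not yet available in Seafowl):
SELECT '{"name": "Seafowl", "tags": ["sql", "wasm"]}'::jsonb -> 'tags' ->> 0;  -- 'sql'
SELECT jsonb_extract_path_text('{"a": {"b": 2}}', 'a', 'b');                   -- '2'
SELECT jsonb_set('{"a": 1}'::jsonb, '{a}', '42');                              -- '{"a": 42}'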

PostgreSQL-compatible endpoint

This will make Seafowl queryable by existing BI tools like Metabase/Superset/Looker.