Swarm DB
A decentralized database running on Swarm with Beeson schemas
Summary
We propose a novel way to use Beeson schemas to build a database that is able to query, compute and store data with schemas or models.

Guide-level explanation
We take inspiration from ParkyDB, FoundationDB and SQLite VFS to design a database that can be used as key-value storage, that is queryable with a new or existing query language, and that enables fast computation that can be verified with ZK-SNARK proofs.
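To make the intended shape concrete, here is a minimal sketch of the surface such a database could expose. Every name and signature below is hypothetical; it only illustrates the three concerns (key-value storage, queries, verifiable computation).

```typescript
// Hypothetical sketch of the Beeson DB surface area; all names are illustrative.
// A Swarm reference (chunk/feed address) is represented here as a hex string.
type Reference = string;

interface BeesonDB {
  // Key-value layer backed by Swarm feeds (Layer 0).
  put(key: string, value: unknown): Promise<Reference>;
  get(key: string): Promise<unknown>;

  // Query layer driven by Beeson schemas and indexes (Layer 1).
  query(statement: string): Promise<unknown[]>;

  // Computation layer: results can carry a ZK-SNARK proof that the
  // computation was performed correctly (Layer 2).
  verify(result: unknown, proof: Uint8Array): Promise<boolean>;
}
```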
Beeson DB Layers

Layer 0 - Swarm Feeds
Streaming or sequential feeds are used for storage.
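A minimal sketch of how a Layer 0 adapter could append and read records over a sequential feed follows. FeedWriter and FeedReader are assumed stand-ins for a Swarm feeds client (eg bee-js); the signatures here are illustrative, not the real library API.

```typescript
// Illustrative Layer 0 adapter over a sequential (append-only) feed.
// FeedWriter/FeedReader are assumed interfaces, not an actual client API.
interface FeedWriter {
  append(payload: Uint8Array): Promise<string>; // returns the update reference
}
interface FeedReader {
  at(index: number): Promise<Uint8Array>;       // read the n-th update
  latestIndex(): Promise<number>;
}

class SequentialFeedStore {
  constructor(private writer: FeedWriter, private reader: FeedReader) {}

  // Each write becomes the next update in the sequence.
  async put(record: object): Promise<string> {
    const payload = new TextEncoder().encode(JSON.stringify(record));
    return this.writer.append(payload);
  }

  // Reads are addressed by sequence index, which keeps history intact.
  async get(index: number): Promise<object> {
    const payload = await this.reader.at(index);
    return JSON.parse(new TextDecoder().decode(payload));
  }
}
```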
Layer 1 - Query and Index

Indexing must be stored as a Bloom or Lucene-like index, and it must be specified in a Beeson schema (eg index.beeson, schema_model.beeson).
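Purely as an illustration (the normative grammar is whatever the Beeson schema specification defines, so the field names here are assumptions), an index declaration could name the covered schema, the index kind, and the indexed properties:

```typescript
// Illustrative contents of an index.beeson declaration, expressed as a
// TypeScript literal. Field names are assumptions, not the normative grammar.
const articleIndex = {
  schema: 'blog.article',          // the schema_model.beeson this index covers
  kind: 'bloom',                   // 'bloom' or a Lucene-like inverted index
  properties: ['author', 'date'],  // properties indexed for lookups
} as const;
```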
Queries must be linkable, eg Alice has a blog with id xyz containing three articles: select the first article's link CID, fetch it, and query it for its date. A new query language is required to traverse, fetch, and patch when the database is queried.
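To make the Alice example concrete, a client-side sketch of how the three verbs could compose is shown below. The query syntax and the QueryClient shape are hypothetical, since the language itself is still to be designed.

```typescript
// Hypothetical client sketch for the (yet to be designed) query language.
// The verbs and syntax below are illustrative only.
type QueryClient = { query(statement: string): Promise<unknown[]> };

async function firstArticleDate(db: QueryClient, blogId: string): Promise<string> {
  // traverse: walk Alice's blog document to the link (CID) of the first article
  const [firstArticleCid] = (await db.query(
    `traverse blog:${blogId} -> articles[0].link`
  )) as string[];

  // fetch: resolve the linked article and read its date property
  const [date] = (await db.query(
    `fetch ${firstArticleCid} -> date`
  )) as string[];

  // patch: mutations would use the same language, eg
  // `patch ${firstArticleCid} set reviewed = true`
  return date;
}
```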
Layer 2 - Computation

There are two or more types of computation, starting with the most common:

Query computation: A simple query is sent to nodes and executed. The execution engine must estimate gas consumption and return the cost before execution. Query computation covers the verbs traverse, fetch, and patch (see the sketch after this list).
Batch / Job computation: These are long-running operations that require job schedules to keep track of them. They may use nodes with specific hardware or modules (eg MPC, TPM, GPU). To enable this design requirement, EIP-3668 and EIP-5559 will be used.

Keeper computation: Keeper jobs are the chain-ops equivalent of cron jobs, but for smart contracts.
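As a sketch of the estimate-then-execute flow referenced in the query computation item above (all interfaces and names are hypothetical):

```typescript
// Illustrative estimate-then-execute flow for query computation.
// ExecutionEngine is a hypothetical node-side interface.
interface ExecutionEngine {
  estimateGas(statement: string): Promise<bigint>; // cost returned before execution
  execute(statement: string): Promise<unknown[]>;
}

async function runQuery(
  engine: ExecutionEngine,
  statement: string,
  maxGas: bigint
): Promise<unknown[]> {
  // The engine must return the estimated cost before any execution happens.
  const cost = await engine.estimateGas(statement);
  if (cost > maxGas) {
    throw new Error(`query rejected: estimated gas ${cost} exceeds budget ${maxGas}`);
  }
  return engine.execute(statement);
}
```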
Required technology components

Beeson support for streaming feeds.
Query language for Swarm Feeds / Beeson.
Router or exchange that handles queries and responses for link requests.
A new node specification for long-running queries that enables custom hardware setups and that stores results in Swarm.
ZK-SNARK verifier support in Swarm.
A job tooling library that compiles to WASM.
Smart contracts where necessary.
A transpiler or converter from IPLD to Beeson schemas.

Copyright
Copyright and related rights waived via CC0.
Author
@molekilla (Rogelio Morrell)