
Spacemesh API

Protobuf implementation of the Spacemesh API. This repository contains only the API design, not its implementation. For the implementation, see go-spacemesh. Note that the implementation may lag behind the API design.

Design

The API was designed with the following considerations in mind.

Mesh vs. global state

In Spacemesh, "the mesh" refers to data structures that are explicitly stored by all full nodes and are subject to consensus. This consists of transactions, collated into blocks, which in turn are collated into layers. Note that, in addition to transactions, blocks contain metadata such as layer number and signature. The mesh also includes ATXs (activations).

By contrast, "global state" refers to data structures that are calculated implicitly based on mesh data. These data are not explicitly stored anywhere in the mesh. Global state includes account state (balance, counter/nonce value, and, for smart contract accounts, code), transaction receipts, and smart contract event logs. These data need not be stored indefinitely by all full nodes (although they should be stored indefinitely by archive nodes).

The API provides access to both types of data, but they are divided into different API services. For more information on this distinction, see SMIP-0003: Global state data, STF, APIs, as well as the MeshService and the GlobalStateService.
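
To make the distinction concrete, here is a purely illustrative Go sketch of the two data families described above. These are hand-written stand-in types, not the proto-generated ones: data of the first shape is served by the MeshService, and data of the second shape by the GlobalStateService.

    package main

    import "fmt"

    // Mesh data: explicitly stored by all full nodes and subject to consensus.
    type Transaction struct{ ID string }

    type Block struct {
        LayerNumber uint64 // block metadata, stored alongside the transactions
        Signature   []byte
        Txs         []Transaction
    }

    type Layer struct{ Blocks []Block }

    // Global state: computed from mesh data by the STF, not stored in the mesh.
    type Account struct {
        Balance uint64
        Counter uint64 // nonce
        Code    []byte // smart contract accounts only
    }

    func main() {
        layer := Layer{Blocks: []Block{{LayerNumber: 7, Txs: []Transaction{{ID: "tx1"}}}}}
        account := Account{Balance: 100, Counter: 1}
        fmt.Printf("mesh: %d block(s) in layer; state: balance %d\n",
            len(layer.Blocks), account.Balance)
    }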

Transactions

Transactions span mesh and global state data. They are submitted to a node, which may or may not admit the transaction to its mempool. If the transaction is admitted to the mempool, it will probably end up being added to a newly-mined block, and that block will be submitted to the mesh in some layer. After that, the layer containing the block will eventually be approved, and then confirmed, by the consensus mechanism. After the layer is approved, the transaction will be run through the STF (state transition function), and if it succeeds, it may update global state.

Since transactions span multiple layers of abstraction, the API exposes transaction data in its own service, TransactionService.
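
As a purely illustrative summary of the lifecycle above, the following Go sketch enumerates the stages a transaction passes through. The stage names are stand-ins chosen for this example; the API's actual transaction state types are defined in the proto files.

    package main

    import "fmt"

    // TxStage is an illustrative stand-in for the lifecycle stages described
    // above, not the API's own transaction state type.
    type TxStage int

    const (
        Submitted TxStage = iota // sent to a node
        Mempool                  // admitted to the node's mempool
        InBlock                  // added to a newly-mined block
        InLayer                  // the block is submitted to the mesh in some layer
        Approved                 // the layer is approved by the consensus mechanism
        Confirmed                // the layer is confirmed
        Applied                  // run through the STF; may update global state
    )

    var stageNames = [...]string{
        "submitted", "mempool", "in block", "in layer", "approved", "confirmed", "applied",
    }

    func (s TxStage) String() string { return stageNames[s] }

    func main() {
        for s := Submitted; s <= Applied; s++ {
            fmt.Printf("%d. %s\n", int(s)+1, s)
        }
    }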

Types of endpoints

Broadly speaking, there are four types of endpoints: simple (return data the node already has on hand), command (instruct the node to perform an action), query (read current or historical data on demand), and stream (subscribe to new data as the node produces them). Note that in some cases, the same data are exposed through multiple endpoints, e.g., both a query and a stream endpoint.
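
The Go interfaces below sketch the call shape of each endpoint type, mirroring how grpc-go exposes unary and server-streaming methods. All method and message names here are placeholders invented for illustration, not the real API.

    package sketch

    import "context"

    // Placeholder message types, standing in for proto-generated ones.
    type (
        StatusRequest  struct{}
        StatusResponse struct{}
        ActionRequest  struct{}
        ActionResponse struct{}
        DataQuery      struct{ MaxResults uint32 }
        DataItem       struct{}
        QueryResponse  struct{ Items []*DataItem }
    )

    // Simple: return data that the node already has on hand.
    type SimpleEndpoint interface {
        Status(ctx context.Context, in *StatusRequest) (*StatusResponse, error)
    }

    // Command: ask the node to perform an action, returning its outcome.
    type CommandEndpoint interface {
        DoAction(ctx context.Context, in *ActionRequest) (*ActionResponse, error)
    }

    // Query: read a (possibly large, possibly historical) data set in one call.
    type QueryEndpoint interface {
        DataQuery(ctx context.Context, in *DataQuery) (*QueryResponse, error)
    }

    // Stream: subscribe once, then receive items as the node produces them,
    // following the Recv() pattern of grpc-go server-streaming clients.
    type StreamClient interface {
        Recv() (*DataItem, error) // blocks until the next item arrives
    }

    type StreamEndpoint interface {
        DataStream(ctx context.Context, in *DataQuery) (StreamClient, error)
    }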

Services

The Spacemesh API consists of several logical services, each of which contains a set of one or more RPC endpoints. The node operator can enable or disable each service independently using the CLI. The services referenced in this document include NodeService, MeshService, GlobalStateService, and TransactionService; see the service definition files for the full, current set.

Each of these services relies on one or more sets of message types, which live in *types.proto files in the same directory as the service definition files.

Intended Usage Pattern

Mesh data processing flow

  1. Client starts a full node with one or more relevant GRPC endpoints enabled
  2. Client subscribes to the streaming GRPC api methods that are of interest
  3. Client calls NodeService.SyncStart() to request that the node start syncing. (At present, sync is on by default and this step is unnecessary; in the future, it will be possible to start the node with sync turned off, so that the client can subscribe to streams before the sync process begins and is guaranteed not to miss any data.)
  4. Client processes streaming data it receives from the node
  5. Client monitors the node using NodeService.SyncStatusStream() and NodeService.ErrorStream(), handling critical node errors and returning to step 1 as necessary.
  6. Client gracefully shuts down the node by calling NodeService.Shutdown() when it is done processing data.
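
Below is a minimal Go sketch of this flow against the NodeService. The import path, request message names, and port are assumptions inferred from the golang build target; check the generated code in this repository for the exact identifiers.

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"

        // Assumed import path; check the golang build in this repository.
        pb "github.com/spacemeshos/api/release/go/spacemesh/v1"
    )

    func main() {
        ctx := context.Background()

        // Step 1 happens outside this program: start a full node with the
        // relevant GRPC endpoints enabled, then connect to it (port assumed).
        conn, err := grpc.Dial("localhost:9092", grpc.WithInsecure())
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        node := pb.NewNodeServiceClient(conn)

        // Step 2: subscribe to the streams of interest before sync begins.
        status, err := node.SyncStatusStream(ctx, &pb.SyncStatusStreamRequest{})
        if err != nil {
            log.Fatal(err)
        }

        // Step 3: ask the node to start syncing.
        if _, err := node.SyncStart(ctx, &pb.SyncStartRequest{}); err != nil {
            log.Fatal(err)
        }

        // Steps 4-5: process streaming data and monitor the node; a real
        // client would also watch NodeService.ErrorStream() and return to
        // step 1 on critical errors.
        for {
            msg, err := status.Recv()
            if err != nil {
                log.Fatal(err)
            }
            log.Printf("sync status: %v", msg)
        }

        // Step 6 (unreachable in this sketch): graceful shutdown when done.
        // _, _ = node.Shutdown(ctx, &pb.ShutdownRequest{})
    }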

Development

Versioning

We use standard semantic versioning. Please cut releases against the master branch regularly and increment the version accordingly. Releases are managed on the repository's Releases page; the current version line is 1.x. Regular releases are especially important for downstream code that relies on individual builds, such as the golang build.

Build targets

This repository currently contains builds for two targets: golang and grpc-gateway. Every time a protobuf definition file is changed, you must update the build and include the updated build files with your PR in order to keep everything in sync. You can check this at any time by running make check, and it's also enforced by CI (see below for more information).

Makefile

The repository includes a Makefile that makes it easy to run most regular tasks, such as the make check target mentioned above.

Under the hood, it uses a helpful tool called buf.

Buf

In addition to running make commands, you can also manually use the buf tool to compile the API to an image. First, install buf, then run:

> buf image build -o /dev/null

to test the build. To output the image in JSON format, run:

> buf image build --exclude-source-info -o -#format=json

Breaking changes detection

buf also supports detection of breaking changes. To do this, first create an image from the current state:

> buf image build -o image.bin

Make a breaking change, then check the changed definitions against the saved image:

> buf breaking --against-input image.bin

buf will report all breaking changes.

Linting

buf runs several linters. It's pretty strict about things such as naming conventions, to prevent downstream issues in the various languages and frameworks that rely upon the protobuf definition files. You can run the linter like this:

> buf lint

If there are no issues, this command should have exit code 0 and no output.

For more information on linting, see the style guide. For more information on the difference between the buf tool and the protoc compiler, see Use protoc input instead of the internal compiler.

Continuous integration

This repository has a continuous integration (CI) workflow built on GitHub Actions. In addition to linting and breaking changes detection, it also runs the protoc compiler, since that tends to surface a slightly different set of warnings and errors than buf.

You can use a nifty tool called act to run the CI workflow locally, although it doesn't always play nice with our workflow configuration.