:warning: The team that worked on this project has spun out of Gnosis and continues development in a forked repo (01-04-2022), available at https://github.com/cowprotocol/services
This repository contains backend code for Gnosis Protocol V2 written in Rust.
The `orderbook` crate provides the HTTP API through which users (usually through a frontend web application) interact with the order book.
Users can add signed orders to the order book and query the state of their orders.
They can also use the API to estimate fee amounts and limit prices before placing their order.
Solvers also interact with the order book by querying a list of open orders that they can attempt to settle.
The API is documented with OpenAPI. A simple example script that uses the API to place random orders can be found in this repo.
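For illustration, fetching the state of a single order might look like the following call against the hosted mainnet API (the endpoint path is assumed from the OpenAPI documentation; `<ORDER_UID>` is a placeholder):

```sh
# Query the current state of an order by its UID
curl "https://api.cow.fi/mainnet/api/v1/orders/<ORDER_UID>"
```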
The order book service itself uses PostgreSQL as a backend to persist orders. In addition to connecting the HTTP API to the database, it also checks order validity based on block time, trade events, and ERC20 funding and approvals, so that solvers can query only valid orders.
The `solver` crate is responsible for submitting on-chain settlements based on the orders it gets from the order book and other liquidity sources like Balancer or Uniswap pools.
It implements a few settlement strategies directly in Rust.
It can also interact with a more advanced, Gnosis internal, closed source solver which tries to settle all orders using the combinatorial optimization formulations described in Multi-Token Batch Auctions with Uniform Clearing Price.
Several pieces of functionality are shared between the order book and the solver. They live in other crates in the cargo workspace.
- `contract` provides ethcontract-rs based smart contract bindings
- `model` provides the serialization model for orders in the order book API
- `shared` provides other shared functionality between the solver and order book

The CI runs unit tests, e2e tests, `clippy` and `cargo fmt`.
Run the unit tests with:

```sh
cargo test
```

Run the tests that require a database with:

```sh
cargo test --jobs 1 -- --ignored --test-threads 1 --skip http_solver
```

Note: Requires a postgres database running (see below).
Run the e2e tests with:

```sh
cargo test -p e2e
```

Note: Requires a postgres database and a local test network with smart contracts deployed (see below).

Run clippy with:

```sh
cargo clippy --all-features --all-targets -- -D warnings
```
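The CI also checks formatting with cargo fmt. A local equivalent (the exact flags the CI uses may differ) is:

```sh
# Fail if any file is not formatted according to rustfmt
cargo fmt --all -- --check
```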
The tests that require postgres connect to the default database of a locally running postgres instance on the default port. There are several ways to set up postgres:
Using Docker:

```sh
docker run -d -e POSTGRES_HOST_AUTH_METHOD=trust -e POSTGRES_USER=`whoami` -p 5432:5432 docker.io/postgres
```
Using a local installation managed by systemd:

```sh
sudo systemctl start postgresql.service
sudo -u postgres createuser $USER
sudo -u postgres createdb $USER
```
Manual setup in a local directory:

```sh
mkdir postgres && cd postgres
initdb data # Arbitrary directory that stores the database
# In data/postgresql.conf set unix_socket_directories to the absolute path of an arbitrary existing
# and writable directory in which postgres creates a temporary file.
# Run postgres
postgres -D data
# In another terminal, only for first time setup
createdb -h localhost $USER
```
At this point the database should be running and reachable. You can test connecting to it with:

```sh
psql postgresql://localhost/
```
Finally, we need to apply the schema, which is set up in the `database` folder. Again, this can be done via docker or locally:
Via docker:

```sh
docker build --tag gp-v2-migrations -f docker/Dockerfile.migration .
# If you are running postgres locally, your URL is `localhost` instead of `host.docker.internal`
docker run -ti -e FLYWAY_URL="jdbc:postgresql://host.docker.internal/?user="$USER"&password=" -v $PWD/database/sql:/flyway/sql gp-v2-migrations migrate
```
In case you run into `java.net.UnknownHostException: host.docker.internal`, add `--add-host=host.docker.internal:host-gateway` right after `docker run`.
If you're combining a local postgres installation with docker flyway, you have to add `--network host` to the above and change `host.docker.internal` to `localhost`.
Locally, with flyway installed:

```sh
flyway -user=$USER -password="" -locations="filesystem:database/sql/" -url=jdbc:postgresql:/// migrate
```
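One way to sanity-check that the migrations were applied is to list the resulting tables:

```sh
# \dt lists all tables in the default database
psql postgresql://localhost/ -c '\dt'
```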
In order to run the `e2e` tests you have to have a testnet running locally.
Due to the RPC calls the services issue, Ganache is incompatible, so we use hardhat instead.

Install hardhat:

```sh
npm install --save-dev hardhat
```

Create a `hardhat.config.js` in the directory you installed hardhat in with the following content:
```js
module.exports = {
  networks: {
    hardhat: {
      initialBaseFeePerGas: 0,
      accounts: {
        accountsBalance: "1000000000000000000000000"
      }
    }
  }
};
```
Then start the node:

```sh
npx hardhat node
```
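To verify that the node is up (hardhat listens on port 8545 by default), you can issue a plain JSON-RPC call:

```sh
# Ask the local node for its current block number
curl -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8545
```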
Reading the state of the blockchain requires issuing RPC calls to an ethereum node. This can be a testnet you are running locally, a "real" node you have access to, or, most conveniently, a third party service like Infura, which is what we recommend.
After you have made a free Infura account, they offer you "endpoints" for mainnet and different testnets. We will refer to these as `node-url`s.
Because Gnosis only runs their services on mainnet, rinkeby and gnosis chain, you need to select one of those.
Note that the `node-url` is sensitive data. The `orderbook` and `solver` executables allow you to pass it with the `--node-url` parameter. This is very convenient for our examples, but to minimize the possibility of sharing this information by accident, you should consider setting the `NODE_URL` environment variable so you don't have to pass the `--node-url` argument to the executables.
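For example (the URL below is just the usual Infura endpoint shape; substitute your own project id):

```sh
# Keeps the sensitive URL out of your shell history for later commands
export NODE_URL="https://mainnet.infura.io/v3/<YOUR_PROJECT_ID>"
```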
To avoid confusion during your tests, always double check that the token and account addresses you use actually correspond to the network of the `node-url` you are running the executables with.
To see all supported command line arguments, run `cargo run --bin orderbook -- --help`.

Run an orderbook on `localhost:8080` with:
```sh
cargo run --bin orderbook -- \
  --skip-trace-api true \
  --skip-event-sync \
  --node-url <YOUR_NODE_URL>
```
`--skip-event-sync` will skip some work to speed up the initialization process.

`--skip-trace-api true` will make the orderbook compatible with more ethereum nodes. If your node supports `trace_callMany`, you can drop this argument.
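Once the orderbook is running, you can query it locally, for example for the list of currently solvable orders (the endpoint path is assumed from the OpenAPI documentation; adjust it if your version differs):

```sh
# List open orders that solvers could currently attempt to settle
curl http://localhost:8080/api/v1/solvable_orders
```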
To see all supported command line arguments, run `cargo run --bin solver -- --help`.

Run a solver which is connected to an orderbook at `localhost:8080` with:
```sh
cargo run -p solver -- \
  --solver-account 0xa6DDBD0dE6B310819b49f680F65871beE85f517e \
  --transaction-strategy DryRun \
  --node-url <YOUR_NODE_URL>
```
`--transaction-strategy DryRun` will make the solver only print the solution and not submit it on-chain. This command is absolutely safe and will not use any funds.

The `solver-account` is responsible for signing transactions. Solutions for settlements need to come from an address the settlement contract trusts in order for the contract to actually consider the solution. If we pass a public address, like we do here, the solver only pretends to use it for testing purposes. To actually submit transactions on behalf of a solver account, you would have to pass the private key of an account the settlement contract trusts instead. Adding your personal solver account is quite involved and requires you to get in touch with the team, so we are using this public solver address for now.
To make things more interesting and see some real orders, you can connect the solver to one of our real orderbook services. There are several orderbooks for production and staging environments on different networks. Find the `orderbook-url` corresponding to your `node-url` which suits your purposes and connect your solver to it with `--orderbook-url <URL>`.
| Orderbook URL | Network | Environment |
|---|---|---|
| https://barn.api.cow.fi/mainnet | Mainnet | Staging |
| https://api.cow.fi/mainnet | Mainnet | Production |
| https://barn.api.cow.fi/rinkeby | Rinkeby | Staging |
| https://api.cow.fi/rinkeby | Rinkeby | Production |
| https://barn.api.cow.fi/xdai | Gnosis Chain | Staging |
| https://api.cow.fi/xdai | Gnosis Chain | Production |
Always make sure that the `solver` and the `orderbook` it connects to are configured to use the same network.
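For example, a dry run against the production mainnet orderbook (your `node-url` must point at mainnet as well) could look like this:

```sh
cargo run -p solver -- \
  --solver-account 0xa6DDBD0dE6B310819b49f680F65871beE85f517e \
  --transaction-strategy DryRun \
  --orderbook-url https://api.cow.fi/mainnet \
  --node-url <YOUR_MAINNET_NODE_URL>
```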
To conveniently submit orders, check out the CowSwap frontend and point it to your local instance.