This repository is for the SafeStaking-by-HOPR audit competition. To participate, submit your findings only via the on-chain submission process at https://app.hats.finance/vulnerability.
We look forward to seeing your findings.
A project by the HOPR Association
HOPR is a privacy-preserving messaging protocol which enables the creation of a secure communication network via relay nodes powered by economic incentives using digital tokens.
A good place to start is the Getting Started guide on YouTube, which walks through the following instructions using GitPod.
The following instructions show how the latest community release may be installed. Adapt them if you want to use the latest development release or an older release.
The preferred installation method is via Docker.
All our Docker images can be found in our Google Cloud Container Registry.
Each image is prefixed with `gcr.io/hoprassociation/$PROJECT:$RELEASE`.
The `latest` tag represents the `master` branch, while the `providence` tag represents the most recent stable `release/*` branch.
You can pull the Docker image like so:
docker pull gcr.io/hoprassociation/hoprd:providence
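If you want to run the current development build instead, the same pattern applies with the `latest` tag (which, as noted above, tracks the `master` branch and is therefore less stable):

# pull the development build from the master branch
docker pull gcr.io/hoprassociation/hoprd:latest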
For ease of use you can set up a shell alias to run the latest release as a docker container:
alias hoprd='docker run --pull always -m 2g -ti -v ${HOPRD_DATA_DIR:-$HOME/.hoprd-db}:/app/db -p 9091:9091/tcp -p 9091:9091/udp -p 3001:3001 gcr.io/hoprassociation/hoprd:providence'
IMPORTANT: Using the above command will map the database folder used by hoprd to a local folder called `.hoprd-db` in your home directory. You can customize the location of that folder by executing the following command:
HOPRD_DATA_DIR=${HOME}/.hoprd-better-db-folder eval hoprd
All ports are also mapped to your localhost, assuming you stick to the default port numbers.
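To verify that the container is up and the ports are mapped as expected, here is a quick sanity check with standard Docker tooling (assuming the image tag from the alias above):

# list running hoprd containers together with their port mappings
docker ps --filter "ancestor=gcr.io/hoprassociation/hoprd:providence"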
NOTE: This setup should only be used for development, or if you know what you are doing and don't need further support. Otherwise you should use the `npm` or `docker` setup.
You will need to clone and initialize the `hoprnet` repo first:
git clone https://github.com/hoprnet/hoprnet
cd hoprnet
make init
If you have direnv set up properly, your `nix-shell` will be configured automatically upon entering the `hoprnet` directory and enabling it via `direnv allow`. Otherwise you must enter the `nix-shell` manually:
nix develop
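For reference, the direnv route mentioned above boils down to a one-time approval per checkout (a minimal sketch, assuming the repository ships an `.envrc` that loads the Nix environment):

cd hoprnet
# one-time: trust the repository's .envrc so direnv can load the Nix shell automatically
direnv allow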
Now you may follow the instructions in Develop.
Alternatively you may use a development Docker container which uses the same Nix setup.
make run-docker-dev
The `hoprd` binary provides various command-line switches to configure its behaviour. For reference, these are documented here as well:
$ hoprd --help
Options:
--network <NETWORK>
Network id which the node shall run on [env: HOPRD_NETWORK=] [possible values: anvil-localhost, rotsee, debug-staging, anvil-localhost2, monte_rosa]
--identity <identity>
The path to the identity file [env: HOPRD_IDENTITY=] [default: <IDENTITY_DIR>]
--data <data>
manually specify the data directory to use [env: HOPRD_DATA=] [default: <DATA_DIR>]
--host <HOST>
Host to listen on for P2P connections [env: HOPRD_HOST=] [default: 0.0.0.0:9091]
--announce
Run as a Public Relay Node (PRN) [env: HOPRD_ANNOUNCE=]
--api
Expose the API on localhost:3001 [env: HOPRD_API=]
--apiHost <HOST>
Set host IP to which the API server will bind [env: HOPRD_API_HOST=] [default: localhost]
--apiPort <PORT>
Set port to which the API server will bind [env: HOPRD_API_PORT=] [default: 3001]
--apiToken <TOKEN>
A REST API token used for user authentication [env: HOPRD_API_TOKEN=]
--healthCheck
Run a health check end point on localhost:8080 [env: HOPRD_HEALTH_CHECK=]
--healthCheckHost <HOST>
Updates the host for the healthcheck server [env: HOPRD_HEALTH_CHECK_HOST=] [default: localhost]
--healthCheckPort <PORT>
Updates the port for the healthcheck server [env: HOPRD_HEALTH_CHECK_PORT=] [default: 8080]
--password <PASSWORD>
A password to encrypt your keys [env: HOPRD_PASSWORD=]
--defaultStrategy <DEFAULT_STRATEGY>
Default channel strategy to use after node starts up [env: HOPRD_DEFAULT_STRATEGY=] [default: passive] [possible values: promiscuous, passive, random]
--maxAutoChannels <MAX_AUTO_CHANNELS>
Maximum number of channels a strategy can open. If not specified, the square root of the number of available peers is used. [env: HOPRD_MAX_AUTO_CHANNELS=]
--disableTicketAutoRedeem
Disables automatic redeeming of winning tickets. [env: HOPRD_DISABLE_AUTO_REDEEEM_TICKETS]
--disableUnrealizedBalanceCheck
Disables checking of unrealized balance before validating unacknowledged tickets. [env: HOPRD_DISABLE_UNREALIZED_BALANCE_CHECK]
--provider <PROVIDER>
A custom RPC provider to be used for the node to connect to blockchain [env: HOPRD_PROVIDER=]
--dryRun
List all the options used to run the HOPR node, but quit instead of starting [env: HOPRD_DRY_RUN=]
--init
initialize a database if it doesn't already exist [env: HOPRD_INIT=]
--forceInit
initialize a database, even if it already exists [env: HOPRD_FORCE_INIT=]
--testAnnounceLocalAddresses
For testing local testnets. Announce local addresses [env: HOPRD_TEST_ANNOUNCE_LOCAL_ADDRESSES=]
--heartbeatInterval <MILLISECONDS>
Interval in milliseconds in which the availability of other nodes gets measured [env: HOPRD_HEARTBEAT_INTERVAL=] [default: 60000]
--heartbeatThreshold <MILLISECONDS>
Timeframe in milliseconds after which a heartbeat to another peer is performed, if it hasn't been seen since [env: HOPRD_HEARTBEAT_THRESHOLD=] [default: 60000]
--heartbeatVariance <MILLISECONDS>
Upper bound for variance applied to heartbeat interval in milliseconds [env: HOPRD_HEARTBEAT_VARIANCE=] [default: 2000]
--onChainConfirmations <CONFIRMATIONS>
Number of confirmations required for on-chain transactions [env: HOPRD_ON_CHAIN_CONFIRMATIONS=] [default: 8]
--networkQualityThreshold <THRESHOLD>
Minimum quality of a peer connection to be considered usable [env: HOPRD_NETWORK_QUALITY_THRESHOLD=] [default: 0.5]
--safeAddress <HOPRD_SAFE_ADDRESS>
The Safe instance for a node where its HOPR tokens are held [env: HOPRD_SAFE_ADDRESS=]
--moduleAddress <HOPRD_MODULE_ADDRESS>
The node management module instance that manages node permission to assets held in safe [env: HOPRD_MODULE_ADDRESS=]
-h, --help
Print help
-V, --version
Print version
All CLI options can be configured through environment variables as well. CLI parameters have precedence over environment variables.
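For example, the following invocations are equivalent, and in case of a conflict the CLI flag wins (a minimal sketch; the exact boolean syntax for environment variables is an assumption):

# equivalent ways of enabling the REST API (boolean env syntax is assumed)
hoprd --api
HOPRD_API=true hoprd

# CLI parameters take precedence: the API binds to port 3001, not 4001
HOPRD_API_PORT=4001 hoprd --api --apiPort 3001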
As you might have noticed, running the node without any command-line arguments may not work depending on the installation method used. Here are examples of running a node with a sensible set of configuration values.
The following command assumes you've set up an alias as described in Install via Docker.
hoprd --identity /app/hoprd-db/.hopr-identity --password switzerland --init --announce --host "0.0.0.0:9091" --apiToken <MY_TOKEN> --network monte_rosa
Here is a short breakdown of each argument.
hoprd
--identity /app/hoprd-db/.hopr-identity # store your node identity information in the persisted database folder
--password switzerland # set the encryption password for your identity
--init # initialize the database and identity if not present
--announce # announce the node to other nodes in the network and act as relay if publicly reachable
--host "0.0.0.0:9091" # set IP and port of the P2P API to the container's external IP so it can be reached on your host
--apiToken <MY_TOKEN> # specify the token for accessing the REST API (REQUIRED)
--network monte_rosa # a network is defined as a chain plus a number of deployed smart contract addresses to use on that chain
# each release has a default network id set, but the user can override this value
# nodes from different networks are **not able** to communicate
There is an optional Docker Compose setup that can be used to run the above Docker image with HOPRd and also have an extended monitoring of the HOPR node's activity (using Prometheus + Grafana dashboard).
To startup a HOPRd node with monitoring, you can use the following command:
docker compose --file scripts/compose/docker-compose.yml up -d
The configuration of the HOPRd node can be changed in the `scripts/compose/default.env` file.
Once the configuration starts up, the HOPRd Admin UI is accessible as usual via `localhost:3000`. The Grafana instance is accessible via `localhost:3030` and is provisioned with a dashboard that contains useful metrics and information about the HOPR network as perceived from your node, plus some additional runtime information. The default username for Grafana is `admin` with password `hopr`.
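To stop the monitoring stack again, or to follow the node's logs while it runs, the standard Docker Compose commands apply (a minimal sketch; service names come from `scripts/compose/docker-compose.yml`):

# follow the logs of the whole stack
docker compose --file scripts/compose/docker-compose.yml logs -f

# tear the stack down when you are done
docker compose --file scripts/compose/docker-compose.yml down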
Currently, to be able to participate in a public testnet or public staging environment, you need to satisfy certain criteria to be eligible to join. See Network Registry for details.
These criteria, however, are not required when you develop using your local nodes or a locally running cluster (see the Develop section below).
At the moment we DO NOT HAVE backward compatibility between releases. We attempt to provide instructions on how to migrate your tokens between releases.
1. Set your automatic channel strategy to `passive`.
2. Redeem all unredeemed tickets and close all open payment channels.
3. Once all payment channels have closed, withdraw your funds to an external wallet.
4. Run `info` and take note of the network name.
5. Once funds are confirmed to exist in a different wallet, back up your `.hopr-identity` folder.
6. Launch a new `HOPRd` instance using the latest release and observe the account address.
7. Only transfer funds to the new `HOPRd` instance if it operates on the same network as the last release; you can compare the two networks using `info`.
HOPR contains modules written in Rust; therefore a Rust toolchain is needed to successfully build the artifacts. To install the Rust toolchain (at least version 1.60), please follow the instructions at https://www.rust-lang.org/tools/install first.
# build deps and HOPRd code
make -j deps && make -j build
# starting network
make run-anvil
# update protocol-config
scripts/update-protocol-config.sh -n anvil-localhost
# running normal node alice (separate terminal)
DEBUG="hopr*" yarn run:hoprd:alice
# running normal node bob (separate terminal)
DEBUG="hopr*" yarn run:hoprd:bob
# fund all your nodes to get started
make fund-local-all
# start local HOPR admin in a container (and put into background)
make run-hopr-admin &
Running one node in test mode, with safe and module attached (in the `anvil-localhost` network)
# clean up, e.g.
# make kill-anvil
# make clean
# build deps and HOPRd code
make -j deps && make -j build
# starting network
make run-anvil
# update protocol-config
scripts/update-protocol-config.sh -n anvil-localhost
# create identity files
make create-local-identity
# create a safe and a node management module instance,
# passing the created safe and module as arguments to
# run a test node locally (separate terminal).
# This also registers the created pairs in the network registry and
# approves tokens so that channels can move tokens.
# The safe is funded with 2k tokens and 1 native token.
make run-local-with-safe
# or to restart a node and use the same id, safe and module
# run:
# make run-local id_path=$(find `pwd` -name ".identity-local*.id" | sort -r | head -n 1)
# fund all your nodes to get started
make fund-local-all id_dir=`pwd`
# start local HOPR admin in a container (and put into background)
make run-hopr-admin &
Running one node in test mode, with safe and module attached (in the `rotsee` network)
# build deps and HOPRd code
make -j deps && make -j build
# ensure a private key with enough xDAI is set as PRIVATE_KEY
# Please use the deployer private key as PRIVATE_KEY
# in `packages/ethereum/contracts/.env`
source ./packages/ethereum/contracts/.env
# create identity files
make create-local-identity
# create a safe and a node management module instance,
# passing the created safe and module as arguments to
# run a test node locally (separate terminal).
# This also registers the created pairs in the network registry and
# approves tokens so that channels can move tokens.
# The safe is funded with 2k wxHOPR and 1 xDAI.
make run-local-with-safe-rotsee network=rotsee
# or to restart a node and use the same id, safe and module
# run:
# make run-local network=rotsee id_path=$(find `pwd` -name ".identity-local*.id" | sort -r | head -n 1)
# fund all your nodes to get started
make fund-local-rotsee id_dir=`pwd`
# start local HOPR admin in a container (and put into background)
make run-hopr-admin &
The best way to test with multiple HOPR nodes is by using a local cluster of interconnected nodes. See how to start your local HOPR cluster.
We use mocha for our tests. You can run our test suite across all packages using the following command:
make test
To run tests of a single package (e.g. hoprd) execute:
make test package=hoprd
To run tests of a single test suite within a package, use mocha's `--grep` filter. For instance, to run only the `Identity` test suite in `hoprd`, run the following:
yarn --cwd packages/hoprd test --grep "Identity"
In a similar fashion, our contracts can be tested in isolation. For now, you need to pass the file to be tested, as hardhat does not support `--grep`:
yarn test:contracts test/HoprChannels.spec.ts
In case a package you need to test is not included in our `package.json`, please feel free to update it as needed.
To make sure we add the least amount of untested code to our codebase, whenever possible all code should come accompanied by a test. To do so, locate the `.spec` or equivalent test file for your code. If it does not exist, create it within the same folder your code will live in.
Afterwards, ensure you create a breaking test for your feature. For example, the following commit added a test for a not-yet-existing feature; the immediately following commit provided the actual feature for that test. Repeat this process for all the code you add to our codebase.
(The code was pushed as an example, but ideally you should only push code that has working tests on your machine, to avoid overusing our CI pipeline with known-broken tests.)
We run a fair amount of automation using GitHub Actions. To ease development of these workflows, one can use act to run workflows locally in a Docker environment.
E.g. running the build workflow:
act -j build
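act can also list the jobs it finds in the repository's workflows, which helps to pick the right job name:

# enumerate all jobs defined in .github/workflows
act -l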
For more information please refer to act's documentation.
Tests use the `pytest` infrastructure, which can be set up inside a virtualenv as follows:
python3 -m venv .venv
source .venv/bin/activate
python3 -m pip install -r tests/requirements.txt
To deactivate the testing environment when it is no longer needed:
deactivate
With the environment activated, execute the tests locally:
python3 -m pytest tests/
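The usual pytest selection flags work as well, e.g. to run only a subset of the tests (the match expression here is purely illustrative):

# run only tests whose names match the given expression (illustrative)
python3 -m pytest tests/ -k "ping"

# stop at the first failure and show verbose output
python3 -m pytest tests/ -x -v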
The deployment of nodes and networks is mostly orchestrated through the script files in `scripts/`, which are executed by the GitHub Actions CI workflows. Therefore, all common and minimal networks do not require manual deployment steps.
However, sometimes it is useful to deploy additional nodes or specific versions of `hoprd`. To accomplish that, it is possible to create a cluster on GCP using the following script:
./scripts/setup-gcloud-cluster.sh dufour my-cluster 10
Read the full help information of the script in case of questions:
./scripts/setup-gcloud-cluster.sh --help
The script requires a few environment variables to be set, but will inform the user if one is missing. It will create a cluster of 6 nodes. By default these nodes will use the latest Docker image of `hoprd` and run on the Goerli network. Different versions and different target networks can be configured through the script's parameters and environment variables.
A previously started cluster can be destroyed, which includes all running nodes, by using the same script but setting the cleanup switch:
HOPRD_PERFORM_CLEANUP=true \
./scripts/setup-gcloud-cluster.sh my-cluster 3
As some tools are only partially supported, please tag the respective team member whenever you open an issue about a particular tool.
| Maintainer | Technology |
| --- | --- |
| @tolbrino | Nix |
GPL v3 © HOPR Association