Most QueryApi components can be run locally using Docker; a Docker Compose file is provided for this purpose. Note that the local setup still streams from NEAR Mainnet rather than running against a localnet.
QueryApi requires AWS credentials to stream blocks from NEAR Lake. Credentials are exposed via the following environment variables, which can be found in the Docker Compose file:
Runner:
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`

Coordinator:
- `LAKE_AWS_ACCESS_KEY`
- `LAKE_AWS_SECRET_ACCESS_KEY`
- `QUEUE_AWS_ACCESS_KEY`
- `QUEUE_AWS_SECRET_ACCESS_KEY`
Populate these with your own credentials. In most cases, the same key pair can be used for all three sets; just ensure the keys have permission to access S3, as NEAR Lake buckets use Requester Pays.
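As a sketch, the credentials could be supplied via a Compose override file rather than editing the committed Compose file directly. The service names (`runner`, `coordinator`) and the reuse of a single host key pair are assumptions; check the actual Compose file for the real service names:

```yaml
# docker-compose.override.yml -- hypothetical override supplying AWS credentials.
# Docker Compose merges this file with docker-compose.yml automatically.
# Service names below are assumptions; verify them against the Compose file.
services:
  runner:
    environment:
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
  coordinator:
    environment:
      # Reusing the same key pair for all three credential sets, as noted above.
      LAKE_AWS_ACCESS_KEY: ${AWS_ACCESS_KEY_ID}
      LAKE_AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
      QUEUE_AWS_ACCESS_KEY: ${AWS_ACCESS_KEY_ID}
      QUEUE_AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
```

This keeps secrets out of the tracked Compose file, sourcing them from the host environment instead.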
Hasura contains shared tables, e.g. for logging and storing arbitrary state. These tables must be configured before running the full QueryApi application. The configuration is stored in the `hasura/` directory and is deployed via the Hasura CLI.
To configure Hasura, first start it with:
```sh
docker compose up hasura-graphql --detach
```
And apply the configuration with:
```sh
cd ./hasura && hasura deploy
```
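If `hasura deploy` runs before Hasura has finished starting, it can fail. One hedged option is a Compose override adding a healthcheck against Hasura's `/healthz` endpoint, so readiness can be observed before deploying. The internal port (8080) and the availability of `curl` inside the image are assumptions:

```yaml
# Hypothetical Compose override adding a readiness check for Hasura.
# Hasura exposes a /healthz endpoint; port 8080 and curl in the image
# are assumptions -- verify against the actual Compose file and image.
services:
  hasura-graphql:
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/healthz"]
      interval: 5s
      timeout: 3s
      retries: 12
```

With this in place, `docker compose ps` reports the service as `healthy` once it is ready to accept the configuration.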
With everything configured correctly, we can now start all components of QueryApi with:
```sh
docker compose up
```
By default, Coordinator uses the dev registry contract (`dev-queryapi.dataplatform.near`). To use a different contract, update the `REGISTRY_CONTRACT_ID` environment variable.

It is expected to see some provisioning errors from Runner when starting QueryApi for the first time. These occur when multiple indexers under the same account attempt to provision the same shared infrastructure; they should self-resolve after a few seconds.
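For example, pointing Coordinator at a different registry contract could be done with a small Compose override; the `coordinator` service name is an assumption, and the contract ID is a placeholder:

```yaml
# Hypothetical Compose override selecting a different registry contract.
# Replace the placeholder with the account ID of your registry contract.
services:
  coordinator:
    environment:
      REGISTRY_CONTRACT_ID: <your-registry-contract>.near
```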