TARGON (Bittensor Subnet 4) is a redundant deterministic verification mechanism that can be used to interpret and analyze ground truth sources and a query.
NOTICE: By using this software, you agree to the Terms and Agreements provided in the terms and conditions document. By downloading and running this software, you implicitly agree to these terms and conditions.
v1.9.9 - removed tier system, implemented exponential tok/s reward scaling
v1.0.6 - Runpod is now supported 🎉. Check out the runpod docs in docs/runpod/verifier.md and docs/runpod/prover.md for more information.
v1.0.0 - Runpod is not currently supported on this version of TARGON for verifiers. An easy alternative can be found in the Running on TensorDock section.
Currently supporting python>=3.9,<3.11.
Note: This subnet is in an alpha stage and is subject to rapid development.
The following table shows the minimum VRAM, storage, RAM, and CPU requirements for running a verifier or prover.
GPU: A100

| Provider   | VRAM | Storage | RAM  | CPU |
|------------|------|---------|------|-----|
| TensorDock | 80GB | 200GB   | 16GB | 4   |
| Latitude   | 80GB | 200GB   | 16GB | 4   |
| Paperspace | 80GB | 200GB   | 16GB | 4   |
| GCP        | 80GB | 200GB   | 16GB | 4   |
| Azure      | 80GB | 200GB   | 16GB | 4   |
| AWS        | 80GB | 200GB   | 16GB | 4   |
| Runpod     | 80GB | 200GB   | 16GB | 4   |
The following table shows the suggested compute providers for running a verifier or prover.
| Provider   | Cost   | Location | Machine Type    | Rating |
|------------|--------|----------|-----------------|--------|
| TensorDock | Low    | Global   | VM & Container  | 4/5    |
| Latitude   | Medium | Global   | Bare Metal      | 5/5    |
| Paperspace | High   | Global   | VM & Bare Metal | 4.5/5  |
| GCP        | High   | Global   | VM & Bare Metal | 3/5    |
| Azure      | High   | Global   | VM & Bare Metal | 3/5    |
| AWS        | High   | Global   | VM & Bare Metal | 3/5    |
| Runpod     | Low    | Global   | VM & Container  | 5/5    |
In order to run TARGON, you need to install Docker, PM2, and the TARGON package. The following instructions apply only to Ubuntu; for other operating systems, please refer to the official documentation for each tool.
```bash
git clone https://github.com/manifold-inc/targon.git
cd targon
python3 -m pip install -r requirements.txt
python3 -m pip install -e .
```
You have now installed TARGON. You can now run a prover or verifier.
Using existing public datasets poses certain challenges when rewarding models for work: those who seek to win can overfit their models on the known inputs, or front-run the input by looking up the output. TARGON addresses this by generating prompts with a query generation model seeded by a private input.
The private input is sourced from an API run by Manifold; it is rotated every twelve seconds and authenticated with a signature using the validator's keys. The private input is fed into the query generation model, which can be run by the validator or as a light client by Manifold. The underlying data source is either a crawl or RedPajama.
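As a rough illustration, a validator might authenticate to such an API along the lines of the sketch below, assuming sr25519 keys via the `substrate-interface` library; the URL and header names are hypothetical, not Manifold's actual interface.

```python
# Hypothetical sketch: authenticate to the private-input API by signing a
# nonce with the validator's keys. URL and header names are assumptions.
import time

import requests
from substrateinterface import Keypair


def fetch_private_input(keypair: Keypair,
                        url: str = "https://api.example.com/private-input"):
    nonce = str(int(time.time()))
    signature = keypair.sign(nonce.encode())  # sign with the validator's keys
    response = requests.get(url, headers={
        "X-Validator-Address": keypair.ss58_address,  # hypothetical header
        "X-Nonce": nonce,                             # hypothetical header
        "X-Signature": signature.hex(),               # hypothetical header
    })
    response.raise_for_status()
    return response.json()
```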
The query, private input, and a deterministic seed are used to generate a ground truth output with the specified model, which can be run by the validator or as a light client. The validator then sends requests to miners with the query, private input, and deterministic seed. The miner's output is compared to the ground truth output; if the tokens are equal, the miner has successfully completed the challenge.
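The sketch below illustrates the deterministic generation step, assuming a Hugging Face causal LM; the function name and prompt format are illustrative, not TARGON's actual implementation.

```python
# Illustrative sketch of deterministic ground-truth generation. With the same
# model, seed, and software stack, sampling reproduces the same tokens, so a
# verifier and an honest prover arrive at identical outputs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate_ground_truth(query: str, private_input: str, seed: int,
                          model_name: str) -> list[int]:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    torch.manual_seed(seed)  # the deterministic seed fixes the sampling path
    prompt = f"{private_input}\n{query}"  # hypothetical prompt format
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, do_sample=True, max_new_tokens=256)
    return output[0].tolist()


# A miner passes the challenge when its tokens equal the ground truth:
# passed = miner_tokens == generate_ground_truth(query, private_input, seed, model_name)
```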
A prover is a node responsible for generating an output from a query, private input, and deterministic sampling params.
A verifier is a node responsible for verifying a prover's output. The verifier sends a request to a prover with a query, private input, and deterministic sampling params. The prover then sends back a response with the output. The verifier compares the output to the ground truth output; if the outputs are equal, the prover has completed the challenge.
A challenge request is a request sent by a verifier to a prover. It contains a query, private input, and deterministic sampling params. The prover generates an output from these and sends it back to the verifier.
An inference request is a request sent by a verifier to a prover. It contains a query, private input, and inference sampling params. The prover generates an output from these and streams it back to the verifier.
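For illustration, the two request types might be modeled as below; the field names are assumptions rather than TARGON's actual wire format.

```python
# Hypothetical shapes of the two request types; field names are assumptions.
from dataclasses import dataclass


@dataclass
class ChallengeRequest:
    query: str
    private_input: str
    sampling_params: dict  # deterministic: fixed seed, temperature, etc.


@dataclass
class InferenceRequest:
    query: str
    private_input: str
    sampling_params: dict  # inference params; the output is streamed back
```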
CAVEAT: Every interval (360 blocks) the verifier will send a random number of inference samples. The verifier compares the outputs to the ground truth outputs, and the cosine similarity of the outputs is used to determine the reward for the prover. Failing to complete an inference request results in a 5x penalty.
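A minimal sketch of such a cosine-similarity score follows, assuming the outputs have been embedded as vectors; the embedding step and the exact penalty wiring are assumptions, not TARGON's scoring code.

```python
# Illustrative cosine-similarity scoring between a prover's output and the
# ground truth, with a hypothetical encoding of the 5x penalty for a miss.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def score_inference(ground_truth_emb: np.ndarray, prover_emb: np.ndarray,
                    responded: bool) -> float:
    if not responded:
        return -5.0  # assumed representation of the 5x penalty
    return cosine_similarity(ground_truth_emb, prover_emb)
```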
To get started running a prover, you will need to run the docker containers required by the prover. To do this, start from the provided template by running the following command:

```bash
cp neurons/prover/docker-compose.example.yml neurons/prover/docker-compose.yml
```
This by default includes the following containers:
Experimental: you can uncomment the subtensor service in the template. The subtensor can also be set up locally on the host machine, outside of Docker. When running an external subtensor instance, whether local or remote, make sure the prover starts with the flag `--subtensor.chain_endpoint ws://the.subtensor.ip.addr:9944` so it can connect to the chain.
You can optionally shard the model across multiple GPUs. To do this, you will need to modify the docker template you copied above and include these flags at the end of the command within the service.
NOTE: Scroll horizontally to see the full command if this readme is truncated by the viewport.
```yaml
version: '3.8'
services:
  text-generation-service:
    image: ghcr.io/huggingface/text-generation-inference:1.3
    command: --model-id mlabonne/NeuralDaredevil-7B --max-input-length 3072 --max-total-tokens 4096 --sharded --num-shard 2
    volumes:
      - ./models:/data
    ports:
      - "127.0.0.1:8080:80"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: [gpu]
              device_ids: ["0", "1"]
    shm_size: '1g'
```
NOTE: Sharding is not set by default in the template, so you will need to modify it accordingly prior to starting the container.
The `add_prover_args` function in `targon/utils/config.py` adds the command-line arguments specific to the prover. Here are the options it provides:

- `--neuron.name`: a string specifying the name of the neuron. Default: `prover`.
- `--blacklist.force_verifier_permit`: a boolean that, if set, forces incoming requests to have a permit. Default: `False`.
- `--blacklist.allow_non_registered`: a boolean that, if set, allows the prover to accept queries from non-registered entities. This is considered dangerous. Default: `False`.
- `--neuron.tgi_endpoint`: a string specifying the endpoint to use for the TGI client. Default: `http://0.0.0.0:8080`.
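For reference, the sketch below shows how these flags could be wired up with `argparse`; the actual implementation lives in `targon/utils/config.py` and may differ.

```python
# Sketch of how the prover flags listed above map onto argparse.
import argparse


def add_prover_args(parser: argparse.ArgumentParser) -> None:
    parser.add_argument("--neuron.name", type=str, default="prover",
                        help="Name of the neuron.")
    parser.add_argument("--blacklist.force_verifier_permit", action="store_true",
                        default=False,
                        help="Require incoming requests to carry a permit.")
    parser.add_argument("--blacklist.allow_non_registered", action="store_true",
                        default=False,
                        help="Accept queries from non-registered entities (dangerous).")
    parser.add_argument("--neuron.tgi_endpoint", type=str,
                        default="http://0.0.0.0:8080",
                        help="Endpoint to use for the TGI client.")
```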
To get started running a verifier, you will need to run the docker containers required by the verifier. To do this, run the following commands:

```bash
cp neurons/verifier/docker-compose.example.yml neurons/verifier/docker-compose.yml
./scripts/generate_redis_password.sh
nano neurons/verifier/docker-compose.yml # replace YOUR_PASSWORD_HERE with the password generated by the script
docker compose -f neurons/verifier/docker-compose.yml up -d
```
This includes the following containers:
IMPORTANT: You will need to edit the docker-compose.yml file with your new password. First, generate a secure password:

```bash
./scripts/generate_redis_password.sh
```

This will output a secure password for you to use. Next, open the docker-compose.yml file:

```bash
nano neurons/verifier/docker-compose.yml
```

Then replace the placeholder with your new password:

```yaml
redis:
  image: redis:latest
  command: redis-server --requirepass YOUR_PASSWORD_HERE
  ports:
    - "6379:6379"
```
Optionally, you can run the verifier inside Docker by uncommenting the verifier container in the docker-compose.yml file. Otherwise, you can run the verifier with PM2, which is experimental.
Project maintainers reserve the right to weigh the opinions of peer reviewers using common sense judgement and may also weigh based on merit. Reviewers that have demonstrated a deeper commitment and understanding of the project over time or who have clear domain expertise may naturally have more weight, as one would expect in all walks of life.
Where a patch set affects consensus-critical code, the bar will be much higher in terms of discussion and peer review requirements, keeping in mind that mistakes could be very costly to the wider community. This includes refactoring of consensus-critical code.
Where a patch set proposes to change the TARGON subnet, it must have been discussed extensively on the Discord server and other channels, be accompanied by a widely discussed BIP, and have a generally widely perceived technical consensus of being a worthwhile change based on the judgement of the maintainers.
As most reviewers are themselves developers with their own projects, the review process can be quite lengthy, and some amount of patience is required. If you find that you've been waiting for a pull request to be given attention for several months, there may be a number of reasons for this, some of which you can do something about: