Improve organic selection of contenders
Background
Subnet 19 is an API inference provider.
On 19, there are two types of entity: 'miners' and 'validators'. Miners have the task of running inference of AI models for validators, and are rewarded in proportion to how good they are. Validators have the task of judging how good miners are, and are free to use this AI inference however they like.
There are two ways a validator assesses a miner:
By creating a query for a miner [synthetic]
By passing on a real query from a user to the miners [organic].
Since the main goal of this system is to provide a stable inference product, we ideally want to pick the 'best' miners when serving a real query. Synthetic queries then supplement these, making sure we keep a good opinion of all miners.
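One simple way to balance those two goals is an explore/exploit split: route most organic queries to the best-scoring contender, but reserve a small fraction of traffic for the rest so every miner keeps accruing data. A minimal sketch, where the function name and `explore_prob` parameter are illustrative, not from the codebase:

```python
import random

def select_contender(scores: dict[str, float], explore_prob: float = 0.1) -> str:
    """Pick a contender id: usually the best scorer, occasionally a random one."""
    if random.random() < explore_prob:
        # Exploration: give every contender a chance to be measured.
        return random.choice(list(scores))
    # Exploitation: serve the organic query with the best-known contender.
    return max(scores, key=scores.get)
```

With `explore_prob=0.0` this is pure exploitation; raising it trades a little product quality for fresher data on weaker miners.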
Task statement
Improve the selection process for miners, to get the best product possible for the frontend, whilst also making sure we have enough data on each miner to judge them.
Relevant code
Note: each miner runs a variety of different models, and its score is the sum of its scores across these models. A miner + task pair is called a 'contender'.
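As a concrete illustration of that note, a contender can be modelled as a (miner, task) pair, with a miner's overall score summed over the tasks it runs. These type and field names are assumptions for the sketch, not the codebase's actual types:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contender:
    miner_hotkey: str  # which miner (hypothetical field name)
    task: str          # which model/task this miner serves

def total_miner_score(scores: dict[Contender, float], miner_hotkey: str) -> float:
    """A miner's overall score is the sum over all tasks it runs."""
    return sum(s for c, s in scores.items() if c.miner_hotkey == miner_hotkey)
```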
How we currently decide which contenders to query
How we decide the scores of each contender
The relevant bit here is that we have the scores for each contender from the last cycle in the db here. These scores are a very good assessment of which miners are 'good'.
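Since last-cycle scores are a reliable signal, one way to use them for organic selection is to turn them into sampling weights, e.g. a normalised power weighting: higher `sharpness` concentrates traffic on top contenders while still giving non-zero probability to the rest. A sketch under those assumptions (function name and parameter are illustrative):

```python
def selection_weights(scores: dict[str, float], sharpness: float = 2.0) -> dict[str, float]:
    """Turn last-cycle scores into selection probabilities.

    Raising scores to a power sharpens the distribution: sharpness=1 is
    proportional sampling, larger values favour the best contenders more.
    """
    powered = {cid: max(score, 0.0) ** sharpness for cid, score in scores.items()}
    total = sum(powered.values()) or 1.0  # avoid division by zero if all scores are 0
    return {cid: w / total for cid, w in powered.items()}
```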
Deliverables
Supporting doc / PR description explaining your solution. Why is your solution great for the product, and how have you made sure edge cases are covered?
PR to deliver these changes
[Optional] Review of another solution
Good solutions are well thought out & not overly complex.
Advice
There are docs here about how to get started. You almost certainly will NOT need to run the whole system, and can do quite a lot by mocking. You can probably do quite a lot by thinking.
Favour very simple solutions where possible, but make sure they scale well. The query nodes should be scalable; there will be many of those. Only one control node is needed. You have Redis & PostgreSQL.
Try to keep the code similar to the rest of the codebase (e.g. functional code, type-hinted, etc.).
Good luck!