Closed: Luka-Loncar closed this 3 months ago
We can create separate spike tickets for each endpoint if needed.
What are we trying to achieve here? Do we want the miners to reject "bad data" (whatever that means)? If they do, they will get their rewards slashed by the validators. In that case do we even care about validation if the result is the same in the end?
@obasilakis this is from the POV of the validator not the miner. Miners implement their code as they see fit to perform the task and get the highest reward. Validators however need to process those responses and score them and to give a good/bad score we need to define criteria of validation for each task. My understanding of this ticket is that we need to define the criteria to be used.
isn't that what this ticket is about? (coming up with the rewards mechanism)
This card is the spike - #11 is the implementation card
sorry - the scope of the cards changed quickly. #11 describes the reward logic in general
@obasilakis can you create the follow-up implementation cards so we can pick these up in the next sprint? thank you :pray:
cc @Luka-Loncar
@obasilakis Small reminder about the follow up tasks.
@obasilakis If you want to chat about deepening these, or about further context around validation, feel free to ping me, though it may be beyond the scope of this work. For example, using simple Pydantic models as a validation step that goes beyond completeness checks.
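To make the Pydantic suggestion concrete, here is a minimal sketch of structural validation as a step beyond "is the list non-empty". The model fields are assumptions for illustration, not the actual subnet response schema:

```python
from pydantic import BaseModel, ValidationError


class TweetItem(BaseModel):
    # Hypothetical field names; the real response shape may differ.
    id: str
    text: str
    user_id: str


def tweets_are_well_formed(raw_items: list) -> bool:
    """Return True only if every item parses into the expected structure."""
    try:
        for item in raw_items:
            TweetItem(**item)
        return True
    except (ValidationError, TypeError):
        return False
```

A completeness check alone would accept `[{"id": "1"}]`; the structural check above rejects it because `text` and `user_id` are missing.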
Reopening as we are missing follow-ups here.
@Luka-Loncar and @mudler, this ticket was created to have a "centralized" place where the data validation for all endpoints is summarized. There won't be any follow-up tasks for it.
Ok cool - are they all implemented? Shall we create optimization follow-ups, or are we good to go with how things are in the code?
I guess this can be considered an optimization follow up :)
The problem here is that we currently haven't defined how we confirm whether data is valid or not, and we need to work out a solution for that. There is probably an individual solution for each data type, and these are still to be explored.
## Reward Calculation Summary for Validator Endpoints

- **Web Validator** (`web/reward.py`): Checks that the `pages` attribute in the response is not empty. Returns a reward of 1 if the length of `pages` is greater than 0, otherwise 0.
- **Twitter Followers Validator** (`twitter/followers/reward.py`): Returns a reward of 1 if the length of the response list is greater than 0, otherwise 0.
- **Twitter Tweets Validator** (`twitter/tweets/reward.py`): Similar to the followers validator, it returns a reward of 1 if the length of the response list is greater than 0, otherwise 0.
- **Twitter Profile Validator** (`twitter/profile/reward.py`): Checks that the username in the response matches the query and that `userID` is not `None`. Returns a reward of 1 if both conditions are met, otherwise 0.
- **Discord Profile Validator** (`discord/profile/reward.py`): Checks that the ID in the response matches the query. Returns a reward of 1 if they match, otherwise 0.
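The checks above can be sketched roughly as follows. This assumes dict-shaped responses and illustrative field names (`pages`, `username`, `userID`, `id`); the actual attribute access in the repo may differ:

```python
from typing import Any


def web_reward(response: dict[str, Any]) -> float:
    # web/reward.py: reward 1 if `pages` is non-empty, else 0.
    return 1.0 if len(response.get("pages", [])) > 0 else 0.0


def twitter_list_reward(response: list[Any]) -> float:
    # twitter/followers and twitter/tweets: reward 1 if the
    # response list is non-empty, else 0.
    return 1.0 if len(response) > 0 else 0.0


def twitter_profile_reward(response: dict[str, Any], query: str) -> float:
    # twitter/profile: username must match the query and userID
    # must be present for a reward of 1.
    matches = (
        response.get("username") == query
        and response.get("userID") is not None
    )
    return 1.0 if matches else 0.0


def discord_profile_reward(response: dict[str, Any], query: str) -> float:
    # discord/profile: reward 1 only if the response ID matches the query.
    return 1.0 if response.get("id") == query else 0.0
```

Note that every check is binary (0 or 1) and mostly tests non-emptiness, which is exactly the gap the thread discusses: a miner returning any non-empty payload of the right shape would score full reward.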