tnn1t1s opened this issue 3 years ago
A naive implementation of payout might be the following. Assume model_0 is the model before processing the contributor's training data and model_1 is the model after. The payout to the contributor for their training data should be proportional to the improvement in RMS error, rms_model0 - rms_model1. However, if implemented as is, early contributors would receive outsized rewards, because the model will eventually converge on accuracy. For example, the first round of training might produce a model that is 80% more accurate than the initial model, while subsequent rounds might yield only small gains. We'd want to reward these equally, or even reward later-stage model improvements more than earlier ones.
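One way to even out the rewards, sketched below, is to pay on the *relative* reduction in RMS error rather than the absolute difference, so late-stage improvements against a small remaining error are worth as much as early large jumps. The function name and pool parameter are illustrative assumptions, not part of the proposal:

```python
def payout(rms_before: float, rms_after: float, pool: float) -> float:
    """Pay a fraction of the pool proportional to the *relative* reduction
    in RMS error, so shrinking the error from 0.10 to 0.05 is rewarded
    the same as shrinking it from 1.0 to 0.5 (both halve the error)."""
    if rms_after >= rms_before:
        return 0.0  # no improvement, no payout
    relative_improvement = (rms_before - rms_after) / rms_before
    return pool * relative_improvement
```

Other normalizations (e.g. a log ratio of the two errors) would have a similar flattening effect; the key design choice is dividing by the error that remained before the contribution.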
Design an ETH contract that democratizes the training of machine learning models. The contract will have two methods: train(double[]) -> bool and predict(double[]) -> response. For this tutorial, we'll use a linear model to learn a simple relationship between the data and the response.
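As an off-chain sketch of that interface (not Solidity, and assuming the convention that the last element of the training vector is the response), the two methods might look like this, with train performing one stochastic gradient step on a linear model:

```python
class ModelContract:
    """Off-chain sketch of the proposed contract interface:
    train(features + response) updates a linear model with one SGD step,
    predict(features) returns the model's current estimate."""

    def __init__(self, n_features: int, lr: float = 0.01):
        self.weights = [0.0] * n_features
        self.bias = 0.0
        self.lr = lr

    def train(self, data: list) -> bool:
        # Assumed convention: the last element is the response y.
        *x, y = data
        if len(x) != len(self.weights):
            return False  # reject malformed submissions
        error = self.predict(x) - y
        # One gradient-descent step on squared error.
        self.weights = [w - self.lr * error * xi
                        for w, xi in zip(self.weights, x)]
        self.bias -= self.lr * error
        return True

    def predict(self, x: list) -> float:
        return sum(w * xi for w, xi in zip(self.weights, x)) + self.bias
```

On-chain, the same logic would need fixed-point arithmetic and gas-bounded loops; this sketch only shows the method shapes the issue proposes.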
Each contributor will get a share of the contract's total value, paid periodically, based on the value of their submitted training data, where value is proportional to the change in RMS (or some other measure) between the two models M_d_0, M_d_1. Each user will pay a small fee to the contract for every call to predict.

Note: this isn't really a practical use of ETH Layer 1 due to scalability, but it is still useful as a tutorial. A workaround to the scale problem might be to have predict return a link to an API gateway, including an API key. The API key could be sent to an off-chain prediction service using Chainlink, and the prediction service could manage users' volume, expiration, etc. We could also use the Chainlink alarm clock to implement monthly billing for services.
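The fee-sharing mechanism described above can be sketched as a simple ledger: predict fees accumulate in a pool, each train submission earns the contributor credit proportional to its measured value, and a periodic settlement splits the pool pro rata. Class and method names here are hypothetical:

```python
class PayoutLedger:
    """Sketch of the proposed fee-sharing scheme: predict fees build up
    a pool, and each contributor's periodic payout is proportional to
    the total value credited to their training submissions."""

    def __init__(self):
        self.pool = 0.0
        self.credit = {}  # contributor -> accumulated value

    def record_contribution(self, contributor: str, value: float) -> None:
        # 'value' would be the (normalized) RMS improvement attributed
        # to this contributor's training data.
        self.credit[contributor] = self.credit.get(contributor, 0.0) + value

    def collect_fee(self, fee: float) -> None:
        # Called on every predict; fees accumulate until settlement.
        self.pool += fee

    def settle(self) -> dict:
        # Distribute the whole pool pro rata to credited value, then reset.
        total = sum(self.credit.values())
        if total == 0:
            return {}
        payouts = {c: self.pool * v / total for c, v in self.credit.items()}
        self.pool = 0.0
        return payouts
```

In the on-chain version, settle would be the method triggered periodically, e.g. by the Chainlink alarm clock mentioned above.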