OriginTrail / OT-RFC-repository


Discussion for OT-RFC-14 #21

Closed branarakic closed 1 year ago

branarakic commented 1 year ago

Hi Tracers,

I'm excited to share the latest tokenomics RFC with you.

OT-RFC-14 DKG v6 TRAC tokenomics

As always your feedback and comments are much appreciated. Trace on!

mucke12 commented 1 year ago

Do I get this right "Security signal criterion: after both previous criteria have been satisfied, finally R0 nodes get chosen (R0 being the minimum replication factor) out of the R1 set, based on the amount of provided TRAC token stake. A higher stake of TRAC tokens in the system indicates higher quality of service guarantees by nodes, as the stake represents a collateral guarantee for nodes providing expected services and acting according to service agreements."

So even if the previous criteria have been satisfied, I won't win publishing against these high stakes? (Or is there a percentage where higher stakes just have a better chance to win?)

Valcyclovir commented 1 year ago

Still reading, but noticed a potential typo: dkg: [// DID] / UAI [“?” query] [“#” fragment]. You mean UAL?

mucke12 commented 1 year ago

Will 50k TRAC be needed for every chain (as a node runner)? Can I also delegate my TRAC to node runners from ETH/xDai/Polygon?

u-hubar commented 1 year ago

Do I get this right "Security signal criterion: after both previous criteria have been satisfied, finally R0 nodes get chosen (R0 being the minimum replication factor) out of the R1 set, based on the amount of provided TRAC token stake. A higher stake of TRAC tokens in the system indicates higher quality of service guarantees by nodes, as the stake represents a collateral guarantee for nodes providing expected services and acting according to service agreements."

So even if the previous criteria have been satisfied, I won't win publishing against these high stakes? (Or is there a percentage where higher stakes just have a better chance to win?)

You got it right, the R0 set will consist of the nodes with the highest stakes from the R1 set. The idea behind this is that nodes with higher stakes in the R0 set get slashed for a relatively higher amount of tokens in case of failures in providing services.

Nevertheless, it's beneficial to store the data even if you are not chosen for rewards, because you can always replace nodes from R0 if they go offline for a longer period of time.

botnumberseven commented 1 year ago

Node selection process. Security signal criterion: after both previous criteria have been satisfied, finally R0 nodes get chosen (R0 being the minimum replication factor) out of the R1 set, based on the amount of provided TRAC token stake. A higher stake of TRAC tokens in the system indicates higher quality of service guarantees by nodes, as the stake represents a collateral guarantee for nodes providing expected services and acting according to service agreements.

This part concerns me as it's an “all or nothing” system. Let's say there are 2 nodes asking the same compensation, one with a 50,000 TRAC stake and the other with 50,001 TRAC. This difference in stake tells nothing about the quality of service of these nodes. However, the 2nd node wins everything and the 1st node wins nothing (ever). I expect this will lead to nodes constantly adjusting their stake to stay 1 TRAC above the others, which helps neither overall system performance nor stability. This approach also incentivizes meganodes with delegation, since smaller nodes win nothing. And meganodes do not exactly help with decentralization, right?

One potential alternative is a probabilistic system. Using the example from the RFC: we have 6 nodes with a low enough ask (column 2) and their respective stakes, and the system allows any of them to be selected, with the chance depending on the stake size. E.g. nodeid(k+2)'s chance of being selected is proportional to its stake: 5000 / (5000+6500+10000+9000+3000+50000) ≈ 6%. If R0 = 3, we calculate each node's chance of winning, throw a dice, and see who gets it. That's our 1st winner. Then the first winner is excluded from the pool, the chances per node are recalculated (based on the smaller pool now), and we throw the dice a 2nd time, and similarly a 3rd time. In this case stake size still matters a lot, but smaller nodes are not pushed out of the game. They just win less.
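A minimal sketch of this selection rule in Python (my own illustration, not from the RFC; only nodeid(k+2)'s 5000 TRAC stake is taken from the RFC example, the other labels are placeholders):

```python
import random

# Sketch of the proposed probabilistic selection: each draw picks one
# node with probability proportional to its stake, then removes the
# winner and renormalizes the remaining weights.
def select_r0(stakes: dict, r0: int) -> list:
    pool = dict(stakes)
    winners = []
    for _ in range(r0):
        nodes = list(pool)
        winner = random.choices(nodes, weights=[pool[n] for n in nodes])[0]
        winners.append(winner)
        del pool[winner]
    return winners

# "nodeid(k+2)" holds 5000 TRAC and wins the first draw with
# probability 5000 / 83500, about 6%.
r1 = {"n1": 6500, "nodeid(k+2)": 5000, "n3": 10000,
      "n4": 9000, "n5": 3000, "n6": 50000}
print(select_r0(r1, r0=3))
```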

botnumberseven commented 1 year ago

Slashing. I absolutely agree there should be a mechanism that incentivizes proper behavior by node runners, and slashing could be that mechanism. But it needs to be modeled extremely carefully, as poorly designed slashing could break the whole system: if the risk of losing funds outweighs the reward, there will be no nodes long-term. Try to apply your slashing mechanism to some real-world scenarios:

1) Datacenter downtime for a couple of hours, so the node is not responsive for two hours. What happens? 5% of the stake is locked for every failed assertion test. How many tests fall within a 2-hour window once we have full adoption? What percentage of its stake will a node lose in those 2 hours? Keep in mind, from the v5 node experience we learned TRAC nodes can be barely profitable or often net negative, so existing TRAC node runners either run them at home or use dirt-cheap VPS providers, real bottom-of-the-barrel stuff. We cannot expect 99.99% uptime on those (the kind Azure or AWS commit to). Obviously that can change if node operators can see proof that nodes are actually profitable.

2) Datacenter downtime for 2 days. What happens?

3) An OVH datacenter is on fire. My node is not responsive for several hours, plus it needs to be recovered from backup (assuming there will be a decent backup mechanism). What if my backup is 1 day old (or 1 week old)? What happens if we follow the slashing mechanism and the specific values you suggested?

botnumberseven commented 1 year ago

Slashing and delegation. I doubt slashing and delegation can work well together. Just put some real-world numbers on it from an investment point of view (which is what delegation essentially is). E.g. ETH PoS staking through big exchanges gives you around 5% per year, with an extremely low risk of losing funds. TRAC delegation (even without any slashing) carries much more risk simply because of how established ETH is vs TRAC, so the expected reward should be significantly higher, at least around 10%. Adding slashing on top as an extra risk pushes the required reward even further up, as higher risk can only be justified by a higher reward. Do you envision delegation making enough APY long-term to justify the slashing risk? If not, it's the same as having no delegation mechanism at all.

Also keep in mind the unknown factor. Potential delegators will know the risk is there, but will they have tools to quantify it? If not, many will decide to play it safe and not participate in delegation at all, simply because we pay much more attention to potential losses than to potential wins.
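To make the trade-off concrete, a back-of-the-envelope expected-return sketch (every number here is my own assumption, purely for illustration):

```python
# All numbers assumed for illustration, none are from the RFC.
eth_apy = 0.05            # ETH staking via a big exchange, near-zero slash risk
trac_apy = 0.10           # minimum delegation reward suggested above
p_slash_per_year = 0.10   # assumed chance of at least one slashing event
slash_fraction = 0.05     # 5% of stake lost/locked per incident

# Treating a slash as a loss, the risk-adjusted return shrinks:
expected_apy = trac_apy - p_slash_per_year * slash_fraction
print(f"{expected_apy:.2%}")  # 9.50%, so the nominal rate must rise further
# to keep the risk premium over ETH's 5% attractive.
```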

botnumberseven commented 1 year ago

Node retirement. I do not see anything in the RFC related to the node retirement process. There should be a well-defined mechanism for retiring a node gracefully. We do not have one in v5, but since we expect significant adoption in v6 it's an absolute must-have feature. I can see how the lack of a graceful node retirement process would be a showstopper for some potential node runners: they are asked to commit significant capital (50k TRAC even at $0.20 is $10k) without a defined process for getting out of that commitment.

It should be possible to retire a node in a reasonable time (within hours, not days). Other nodes should replace the retired one for all its assets, with the new nodes compensated for the rest of the service period, while the retired node gets compensation for the epochs it served during the retirement process. The owner of the retired node should not have to wait for the service period to end in order to get their portion. There will be more requirements for the process, but I think the tokenomics RFC is the right place to talk about it, at least the TRAC/OTP portion of retirement.

botnumberseven commented 1 year ago

Node backup. Node runners can always do an off-chain / off-ODN backup. That is what we do for v5, but it is/was a significant pain point for many v5 node runners. What about using the ODN itself as a backup? All the data is in the ODN already, right? That means the ODN is, by definition, the most up-to-date backup for all nodes at the same time. And the blockchain knows who stores what, right?

Let's say I have a node with assets at OVH. Over breakfast I watch the OVH datacenter burning in flames on my favorite news channel. I can just spin up a node at a different VPS provider with my existing node ID and wallets, and the node then replicates all the assets it is supposed to store from other nodes (which hold copies of the data). I know this moves some backup work from node runners to node developers, but in the end we get a DKG that is much more robust with far fewer penalties. I'm highlighting it in the tokenomics discussion because with an on-DKG backup system slashing won't be needed as often: it becomes just a question of downtime, rather than downtime plus how old the latest backup is.
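A toy sketch of that recovery flow (hypothetical: none of these function or parameter names are real ot-node APIs, they only illustrate the idea):

```python
# Hypothetical restore-from-DKG flow: the chain records which assertions
# a node is committed to and who else replicates them, so a rebuilt node
# can re-fetch everything from its peers.
def restore_from_dkg(node_id, commitments, holders, fetch):
    """commitments: assertion ids this node must store.
    holders: assertion id -> peer ids holding a replica.
    fetch: (peer, assertion_id) -> data, or None on failure."""
    recovered = {}
    for assertion_id in commitments:
        for peer in holders.get(assertion_id, []):
            if peer == node_id:
                continue  # our own copy burned with the datacenter
            data = fetch(peer, assertion_id)
            if data is not None:
                recovered[assertion_id] = data
                break
    return recovered

# Toy usage with in-memory stand-ins for the chain and the network:
holders = {"a1": ["me", "peer1"], "a2": ["peer2", "me"]}
replicas = {("peer1", "a1"): "graph-1", ("peer2", "a2"): "graph-2"}
print(restore_from_dkg("me", ["a1", "a2"], holders,
                       lambda p, a: replicas.get((p, a))))
```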

botnumberseven commented 1 year ago

TRAC lock at Data Holders. I'm trying to understand whether the concept of locked tokens on the Data Holder side is still a thing for v6. When a node wins an assertion, does the node's stake change after that (as it does in v5)? Or does it stay the same, so that for the next assertion the node bids with exactly the same stake size (Security Signal Criterion)?

CosmiCloud commented 1 year ago

@botnumberseven has some very good points. I especially agree with: 1. The slashing penalty is too steep. Could there be a way for the node to "redeem" itself by performing well for long enough after an outage to unlock the slashed tokens? Even so, 5% on each missed epoch could wreck you in any kind of prolonged outage. Would missing 1 epoch put the node under the 50k limit, meaning we need a buffer in case any slashing occurs? 2. There 100% needs to be a graceful way to exit the network. The rest of their points are extremely valid and should be considered.

mucke12 commented 1 year ago

Node selection process. .......ents.

This part concerns me as it's an “all or nothing” system. Let's say there are 2 nodes asking the same compensation, one with a 50,000 TRAC stake and the other with 50,001 TRAC. This difference in stake tells nothing about the quality of service of these nodes. However, the 2nd node wins everything and the 1st node wins nothing (ever). I expect this will lead to nodes constantly adjusting their stake to stay 1 TRAC above the others, which helps neither overall system performance nor stability. This approach also incentivizes meganodes with delegation, since smaller nodes win nothing. And meganodes do not exactly help with decentralization, right?

One potential alternative is a probabilistic system. Using the example from the RFC: we have 6 nodes with a low enough ask (column 2) and their respective stakes, and the system allows any of them to be selected, with the chance depending on the stake size. E.g. nodeid(k+2)'s chance of being selected is proportional to its stake: 5000 / (5000+6500+10000+9000+3000+50000) ≈ 6%. If R0 = 3, we calculate each node's chance of winning, throw a dice, and see who gets it. That's our 1st winner. Then the first winner is excluded from the pool, the chances per node are recalculated (based on the smaller pool now), and we throw the dice a 2nd time, and similarly a 3rd time. In this case stake size still matters a lot, but smaller nodes are not pushed out of the game. They just win less.

Agree with you, I'm also not happy with the all-or-nothing approach. I don't quite get your math, but I bet we could work on that part 😅. On the other side, I do get the incentive behind the "all or nothing" idea: it could make sure we get some big and serious companies on board as node runners.

botnumberseven commented 1 year ago

@mucke12 essentially my proposal is to let anyone with a low enough ask price (the R1 group) have a chance to win the assertion, with each node's probability of winning proportional to its stake size relative to the stakes of the other nodes in the R1 group.

mucke12 commented 1 year ago

Slashing. I absolutely agree there should be a mechanism that incentivizes proper behavior by node runners, and slashing could be that mechanism. But it needs to be modeled extremely carefully, as poorly designed slashing could break the whole system: if the risk of losing funds outweighs the reward, there will be no nodes long-term. Try to apply your slashing mechanism to some real-world scenarios:

  1. Datacenter downtime for a couple of hours, so the node is not responsive for two hours. What happens? 5% of the stake is locked for every failed assertion test. How many tests fall within a 2-hour window once we have full adoption? What percentage of its stake will a node lose in those 2 hours? Keep in mind, from the v5 node experience we learned TRAC nodes can be barely profitable or often net negative, so existing TRAC node runners either run them at home or use dirt-cheap VPS providers, real bottom-of-the-barrel stuff. We cannot expect 99.99% uptime on those (the kind Azure or AWS commit to). Obviously that can change if node operators can see proof that nodes are actually profitable.
  2. Datacenter downtime for 2 days. What happens?
  3. An OVH datacenter is on fire. My node is not responsive for several hours, plus it needs to be recovered from backup (assuming there will be a decent backup mechanism). What if my backup is 1 day old (or 1 week old)? What happens if we follow the slashing mechanism and the specific values you suggested?
  1. Agree.
  2. Yeah, we need a clearer explanation on this.
  3. On this part too, we need a clearer explanation from OT. I just want to add: the whole RFC seems to assume people will run nodes more like a business. Lazy times are over, and if anyone is even thinking about doing only weekly backups, I don't think they should consider running a node.

Slashing and delegation. I doubt slashing and delegation can work well together. Just put some real-world numbers on it from an investment point of view (which is what delegation essentially is). E.g. ETH PoS staking through big exchanges gives you around 5% per year, with an extremely low risk of losing funds. TRAC delegation (even without any slashing) carries much more risk simply because of how established ETH is vs TRAC, so the expected reward should be significantly higher, at least around 10%. Adding slashing on top as an extra risk pushes the required reward even further up, as higher risk can only be justified by a higher reward. Do you envision delegation making enough APY long-term to justify the slashing risk? If not, it's the same as having no delegation mechanism at all.

Also keep in mind the unknown factor. Potential delegators will know the risk is there, but will they have tools to quantify it? If not, many will decide to play it safe and not participate in delegation at all, simply because we pay much more attention to potential losses than to potential wins.

Don't agree with you on the numbers here. In the end the APY depends on each node runner and the network (publishings). It is the runner's choice how much to share with delegators (see the Parachain whitepaper). It also depends on their lambda and running costs. There are way too many unknowns for OT to calculate a general APY. Also don't forget, we don't have any inflation to base our APY on (ETH does).

Node retirement. I do not see anything in the RFC related to the node retirement process. There should be a well-defined mechanism for retiring a node gracefully. We do not have one in v5, but since we expect significant adoption in v6 it's an absolute must-have feature. I can see how the lack of a graceful node retirement process would be a showstopper for some potential node runners: they are asked to commit significant capital (50k TRAC even at $0.20 is $10k) without a defined process for getting out of that commitment.

It should be possible to retire a node in a reasonable time (within hours, not days). Other nodes should replace the retired one for all its assets, with the new nodes compensated for the rest of the service period, while the retired node gets compensation for the epochs it served during the retirement process. The owner of the retired node should not have to wait for the service period to end in order to get their portion. There will be more requirements for the process, but I think the tokenomics RFC is the right place to talk about it, at least the TRAC/OTP portion of retirement.

Yeah, we clearly need this in the long run.

Lagosta56 commented 1 year ago

First of all, I apologize for not knowing how to write in English. I'm just a Brazilian analyst who has been watching this project with great affection, trying to understand all the technology involved here. Honestly, I'm not understanding it very well. What I do know is how to find great opportunities, and this is without a doubt the greatest of them. Everything we're seeing applied in this ecosystem really catches my attention. I'd like to congratulate everyone; you have in your hands the greatest diamond ever seen in the history of the financial market. We are facing the greatest and most valuable project for the evolution of the human race, something never seen before. My projections point to a value of hundreds of dollars in a not-too-long move.
Looking at a longer horizon, seeing the partnerships and the minds dedicated to building the greatest physical and computational control product in history, I can see values in the thousands of dollars. Now, the question that remains in my most humble heart... what will you do with so much power in your hands? A big hug, and may the future be full of good deeds!

UniMa007 commented 1 year ago

Hello TraceLabs Team,

First of all, thanks for the great read! One can only imagine how much effort you put into the tokenomics and V6 in general!

I've tried to put myself in the position of running a full node under the described tokenomics and penalties, and asked myself under which conditions I would feel safe putting that amount at stake. But this is just my personal risk awareness.

I also tried to look at the worst-case scenario and think purely in terms of profit, like a capitalist would 😀

This is my feedback on service high availability.

Based on the read, my observation is:

The epoch time parameter becomes a huge lever on the service level of full nodes: the shorter the epoch time, the higher the overall uptime requirements for full nodes become.

In order to have a realistic chance of scooping all rewards, I have to ensure that my node is up for 100% of all challenges. (However, we know that unless you are AWS, we are talking about 99% or less.) The more challenges occur, the higher the chance that my node is non-responsive for one of them.
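A rough way to quantify that (my own model, not from the RFC): if challenges land at random moments, a node with uptime u answers all k of them with probability u^k:

```python
u = 0.99  # 99% uptime, already optimistic for a cheap VPS
for k in (10, 100, 1000):
    print(f"{k:4d} challenges: {1 - u**k:.1%} chance of missing at least one")
# 10 -> 9.6%, 100 -> 63.4%, 1000 -> essentially certain
```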

My conclusion is:

Depending on the epoch time and the surrounding parameters, such as the opportunity cost of a missed payout plus the slashing of my stake, one will most probably have to run a high-availability setup. Not just the network needs self-healing; so do my own nodes.

Looking at the enterprise-grade requirements and my personal stake, my gut feeling is definitely to use an HA setup.

Is it safe to assume (does that align with your opinion as well?) that we will need an HA setup, or will the tokenomics and parameters be aligned to a single OT node setup as it is right now?

If not, this would lead to an HA setup, and then down the rabbit hole of single-region vs multi-region or multi-cloud. A DevOps dream 😀

This would increase the infrastructure and operational cost quite a lot and ultimately impact the total cost of ownership (setup, maintenance, observation).

Questions:

I know this is not directly related to the tokenomics itself, but it looks to me like a technical consequence of it, so I'm trying to break the tokenomics down to node operation in order to challenge them in „real life scenarios". But maybe a single node will be sufficient, and then this comment is obsolete! (However, I assume a race for the most stable full nodes will start.)

Thanks again for the great work you are doing!

OTClub commented 1 year ago

OTC-RFC-14: OriginTrail Club on TRAC Tokenomics

Authors: OriginTrail Club

Contributors: BRX, Dmitry, LuKu, Milian, hottogo, K’walla, Calvin, Famous Amos

Date: Oct 14, 2022

Note: We thank BRX/otnoderunner for creating and gathering information used for this representative response.

Introduction

OriginTrail Club is actively gathering all community feedback concerning all RFCs by the core developer team of OriginTrail, assembling it together with our own point of view, and providing the team with an in-depth analysis of the subject matter.

Moving forward, we will strive to represent the core community's interest and allow every community member to be heard, including those who do not wish to open a GitHub account to comment.

Points of interest

  1. 50k TRAC minimum required per full node
  2. New stake slashing mechanism replacing litigations
  3. Current default stake slashing values: 5% of TRAC locked for 2 years
  4. Highest stake winning the bid
  5. Retiring a node
  6. Data replication on all R1
  7. Node backups
  8. Epoch lengths
  9. Lambda

Feedback

1. 50k TRAC minimum required per full node

The general consensus seems to be positive regarding the increased amount required to run a node. Given current market conditions, this is equivalent to a minimum of about $8,500 USD, which could be a hurdle for smaller TRAC holders, and smaller TRAC holders might also not attract enough delegators to run a node. In addition, the concern with this higher amount is whether the number of nodes would be enough for sufficient decentralization across all neighborhoods (i.e. the maximum number of nodes would probably be around 5-6k) and whether that number can change based on a higher token price in the future (i.e. at a $6 TRAC token price, $300,000 USD would be required to run a full node). Is there a mechanism for this 50k token amount to change? Will it be an automatic mechanism or manual? Will we know the parameters?

2. New stake slashing mechanism replacing litigations

This is a welcome change, as litigation in v5 and earlier would punish the node runner harder by completely removing the node runner's stake. Slashing, i.e. locking up a certain amount of tokens for a fixed duration and rendering them useless in the meantime, serves as a good deterrent for node runners and promotes good behavior. In the community there is unanimous agreement that slashing instead of litigation is an improvement to the ecosystem.

The disagreement lies within the stake slashing default values.

3. Current default stake slashing values: 5% of TRAC locked for 2 years

These values seem quite high without knowing the specifics. Node runners need a clearly defined definition of stake slashing. In other words: how long can a node remain offline without triggering a slash? How does a node know, despite being online, that it is functional and detected by the network? Is slashing multiplicative or additive? Can a node be slashed by several assets it holds, several times a day, or is it capped at one slashing per 24 hours? Will the slashed amount affect delegators as well, or just the full node runner?

A poorly designed slashing method could break the entire ecosystem. The benefits of running a node must always outweigh the risk of locking up funds. In real life, no centralized system is up 100% of the time. The stake slashing must have more metrics to distinguish bad actors from accidental outages.

In the event of a datacenter fire or any event causing a complete loss of data, how does a full node recover after rebooting with the same credentials but without any datasets? If a backup is available but is 1 week old, how likely is slashing if 1 week of assets has been lost? These are all questions that need answers for node runners to plan their node maintenance.

There is agreement in the community that both metrics (5% of TRAC, locked for 2 years) seem a bit high without knowing more details. For example, failing an epoch check for an assertion worth 0.01 TRAC and risking a 2-year lock of 2,500 TRAC (for a 50k full node) seems too harsh a punishment, if that is how slashing works. Stake slashing must be relative to the assertion amount and duration, and be a lot more reasonable. 2 years in crypto time is a lot and far too big a deterrent for node runners; a number such as 6 months should be considered instead. 5% of available TRAC for a 50k TRAC full node is 2,500 TRAC, which seems high if one low-paying service can trigger a slash.
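The asymmetry in numbers (assuming, as described above, that one failed check on a tiny assertion can trigger the full default slash):

```python
stake = 50_000           # minimum full-node stake, TRAC
penalty = 0.05 * stake   # 2,500 TRAC locked for 2 years
assertion_value = 0.01   # TRAC earned for the failed service
print(penalty / assertion_value)  # 250,000x the reward is at risk
```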

What happens to a node that has the minimum stake of 50k tokens and gets slashed by 5%? Is it disqualified from bidding on service agreements? Is the slashing pro-rata to the delegated TRAC, or pro-rata to how the rewards are paid out? I.e., if the node runner owns 50% of the stake but gets 75% of the reward, and the delegators own 50% of the stake but get 25% of the reward (example numbers), does the slashing hit the node runner and delegators equally, or do they split it 75:25 in line with what their rewards would have been?

A hold on the stake slashing mechanism should also be considered at the start of the V6 mainnet, to make sure the mechanism operates as designed in production.

4. Highest stake winning the bid

This is a hot topic in the node runner community. This black-or-white system might push node runners to constantly one-up their peers. For instance, node runner X with 50,000 TRAC will add 1 TRAC to beat node runner Y with 50,000 TRAC in the same neighborhood, winning every single assertion when bidding against each other. The difference in their staked amounts does not necessarily represent their quality of service, and slashing, if it happens, would represent an almost identical amount, making the risk similar but the reward skewed towards node runner X. If this type of behavior continues, we will end up with a huge number of mega nodes, ultimately hurting decentralization.

A great alternative would be to erase the all-or-nothing system and replace it with a probabilistic system.

Here is an example:

The R1 set (nodes bidding on the assertion) holds 500,000 TRAC in total. Each node has a fixed probability of entering R0 (winning the assertion).

R1:

Node A = 50,000 TRAC = 50/500 = 10%

Node B = 50,000 TRAC = 50/500 = 10%

Node C = 100,000 TRAC = 100/500 = 20%

Node D = 150,000 TRAC = 150/500 = 30%

Node E = 150,000 TRAC = 150/500 = 30%

The dice are thrown and the R0=3 group becomes: Node B, C, E.

An alternative method would have the dice thrown 3 times (if R0=3) instead of once, with each throw removing the previous winner.
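A quick Monte Carlo check of this three-draw variant (our own illustration, using the stakes above):

```python
import random

stakes = {"A": 50_000, "B": 50_000, "C": 100_000, "D": 150_000, "E": 150_000}
trials, hits = 100_000, {n: 0 for n in stakes}
for _ in range(trials):
    pool = dict(stakes)
    for _ in range(3):  # R0 = 3 sequential stake-weighted draws
        winner = random.choices(list(pool), weights=list(pool.values()))[0]
        hits[winner] += 1
        del pool[winner]
print({n: round(hits[n] / trials, 2) for n in stakes})
# Typical result: A/B ~0.38, C ~0.65, D/E ~0.79, so the 10% nodes
# land in R0 about a third of the time instead of never.
```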

Another issue with the above example is how each node is pitted against a randomly assigned neighborhood. Node E may have 150,000 TRAC and a 30% chance of winning the service; however, if randomly assigned to a neighborhood with much bigger nodes, that chance can drop to 10%, like Node A's (example numbers). In other words, a node can have either a 30% or 10% chance of winning a service based on a completely random factor. The win percentage could be based on all nodes in all neighborhoods to make it fair; the neighborhood should only be used to determine “opportunistic” assertion replications, not win percentage.

5. Retiring a node

More details are needed to understand the process of retiring a node without triggering a slash. There needs to be a way to withdraw our commitment responsibly and in a reasonable amount of time.

To balance supply and demand, both must be flexible. As seen in versions prior to V6, node numbers did not go down despite lower demand, simply because node runners were obliged to finish their jobs or risk litigation, so they might as well keep accepting new jobs. This caused a race to the bottom, with nodes accepting jobs for a negative return, which shouldn't happen in this next iteration of the DKG.

Now, with the introduction of epoch checkpoints where node runners get rewarded after proving they did their job, there is a perfect opportunity to let a full node retire responsibly while allowing R1 to bid on the retiring node's ongoing services. That way it does not invite bad behavior (the node still risks getting slashed if unable to provide proof of service), while promoting responsible continuation of the service through another round of bidding within the neighborhood and allowing the node to retire safely. This, in turn, allows the supply of nodes to shrink flexibly to match demand when node returns are low.

6. Assertion replication on all R1

It would be ideal to get some clarity on the process of assigning nodes to neighborhoods, both at initial node inception and later when a new neighborhood is created. How does the system decide whether more nodes are required in a certain neighborhood? What if the neighborhood depreciates? And how does the DKG ensure all nodes have fair access to opportunities to host data uploaded to the DKG, so that a full node does not end up in some forsaken N that gets no new assertions?

Full node runners must also have the option to choose whether they want to hold “opportunistic” assertion replications within their neighborhood. Currently, all of N are forced to replicate the same assertions while only a select few, R0, are rewarded. The full node runner must be offered the choice of whether or not to host assertions opportunistically, in order to be able to perform “self-healing” of replicas.

Another point of interest in this neighborhood system is that node runners might want to "re-roll": close out their node and make a new one to try to get into a more favorable neighborhood. Will this occur often going forward? Will nodes want to move out when a bigger node moves in and impacts their returns? Will we be constantly re-making our nodes with new node IDs, chasing the weakest neighborhoods? Do we need a promotion system to counteract this, so a high-performing node gets promoted into tougher neighborhoods, like the Premier League?

7. Node backups

Node backups also need to be discussed, including how quickly a node needs to be restored in case of an outage. The team must provide more guidance on the best ways to back up in order to prevent slashing. Some community members are discussing the possibility of backing up the node on the DKG itself.

8. Epoch lengths

Epochs, or checkpoints during service, should be long enough to give node runners some leeway to fix their node in case of an outage. We need more information about the epochs' length and whether it is fixed for all assertions or not.

9. Lambda

Lambda has been a hot issue plaguing v5 node runners and isn't mentioned once in this RFC. Can we have a word about how lambda has been adapted to V6? There was also an RFC (OT-RFC-06) discussing automated lambda. Is that approach still relevant, and would it be included some time after v6 is released? Is it possible to disallow a negative lambda or a lambda of 0?

Conclusion

Despite all the previous points, the overall community feedback on OT-RFC-14 is positive. We understand this is an introductory RFC to the new tokenomics of V6 and does not give more technical details; however, most of the aforementioned points would still hold true under similar circumstances, and we hope to continue working alongside the community and the core team to flesh out the best iteration of V6.

DalSlacker commented 1 year ago

I just want to post in support of the points raised by botnumberseven, UniMa and OTClub. They address all the issues I have with the RFC, and I'm glad they posted so thoroughly (saving me the time of having to do it myself :)

UniMa007 commented 1 year ago

Got another time slot for feedback :)

As you outlined, it is possible not just to run a full node using your own stake, but also to get stake delegated from others.

I really love that feature and I think it will boost the tokenomics a lot! It will allow more decentralization and also governance by design.

Here too, my feeling is that there has to be a good decision about the maximum allowed ratio between your own staked tokens and delegated tokens. This is an edge case, but isn't a frame limited by its edges? 😀

Let's say I have a node with 20K of my own tokens and get 80K delegated. The ratio is 1 staked : 4 delegated. At that point, delegated tokens outnumber staked tokens several times over, which has a few consequences:

Depending on how delegation to a node is implemented, a malicious node runner can abuse the system.

I firmly believe we can rule out an exit scam, but let's take a look at extortion or node retirement.

Each 5% penalty takes 1% of the total stake from my own tokens and 4% from delegator tokens. The same holds for missed challenges: 20% of the missed token payouts fall on me, 80% on the delegators.
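In absolute terms (assuming slashing is applied pro-rata across the pooled stake, which the RFC does not yet specify):

```python
own, delegated = 20_000, 80_000
penalty = 0.05 * (own + delegated)              # 5,000 TRAC total
print(penalty * own / (own + delegated))        # 1,000 TRAC from the runner
print(penalty * delegated / (own + delegated))  # 4,000 TRAC from delegators
```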

Even if slashing only addresses the total locked tokens on assets, with a high amount of locked tokens I can see a scenario where, if running the node itself isn't profitable enough, a problem occurs:

  1. If the node runner decides to close down, more severe damage is done to stakers than to the runner.

  2. If the node's performance goes down and/or people withdraw their unlocked tokens, the node will eventually no longer be an eligible full node. Then point 1 follows.

  3. People will have to keep their tokens on the node, as the node runner can threaten to kill their node.

Long story short: will there be a fixed ratio requirement for full node runners, where a full node owner has to provide at least the same amount of tokens as delegators (or even more)?

Will it be possible to move your existing locked TRAC to another full node that takes over the job of answering challenges, or is it locked until the service period is over? (Same concept as with neighborhoods: you could move your locked tokens to another neighbor providing better service.)

branarakic commented 1 year ago

Hi Tracers,

Thanks again for the incredible feedback and support for this RFC. Due to the interest, we held a special OT-RFC-14-focused AMA in the OriginTrail Discord, where we addressed most of the questions posted here, and some more. Feel free to read through the discussion there.

The team is already moving towards the implementation of the specification in the RFC, which will yield additional details on the specific parameters and implementation mechanisms in the form of code and documentation in the respective repos.

With that we will be closing this discussion.

Cheers!