Closed Swader closed 5 years ago
@JustinDrake mentioned something curious in his talk at Devcon - that Infura would be running beacon nodes
I don't remember saying this, certainly not in the VDF/randomness talk :)
I could be wrong - I think it was in Q&A maybe, I'll check when the official video with solid Q&A is up, but I was running on fumes and may have mixed something up, in which case I apologize. I'll get to the bottom of it and find you a direct quote from whoever said it, I'm 100% certain I heard it there because it really hit me and woke me up.
Has any more of this been figured out?
I don't see why not take the VIPnode (full node incentive) approach of charging validators a dust amount for connecting? Add slots to beacon nodes, validators commit some eth to slot to reserve it, and once time elapses they have to renew or else they're not guaranteed access to the beacon node any more. If they want to exit from the commitment sooner, they lose 10% of their stake to the beacon node they were attached to (early cancellation fee), otherwise they stay in for as long as they paid for.
This has two benefits:
Any obvious downsides apart from making beacon nodes DDoS targets?
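The reservation scheme proposed above could be sketched roughly like this. Everything here is illustrative: the class and method names, the slot/deposit model, and the 10% fee constant are assumptions drawn from the comment, not any spec.

```python
from dataclasses import dataclass
import time

EARLY_EXIT_FEE = 0.10  # 10% early-cancellation fee, per the proposal above

@dataclass
class Slot:
    validator: str
    deposit: float    # ETH committed to reserve the slot
    expires_at: float # unix timestamp when the reservation lapses

class BeaconNodeSlots:
    """Hypothetical sketch of the VIPnode-style reservation scheme."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.slots = {}  # validator -> Slot

    def reserve(self, validator: str, deposit: float, duration: float) -> bool:
        # Purge lapsed reservations, then grant a slot if capacity allows.
        now = time.time()
        self.slots = {v: s for v, s in self.slots.items() if s.expires_at > now}
        if len(self.slots) >= self.capacity:
            return False
        self.slots[validator] = Slot(validator, deposit, now + duration)
        return True

    def cancel_early(self, validator: str) -> float:
        # Early exit refunds the deposit minus the cancellation fee;
        # the fee stays with the beacon node operator.
        slot = self.slots.pop(validator)
        return slot.deposit * (1 - EARLY_EXIT_FEE)
```

Once a reservation expires the slot silently frees up on the next `reserve` call, which matches the "renew or lose guaranteed access" behaviour described.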
Needs a cost/benefit analysis using real numbers - this will cut into validators' profits, and if those are already low then... And there are disadvantages/exclusionary factors to running a beacon node.
Is there any issue with conflict of interest for beacon node operators also validating?
I think there is a bit of confusion here.
Validators will almost always run their own node. Their node will consist of a beacon chain plus any shards they are currently assigned to. A validator not running their own node and just plugging into some service should be considered highly insecure and not a good practice.
Some validators might even choose to run multiple nodes in multiple locations and talk to them all to enhance their security depending on their own needs.
Also remember that slashing is proportional to the number of recent faults in the network. If you connect to a remote node that has a high number of validators connected to it, then you greatly increase the amount of capital you have at risk of being slashed in the event that this super node is hacked.
I expect validating setups to be highly diverse, but this proportional slashing mechanism will dissuade validators from all using the same solution.
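The proportional slashing idea above can be sketched in a few lines. The multiplier and the exact formula are assumptions for illustration only (the real penalty function is defined by the spec); the point is that the penalty scales with how much stake was slashed recently, so correlated failures cost far more than isolated ones.

```python
def proportional_slash_penalty(balance: float,
                               recently_slashed: float,
                               total_staked: float,
                               multiplier: float = 3.0) -> float:
    """Illustrative sketch: penalty grows with the fraction of total
    stake slashed in the recent past, capped at the full balance."""
    fraction = min(multiplier * recently_slashed / total_staked, 1.0)
    return balance * fraction
```

With these assumed numbers, a lone fault in a 1000 ETH pool costs a 32 ETH validator almost nothing, while a hacked super node that triggers 100 ETH of simultaneous slashing costs each affected validator roughly 30% of its balance - which is the disincentive against everyone using the same hosted node.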
Right - there certainly is confusion here... Are these different kinds of node? (Ie. validator and beacon nodes: https://our.status.im/two-point-oh-the-beacon-chain/) Is this all documented and explained somewhere?
Something doesn't sit right here. If we want validators running on smaller, weaker devices, then we're obviously aiming to decentralize staking and network security across the globe - including in 2.5th and maybe 3rd world countries. Those validators will almost certainly not be running Beacon nodes to connect their validator(s) to (cost, bandwidth, network stability), especially because privacy and DDoS resistance go out the window then.
The whole point as I understood it was to have Beacon nodes provided by someone, and others who want to validate connect to them. This results in a public service provider and private validators as service consumers. If I'm running my own Beacon node to power my own validator clients (if I can afford to run a Beacon node), then validator privacy means nothing, because someone taking down my Beacon node takes down my validators too. There should absolutely be a way to whitelist beacons and to connect to them round-robin or fallback-style, and if one of them is the validator's own node, that's fine.
It just doesn't feel attack resistant enough to have the validator run everything - not just their own beacon, but their own VDF ASIC too. It's a technical mess that excludes citizens from validation imo.
Edit for @crsCR: A discussion about node types and something else
This node infrastructure will obviously be the backbone of the PoS blockchain network - figuring it out shouldn't be left till later... Consensus on design needs gathering ASAP, as the community is already being informed/misinformed (however well intentioned) - eg. Bruno's efforts and https://www.reddit.com/r/ethereum/comments/9plhgu/should_we_be_discussing_the_economics_behind/ & https://www.reddit.com/r/ethereum/comments/a0wau3/how_can_i_best_be_ready_for_staking/
Realistic hardware, network, staffing resource requirements need deciding on for types of node (which seem to still need deciding on themselves) along with consideration/cross-referencing for and with economic, security (eg. want to use hardware key management = vps & cloud providers no good), ethical factors, etc. What is practical and what will be effective?
Validators do not need to run a VDF ASIC. If we do utilize VDFs, the solvers of the VDFs can be an entirely distinct set (although I imagine some validators will run the ASIC). Solutions are provided to the protocol, and validators (and node operators in general) only need to verify the solutions.
If you are staking with 32+ ETH, you are responsible for getting trustable data to sign and broadcast. This is part of the requirements of being a validator. The most straightforward way to get this information is to run a node. You could also pay someone else for the data, but you are taking on serious counterparty risk.
...Don't need to but may want to - https://youtu.be/QDwaAnhSJk8?t=3971
[From that vid... Hmmm - so min 3% of (current) total ETH supply will need to be staked initially to get things rolling and this will then effectively be locked for 1-2 years until sharding goes live.]
So there's the incentive for running beacon nodes then - all validators will want to run them...
Have requirements such as hardware (cpu/ram/harddrive capacity and I/O ability), network (latency, bandwidth, uptime), security (privacy, DDOS) for a beacon node been roughly calculated yet? What kind of operations will those running a node have to carry out - trivial or?
Not sure on scope of testnet but maybe this stuff needs consideration for: https://github.com/ethereum/eth2.0-specs/issues/233
I don't remember saying this, certainly not in the VDF/randomness talk :)
I could be wrong - I think it was in Q&A maybe, I'll check when the official video with solid Q&A is up, but I was running on fumes and may have mixed something up, in which case I apologize. I'll get to the bottom of it and find you a direct quote from whoever said it, I'm 100% certain I heard it there because it really hit me and woke me up.
Mentioned in here: https://github.com/ethereum/pm/blob/master/All%20Core%20Devs%20Meetings/Eth1x%20Sync%201.md
Not sure on scope of testnet but maybe this stuff needs consideration for: #233
Yeah, this would be a great metric to measure in a testing environment. Please join the call on Thursday to discuss.
My point being that beacon nodes will in effect form the lowest tier of node minimum requirements - as virtually all validators will need/want to run them due to counter-party risk. (Can Rocketpool skirt this issue by acting as a trusted source of data? Hmmm.)
It seems like this fact may already be taken for granted by those with more direct insight into plans and structure - I'm not sure. However, I do know the info on this that's being presented to the community is muddled/confusing.
Anyways, if a test doc similar to this is to be drawn up, then I'd hope the following kinds of requirements will be detailed - with plans to analyse further:
Eg. hardware (cpu/ram/harddrive capacity and I/O ability), network (latency, bandwidth, uptime), security (privacy, DDOS) for a beacon node. What kind of operations will those running a node have to carry out - trivial or?
...Along with realistic consideration/cross-referencing for and with economic, security, logistics, socio-ethical factors, etc. What is practical and what will be effective? (It's no good if things just work on a testnet of enterprise-spec servers in datacentres (or only on cloud providers) - not with so many validators required.)
Yes, I completely agree. These are the exact specifications I'm trying to identify based on community input so we can provision an appropriate testing environment. We're moving the call to Friday, but please join because input like this is exactly what I've been looking for.
I’m sure you can find others that can speak far more knowledgeably on such stuff - I know a bit but not a lot. (Just enough to be concerned - though I’d like to see a healthy level of inclusion re. who can validate, misinformation and lack of consideration won’t help anyone.)
Besides, that’s 10.30pm for me so...
I don't think this is being discussed enough. On the latest 2.0 call, it was briefly touched on but all the discussion seems to have amounted to is "We hope it won't be a problem", which to me seems like the exact opposite of how things usually work in Eth.
I would like to once again bring attention to this and ask for feedback from @vbuterin, @djrtwo and @JustinDrake who seem most knowledgeable on the topic.
My main concern is the creation of SaaS beacon nodes providing data to validators. These SaaS nodes can charge for this service if not directly incentivized by the network (the topic of this issue), but they centralize the new Ethereum heavily into Infura-like entities again. We could additionally incentivize validators to run their own beacon nodes, so that they get lower latency or whatnot, but this destroys the concept of validator privacy because attacking a validator's own beacon node (which is public) == attacking their validators, at least DoS-wise.
An alternative in this case would be a fallback list of nodes provided by others, to which the validator can connect if the primary beacon node dies. That would preserve liveness and prevent slashing despite the beacon node being DoSed. But there's an additional attack vector: everyone would probably have the same fallback list, so in the event of a network-wide DoS on a specific client, everyone would auto-switch to something like Infura - which is a problem if Infura can't handle it, is down, or is compromised itself and turned into a saboteur.
One way around this is to let validators whitelist a set of nodes to grab data from, go through them round-robin seeded by the RANDAO randomness, and make sure that every time the primary beacon node dies, the next one selected is truly random - but this again throws the concept of validator privacy away, and brings beacon node incentivization back into question.
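The whitelist-plus-seeded-round-robin idea could look something like the sketch below. The function names and the use of SHA-256 over the seed are assumptions for illustration; the real protocol would derive its randomness from RANDAO per the spec.

```python
import hashlib

def fallback_order(whitelist, randao_seed: bytes):
    """Hypothetical sketch: derive a deterministic but unpredictable
    ordering of whitelisted beacon nodes from a RANDAO-style seed, so
    the next node tried after a failure can't be guessed by an
    attacker who can't bias the seed."""
    def key(node: str) -> bytes:
        return hashlib.sha256(randao_seed + node.encode()).digest()
    return sorted(whitelist, key=key)

def next_node(whitelist, randao_seed: bytes, failed: set):
    # Walk the seeded order, skipping nodes already observed to be down.
    for node in fallback_order(whitelist, randao_seed):
        if node not in failed:
            return node
    return None  # whole whitelist exhausted
```

Note the tension described above is visible even here: whichever node ends up serving the validator learns its identity, so the randomized ordering helps with DoS resistance but not with privacy.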
In any case, I think this warrants further discussion and would kindly ask for feedback or clarification if I'm getting something wrong.
Further discussion here: https://www.reddit.com/r/ethereum/comments/abrfz3/beacon_nodes_and_incentivization/
Excellent analysis on the topic of profitability: https://tokeneconomy.co/validator-economics-of-ethereum-2-0-part-one-bc188173cdca
I'm tempted to steer this discussion towards ethresear.ch, where this (non-trivial!) topic has been raised. There isn't much that is addressable here, especially for phase 0. The protocol does incentivise validators to run beacon nodes, and there are other external incentives to run beacon nodes. Feel free to reopen :)
@JustinDrake can you link the relevant ethresear.ch topic here please?
Also pinging @Mikerah and @crsCR for easier teleportation.
Those links on ethresear.ch might have similar titles, but they aren't active or as encompassing and, I believe, in part actually led to this thread being posted here - for info and discussion on the wider issue... Why it's a good idea to 'steer away' to there, and why much isn't 'addressable' here, I don't know.
Eg, here, I see it's OK for this to be given as a statement of fact but with no elaboration (even when following requests for clarification):
The protocol does incentivise validators to run beacon nodes, and there are other external incentives to run beacon nodes
How? Adequately?
Anyways, you can only ask questions so many times before coming to the conclusion that a lack of answers means the response is in some way negative... Sorry, but I don't think relying on blind faith and altruism is going to suffice - if that's the plan. (Nobody seems to want to properly explain a different plan - if there is one.)
As a practical example, I run a node now but I don't hold any ETH and won't buy any to allow my validating PoS unless I judge it to be safe, relatively economically viable and ethically sound. Sooner or later, the realities (whatever they are) of validating the proposed ETH PoS will take effect - better sooner, IMO, for everyone's sake.
Yes, this is quite unsatisfactory and disappointing. I'm getting a "heads buried in sand" vibe, too. This stonewalling has now been going on for months. I'll continue pursuing it in other venues - individual dedicated posts and such, pointing to this issue as the root discussion. Perhaps someone like @sassal from EthHub with their economics calculations can also chime in and we can go into detail collectively on our own.
@crsCR, @Swader: Reopening for you guys :)
Closing. We can reopen when we have more concrete data on node requirements.
A super dumbed down way of thinking about beacon nodes vs validator clients is geth and geth attach.
In that regard, a beacon node (which will bear the bulk of the work in the beacon chain) will be passing data on to validators, who do their magic and pass it back. The incentive for running validators (which can "attach" to a local beacon node, or a hosted one) is clear - stake is in the system, and "block rewards" are added to a validator's balance (though I think it should be a separate balance, so as not to increase the vote and power of a validator the longer they are in the system). But what is the incentive for running beacon nodes if they can run without validators, and (anyone's) validators can just connect to them?
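The geth / geth attach analogy above can be made concrete with a toy sketch. All names and the duty/submit interface here are invented for illustration - this is not any client's actual API, just the division of labour being described: the beacon node tracks the chain and serves data, the validator client only holds keys and signs.

```python
class BeaconNode:
    """Plays the role of `geth` in the analogy: does the heavy chain
    work and hands duties to whichever validator clients attach."""
    def __init__(self):
        self.head = 0  # toy stand-in for chain state

    def get_duty(self, validator_id):
        # Tell an attached validator what to sign next.
        return {"slot": self.head + 1, "validator": validator_id}

    def submit(self, signed_duty):
        # Accept the signed work back and advance the toy chain.
        self.head = signed_duty["slot"]

class ValidatorClient:
    """Plays the role of `geth attach`: lightweight, key-holding,
    and entirely dependent on some beacon node for data."""
    def __init__(self, validator_id, node):
        self.validator_id = validator_id
        self.node = node  # could be local or a hosted node

    def run_once(self):
        duty = self.node.get_duty(self.validator_id)
        duty["signature"] = f"sig-by-{self.validator_id}"  # stand-in for BLS signing
        self.node.submit(duty)
        return duty
```

The asymmetry the question points at is visible in the sketch: `ValidatorClient` is trivial to run and directly rewarded, while `BeaconNode` does the work and - absent some mechanism - earns nothing for serving other people's validators.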
@JustinDrake ~mentioned something curious in his talk at Devcon - that Infura would be running beacon nodes~ (looks like maybe it wasn't him? Anyone remember who it was?). I understand from my discussion with @djrtwo that he may have meant superfull archive nodes that can serve rent-expired restoration purposes and also sync all shards all the time, and validators would then be connecting to such nodes, but regardless of the type of node that was implied there for beacon nodes, I don't see the incentive for running beacon nodes + validators as opposed to just validators and hooking onto someone else's beacon node. I see this as a replay of the problem we have now - too few people running full nodes with LES slots for light clients to attach to.
So I'd like to open up a discussion on beacon node incentivization. Am I missing something, has this been discussed somewhere? If so, I would appreciate if someone could point me to the relevant discussion.
References: