ethereumclassic / ECIPs

https://ecips.ethereumclassic.org

ECIP-1049: Change the ETC Proof of Work Algorithm to Keccak-256 #13

Closed p3c-bot closed 3 years ago

p3c-bot commented 5 years ago

Recent thread moved here (2020+)


lang: en
ecip: 1049
title: Change the ETC Proof of Work Algorithm to Keccak-256
author: Alexander Tsankov (alexander.tsankov@colorado.edu)
status: LAST CALL
type: Standards Track
category: core
discussions-to: https://github.com/ethereumclassic/ECIPs/issues/13
created: 2019-01-08
license: Apache-2.0

Change the ETC Proof of Work Algorithm to Keccak-256

Abstract

A proposal to replace the current Ethereum Classic proof of work algorithm with the EVM-standard Keccak-256 ("ketch-ak").

The reference hash of string "ETC" in EVM Keccak-256 is:

49b019f3320b92b2244c14d064de7e7b09dbc4c649e8650e7aa17e5ce7253294
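Later in this thread it is noted that NIST SHA3-256 and Ethereum's Keccak-256 differ only in padding. As a minimal sketch of that distinction (assuming Python's standard `hashlib`, which ships only the NIST variant), the following shows that SHA3-256 does not reproduce the control hash above:

```python
import hashlib

CONTROL = "49b019f3320b92b2244c14d064de7e7b09dbc4c649e8650e7aa17e5ce7253294"

# NIST SHA3-256 uses domain-separation padding 0x06; Ethereum's keccak256
# uses the original Keccak padding 0x01, so digests differ for the same input.
nist_digest = hashlib.sha3_256(b"ETC").hexdigest()

assert len(nist_digest) == 64   # both are 256-bit digests
assert nist_digest != CONTROL   # same permutation, different padding
```

Reproducing the control hash itself requires a Keccak implementation with the original padding, e.g. a third-party library.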

Implementation Plan

Motivation

Rationale

Reason 1: Similarity to Bitcoin

The Bitcoin network currently uses the CPU-intensive SHA-256 algorithm to evaluate blocks. When Ethereum was deployed, it used a different algorithm, Dagger-Hashimoto, which eventually became Ethash at the 1.0 launch. Dagger-Hashimoto was explicitly designed to be memory-intensive, with the goal of ASIC resistance [1]. It has been demonstrably unsuccessful at this goal, with Ethash ASICs now readily available on the market.

Keccak-256 is the product of decades of research and the winner of a multi-year contest held by NIST that rigorously verified its robustness and quality as a hashing algorithm. It is one of the few hashing algorithms besides SHA2-256 that is approved for military and scientific-grade applications, and it can provide sufficient hashing entropy for a proof of work system. This algorithm would give Ethereum Classic an advantage in mission-critical blockchain applications that are required to use provably high-strength algorithms. [2]

A CPU-intensive algorithm like Keccak-256 would allow both the uniqueness of a fresh PoW algorithm that has not had ASICs developed against it and, at the same time, the organic optimization of a dedicated and financially committed miner base, much the way Bitcoin did with its own SHA-2 algorithm.

If Ethereum Classic is to succeed as a project, we need to take what we have learned from Bitcoin and move towards CPU-hard PoW algorithms.

At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware. - Satoshi Nakamoto (2008-11-03) [3]

Note: Please consider this is from 2008, and the Bitcoin community at that time did not differentiate between node operators and miners. I interpret "network nodes" in this quote to refer to miners, and "server farms of specialized hardware" to refer to mining farms.

Reason 2: Value to Smart Contract Developers

In Solidity, developers have access to the keccak256() function, which allows a smart contract to efficiently calculate the hash of a given input. This has been used in a number of interesting projects launched on both Ethereum and Ethereum Classic, most notably a project called 0xBitcoin [4], on which the ERC-918 spec was based.

0xBitcoin is a security-audited [5] dapp that allows users to submit a proof of work hash directly to a smart contract running on the Ethereum blockchain. If the sent hash matches the given requirements, a token reward is trustlessly dispensed to the sender, along with the contract reevaluating difficulty parameters. This project has run successfully for over 10 months, and has minted over 3 million tokens [6].
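As a rough illustration of the mint mechanic described above, here is a toy off-chain model in Python. This is not the actual ERC-918 or 0xBitcoin interface; the names and parameters are hypothetical, and `hashlib.sha3_256` stands in for the EVM's `keccak256` (the padding differs):

```python
import hashlib

def meets_target(challenge: bytes, sender: bytes, nonce: int, target: int) -> bool:
    # ERC-918-style check: the hash of (challenge ++ sender ++ nonce)
    # must be numerically below the current mining target
    digest = hashlib.sha3_256(
        challenge + sender + nonce.to_bytes(32, "big")
    ).digest()
    return int.from_bytes(digest, "big") < target

# toy search with an easy target (~1 in 16 hashes succeeds)
challenge, sender = b"\x11" * 32, b"\x22" * 20
easy_target = 2 ** 252
nonce = next(n for n in range(100_000)
             if meets_target(challenge, sender, n, easy_target))
```

In the real contract, a successful mint call would also dispense the token reward and re-evaluate the difficulty; that bookkeeping is omitted here.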

Given the direction Ethereum Classic is taking (a focus on Layer-2 solutions and cross-chain compatibility), being able to evaluate proof of work on chain will be tremendously valuable to both smart-contract developers and node software writers. This could greatly simplify interoperability.

Implementation

Example of a smart contract trustlessly computing the Keccak-256 hash of a hypothetical block header.

Here is an analysis of Monero's nonce distribution for CryptoNight, an algorithm similar to Ethash that also attempts to be "ASIC-resistant". It is very clear in the picture that before the hashing algorithm was changed there was a distinct nonce pattern. This is indicative of a major failure in a hashing algorithm, and it should illustrate the dangers of disregarding proper cryptographic security. Finding such a hashing pattern would be far harder in a proven system like Keccak-256:


Based on analysis of the EVM architecture here, there are two main pieces that need to be changed:

  1. The proof of work function needs to be replaced with Keccak-256.
  2. The function that checks the nonce header in the block needs to accept Keccak-256 hashes as valid for a block.
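A hedged sketch of what the second change might look like, with a hypothetical `FORK_BLOCK` activation height and `hashlib.sha3_256` standing in for consensus Keccak-256 (the Ethash branch is left as a stub):

```python
import hashlib

FORK_BLOCK = 10_000_000  # hypothetical activation height, not from the ECIP

def verify_ethash(header_hash: bytes, nonce: bytes, target: int) -> bool:
    # pre-fork blocks keep the existing Ethash verification (omitted here)
    raise NotImplementedError

def check_pow(block_number: int, header_hash: bytes, nonce: bytes, target: int) -> bool:
    if block_number < FORK_BLOCK:
        return verify_ethash(header_hash, nonce, target)
    # post-fork: a plain Keccak-256 of header hash and nonce
    # (sha3_256 stands in; consensus code would use keccak256 padding)
    digest = hashlib.sha3_256(header_hash + nonce).digest()
    return int.from_bytes(digest, "big") <= target

# a maximal target accepts any digest, so this returns True
assert check_pow(FORK_BLOCK, b"\x00" * 32, b"\x00" * 8, 2 ** 256 - 1)
```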


A testnet implementing this ECIP is currently live, with more information available at Astor.host.

References:

  1. https://github.com/ethereum/wiki/wiki/Dagger-Hashimoto#introduction
  2. https://en.wikipedia.org/wiki/SHA-3
  3. https://satoshi.nakamotoinstitute.org/emails/cryptography/2/
  4. https://github.com/0xbitcoin/white-paper
  5. https://github.com/EthereumCommonwealth/Auditing/issues/102
  6. https://etherscan.io/address/0xb6ed7644c69416d67b522e20bc294a9a9b405b31

Previous discussion from Pull request


p3c-bot commented 5 years ago

Work has officially begun on Astor testnet - a reference implementation of an Ethereum Classic Keccak256 testnet. Any help is appreciated.

Astor Place Station in New York is one of the first subway stations in the city (hence the name), and we plan for the testnet to be resilient while also delivering far greater performance by replacing the overly complicated Ethash proof of work algorithm.

realcodywburns commented 5 years ago

"I think the intent of this ECIP is to just respond with an ECIP because the ECIP knowingly isn't trying to solve the problems of the claimed catalyst (51 attack). ETC can change it's underwear in some way but it has to have some type of super power than 'just cause'. I reject." - @stevanlohja https://github.com/ethereumclassic/ECIPs/pull/8#issuecomment-461321539

Harriklaw commented 5 years ago

First and most crucial question: do we need an algo change, and how could an algo change help us? For me there are two aspects that should be examined at the same time.

The first is how secure the new PoW is versus the old one. As you nicely wrote, a well-examined algo such as Keccak-256 is both scientifically reviewed and, as the successor of SHA-2, has a high probability of succeeding as SHA-2 did with Bitcoin. This can be controversial though, so this article can strengthen the pros of Keccak; it is considered that it may be quantum resistant: https://eprint.iacr.org/2016/992.pdf

"Our estimates are by no means a lower bound, as they are based on a series of assumptions. First, we optimized our T-count by optimizing each component of the SHA oracle individually, which of course is not optimal. Dedicated optimization schemes may achieve better results. Second, we considered a surface code fault-tolerant implementation, as such a scheme looks the most promising at present. However it may be the case that other quantum error correcting schemes perform better. Finally, we considered an optimistic per-gate error rate of about 10^-5, which is the limit of current quantum hardware. This number will probably be improved in the future. Improving any of the issues listed above will certainly result in a better estimate and a lower number of operations, however the decrease in the number of bits of security will likely be limited."

The second aspect we should examine is how the algo change would influence decentralization, and this topic is more controversial. As economics are the most decisive factor in ASIC development (assuming that ETC will be valuable), the change would lead to new ASICs very soon. For me the real question is: how soon? And the answer is clearly hypothetical. Why is this a crucial question? First of all, if ASICs already exist, that would be unfair and centralizing for the interval in which new companies develop and evolve their own hardware.

If this is not the case, companies that already produce SHA-2 and other CPU-intensive ASICs will produce SHA-3 ones very fast, as they already have the know-how and have learned to adapt in this hardware/algo chase game very well. But do we want that? Do we want big ASIC companies to have the upper hand in ETC mining hardware production? If we accept that decentralization is already well established in the crypto hardware industry (meaning ASIC companies) and many companies have already joined the space, then decentralization for SHA-3 will be achieved soon. But if we accept that the GPU industry is a better space for our community (for decentralization purposes), then we should consider that any change to a CPU-intensive algo will provoke massive change for our miners and mining ecosystem.

Ethash, compared to Keccak, is memory intensive, and GPUs are pretty competitive with ASICs right now: 1) efficiency: RX 580 = 3.33 W/MH and A10 = 1.75 W/MH; 2) price: RX 580 = $150 ($5/MH) and A10 = $5600 ($11/MH). So the real question comes down to: CPU-intensive vs memory-intensive? GPUs plus ASICs, or ASICs only? Is BTC or ETC more decentralized? I think for now GPUs plus ASICs in the Ethash ecosystem make a healthy environment for decentralized hash power, although BTC seems to be well decentralized too.

Conclusion: for me an algo change would be profitable long term, as Keccak-256 seems superior to Ethash in terms of security. Nevertheless, Ethash seems superior in terms of decentralization. Short term, we should consider other ways to reduce the risk of a future 51% attack and allow the crypto mining industry to mature. That would lead to a more decentralized mining hardware industry and consort with our vision of a better decentralized ecosystem.

p3c-bot commented 5 years ago

Thank you for your post @Harriklaw. The plan for this switch is to create a SHA3 testnet first, for miners and hardware manufacturers to use, become comfortable with, and collect data on. Once we start seeing Flyclients, increased block performance, and on-chain smart contracts that verify the chain's proof of work, the mining community will see the tremendous value of this new algorithm and support a change.

RE: decentralization. I consider Ethash to already be ASIC'd, and as ETC becomes more valuable it will become less and less feasible to mine it with a GPU anyway. The concern is that right now, Ethash is so poorly documented that only 1 or 2 companies know how to build a profitable ASIC for it. However, with SHA3 it is conceivable that new startups and old players (like Intel, Cisco, etc.) would feel comfortable participating in the mining hardware market, since they know the SHA3 standard is transparent, widely used, and has uses beyond just cryptocurrency.

SHA3 has been determined to be 4x faster in hardware than SHA2, so it is conceivable that an entirely new economy can be created around SHA3, distinct from SHA2's, similar to how the trucking market has different companies than the consumer car market.

saturn-network commented 5 years ago

Re: Quantum resistance of hash functions

  1. By the time it is possible to build a quantum computer that can crack keccak256 (sha3) there will be another generation or two of hash functions (think sha4 and sha5).
  2. Elliptic curve cryptography in Ethereum's private/public keys (and in the vast majority of cryptocurrencies, really, including ETH, BTC, ETC, TRX...) will be cracked much sooner than that. Who cares about mining crypto when you can literally steal other people's money (i.e. steal Satoshi's bitcoin)?

I do not think we should worry about quantum resistance in this ECIP.

saturn-network commented 5 years ago

@p3c-bot frankly, we might even see sha3 ASICs embedded in desktop and mobile processors. In fact, SHA256 already has optimized instructions on ARM and Intel. Chances of Ethash instructions in ARM and Intel are slim to none at this point.

zmitton commented 5 years ago

In the process of creating an ETC FlyClient, I have run into major blockers that can be eliminated if 1049 (this ECIP) is adopted.

Basically, verification right now cannot be done without some serious computation. The main issue is Ethash requiring the generation of a 16 MB pseudorandom cache. This cache changes about once a week, so verifying the full work requires generating it many times. I have explored many creative solutions to this, but I believe we are stuck with light-client verification taking at least 10 minutes on a phone.

By contrast, with this ECIP plus FlyClient (ECIP-1055), I'm confident full PoW verification can be done in less than 5 seconds. This would open the door to new UX design patterns.

p3c-bot commented 4 years ago

This standard uses the following Keccak256 control hash - if a device can produce this hash it will work for ECIP1049:

keccak256("ETC") = 49b019f3320b92b2244c14d064de7e7b09dbc4c649e8650e7aa17e5ce7253294

AndreaLanfranchi commented 4 years ago

In the current Ethash system, the mixHash is a 256-bit string constructed based on the state of the blockchain. This is concatenated with the nonceHeader, 64-bit, and the entirety (320-bits) of it is hashed to verify proof of work.

Not completely accurate:

  1. Miners receive the header hash which is a hash of candidate block state (not the state of the chain)
  2. Header hash is combined with nonce to fill the initial state of keccak function
  3. Initial state goes through a first round of Keccak
  4. Generated (from point 3) mix is FNV'ed against 64 pseudo random accesses to DAG
  5. Output is then copied into state and processed through an additional round of Keccak
  6. Resulting dwords[0-3] are checked against target
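The steps above can be sketched as a toy model. This is deliberately not consensus-accurate: the real DAG is gigabytes, the mix is folded down before the final hash, and `sha3_256`/`sha3_512` stand in for Keccak's original padding; only the overall shape (Keccak round, 64 FNV-mixed DAG accesses, final Keccak round) is kept:

```python
import hashlib

FNV_PRIME = 0x01000193

def fnv(a: int, b: int) -> int:
    # the 32-bit FNV combine used by Ethash
    return ((a * FNV_PRIME) ^ b) & 0xFFFFFFFF

def toy_hashimoto(header_hash: bytes, nonce: int, dag: list[int]) -> bytes:
    # steps 2-3: combine header hash with nonce, first Keccak round
    seed = hashlib.sha3_512(header_hash + nonce.to_bytes(8, "little")).digest()
    mix = [int.from_bytes(seed[i:i + 4], "little") for i in range(0, 64, 4)]
    # step 4: FNV the mix against 64 pseudo-random DAG accesses
    for i in range(64):
        idx = fnv(i ^ mix[0], mix[i % len(mix)]) % len(dag)
        mix = [fnv(m, dag[idx]) for m in mix]
    # steps 5-6: copy mix into state, final Keccak round; the result
    # would then be checked against the target
    compressed = b"".join(m.to_bytes(4, "little") for m in mix)
    return hashlib.sha3_256(seed + compressed).digest()
```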

For this proposal we recommend miners being able to fill the mixHash field with whatever data they desire. This will allow for a total of 320-bits for miners to use for both submitting proof of work, but also to signal mining pools and voting on certain ECIP proposals.

Unless I am missing something, how is the proof of work supposed to be verified? This would imply sending the work provider (the node or pool) the full initial mix (as composed by the miner) plus both the final target and the final state of Keccak: as a consequence, network traffic between work consumers (miners) and work providers (nodes/pools) is more than quadrupled, with non-trivial problems especially on the pool side.

AndreaLanfranchi commented 4 years ago

@p3c-bot

The concern is that right now, Ethash is so poorly documented, only 1 or 2 companies knows how to build a profitable ASIC for it.

The "lack" of documentation for Ethash is pure fallacy. The algorithm is as well documented as the SHA3 it relies on: if there is enough documentation on SHA3, then there is enough documentation on Ethash, where the "only" addition is the DAG (also generated using SHA3) and the DAG accesses. It's all described here: https://github.com/ethereum/wiki/wiki/Ethash

Anyone with basic programming skills can build a running implementation in the language they prefer.

ASIC makers never had problems "understanding" the algo (which also has a widely used open-source implementation here: https://github.com/ethereum-mining/ethminer), and there is no "secret" behind it. The problem for ASICs has always been how to overcome the memory-hardness barrier: but this has nothing to do with the algo itself, rather with how ASICs are built.

P.S. Before anyone argues that SHA3 != Keccak256, please recall that Keccak allows different paddings which do not interfere with the cryptographic strength of the function. SHA3 and Keccak256 (in Ethash) are the same Keccak with different paddings.

zmitton commented 4 years ago

Agree that ethash being undocumented is not the best argument. It is however, significantly more complex (being a layer atop keccak256).

A bigger problem is that it doesn't achieve its intended goal of ASIC resistance or won't for much longer (as predicted here)

Also it is incredibly easy to attack since there is so much "dormant" hash power.

AndreaLanfranchi commented 4 years ago

@zmitton I think the DAG layer is really simple instead, but is my opinion.

I think we may agree that "ASIC resistance" is not equal to "ASIC proof". Given that the latter is utopia (provided there are enough incentives), I think Ethash is still the best "ASIC resistant" algo out there: efficiency gains (nowadays) are still in the range of less than two digits. Its resistance is inversely proportional to the cost of on-die memory for ASICs. That's it. That's why an alternative (which I won't mention) has been proposed for Ethereum to further increase memory hardness.

"Dormant" hashpower is not an issue imho, and I don't think it is enough to mount an attack given that the chain is still predominantly GPU-mined (though not for long).

zmitton commented 4 years ago

(cross posting as i see discussion section has changed):

I have a low-level optimization for this ECIP. It would be preferable to use the following specific format (mentioned to Alex at the summit):

```
// unsealedBlockheader is the block header with a null nonce
digest = keccak256(concat(keccak256(unsealedBlockheader), 32ByteNonce))
// a "winning" digest is of course the thing that must start with lots of leading zeros
// the "sealed" header is then made by inserting the nonce and re-RLP-encoding
```

AndreaLanfranchi commented 4 years ago

Also selfish mining could become an advantage strategy since block headers can vary in size and larger headers would then take longer to mine on.

Not sure what you mean here: block header is a hash with fixed width.

zmitton commented 4 years ago

larger input size, not output

AndreaLanfranchi commented 4 years ago

Can't code, so I am forced to rely on the trusted third party devs and documentation as to the security of SHA3.

To be extremely clear: SHA3 has been in the Ethash algorithm since its birth. SHA3 in Ethash is called Keccak, but the two terms are synonyms. There is a slight difference between SHA3 and Keccak due to padding, but the two functions are otherwise the same and rely on the same cryptographic strength.

Ethash algorithm is : Keccak256 -> 64 rounds of access to DAG area -> Keccak256.

This proposal introduces nothing new unless (though it is not clearly stated) it is meant to remove the DAG accesses and eventually reduce the Keccak rounds from 2 to 1. I have to assume this, as the proponent says Ethash (Dagger-Hashimoto) is memory intensive while SHA3 would not be.

Under those circumstances, with the newly proposed "SHA3 algo" (a loose definition, as SHA3 is simply a hash function accepting arbitrary data as input; to define an algo you need to define how that input is obtained), the result would be;

BelfordZ commented 4 years ago

I will never support a mining algorithm change, regardless of technical merits.

I also refuse to spend more time on the matter than writing this comment. I have read all of the above discussion, reviewed each stated benefit and weakness, and thought long and hard about as many of the ramifications of this as possible. While each benefit on its own can be nitpicked over, having its 'value added' objectively disseminated, there are 1.5 reasons that trump all benefits. It's an unfortunate reality of the world and humanity.

The main point is that ruling out collusion as a driving force behind any contribution is impossible. This is especially true the closer the project gets to being connected with financial rewards. Every contribution has some level of Collusive Reward Potential (CRP). A change that adds a new opcode has a much higher CRP than fixing a documentation typo. Ranking changes by CRP, my top 3 would be:

  1. Mining algorithm changes ('fair launch' being the oxymoron that we would be, for the 2nd time, subjected to)
  2. Consensus changes (blacklisting addresses, dependence on anything even remotely centralized for block validation)
  3. Protocol defined Peering rules (ie drop a peer if they support protests in HK type of rules)

So, going back to the 1.5 reasons that trump all...

1 - To explain by counterposition, let's assume I was a large supporter of a mining algo change. What's to say I've not been paid by ASIC maker xyz to champion this change, giving them the jump on all other hardware providers?

Spoiler: nothing.

0.5 - How can something which is designed to be inefficient be changed in the name of efficiency WITHOUT raising suspicion?

Spoiler: it can't.

To conclude, this might be a great proposal... for a new blockchain... And I urge you to reconsider this PR, as I believe there are more useful ways of spending development efforts.

drd34d commented 4 years ago

I share a similar opinion as @BelfordZ on this subject.

Motivation: "A response to the recent double-spend attacks against Ethereum Classic. Most of this hashpower was rented or came from other chains, specifically Ethereum (ETH). A separate proof of work algorithm would encourage the development of a specialized Ethereum Classic mining community, and blunt the ability of attackers to purchase mercenary hash power on the open market."

"most of this hashpower was rented.." - what's the source of this assessment?

"would encourage the development of a specialized Ethereum Classic mining community" - a new and specialized mining community sounds like we could be talking about a newer and smaller community, and probably less security?

The risk is too high and the threat isn't exactly there. A double-spend attack, as you know, is not exactly a direct attack on the network but on the participants who do not take the necessary precautions (confirmation time). I have to admit, though, that the current recommended confirmations for bigger transactions are nerve-racking.

phyro commented 4 years ago

Here's my current view on this proposal. This won't solve 51% attacks, because they can't really be solved. I do agree that having a simpler, more minimal, and more standard implementation of the hashing algorithm decreases the chances of a bug being found (yes, ethash working for the last 5 years tells us nothing; we've seen critical vulnerabilities discovered in OpenSSL after more than 10 years). On the other hand, we have no guarantee ETC will end up as the network with the most sha3 hash rate. Even if we did in the beginning, it doesn't mean we can sustain first place. If we fail to do that, it's no different from ethash from this perspective.

The second advantage that sha3 has over ethash is faster verification, which enables things like a FlyClient implementation (light clients for mobile that can verify the chain is connected without downloading all the block headers). I was talking to @Wolf-Linzhi about this, and maybe there are ways to make verification easier by modifying the epoch lengths or similar. At the time we did not know, and I still don't; I'm just saying that maybe there are modifications that make verification faster on ethash.

The last thing I want to mention is that making an instant switch actually opens up an attack vector. Nobody knows what hash rate to expect at the block that switches to sha3. This means that exchanges should not set confirmations to 5000 but closer to 50000. This makes ETC unusable, and we should avoid such cases if possible. In case the community agrees to switch to sha3, we should consider a gradual move where we set difficulties so that 10% of blocks are mined with a sha3 proof and 90% with ethash. Over time the percentage shifts in favor of sha3, for example after half a year it is at 20% for sha3, and so on. This makes the hash rate much more predictable compared to an instant switch, and exchanges would not need to increase confirmation times to protect themselves against an unknown hash rate. I don't know whether this is easy to implement; I imagine we could have two difficulties (one per hashing algorithm), but I'm not that familiar with the actual implementation and the possible bad things this could bring.
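A sketch of how such a gradual schedule might be expressed; the 10% starting share, linear ramp, and retarget rule are all illustrative assumptions, not a worked-out consensus design:

```python
def sha3_block_share(blocks_since_fork: int, ramp_length: int) -> float:
    """Target fraction of blocks mined with sha3, ramping 10% -> 100% linearly."""
    start = 0.10
    progress = min(blocks_since_fork, ramp_length) / ramp_length
    return start + (1.0 - start) * progress

def retarget(difficulty: float, observed_share: float, target_share: float) -> float:
    # each algorithm keeps its own difficulty; nudge it so the observed
    # share of blocks drifts toward the scheduled share
    return difficulty * (observed_share / target_share) if target_share else difficulty

assert sha3_block_share(0, 1_000_000) == 0.10          # at the fork block
assert sha3_block_share(2_000_000, 1_000_000) == 1.0   # ramp complete
```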

soc1c commented 4 years ago

sha3 is more efficient to implement in asics. also, sha3 is faster than blake2b.

ASIC implementations of BLAKE can process about 12.5 GB/s, compared to Keccak, which can process about 44 GB/s. An ASIC implementation of BLAKE2 would be probably about 40% faster than a similar implementation of BLAKE, i.e. around 17.5 GB/s.

See also https://crypto.stackexchange.com/a/31677

zmitton commented 4 years ago

Also @iamjustarandomguy keccak works natively in the EVM, which would be good for interoperability/L2, and using 1 algo vs 2 is more secure due to a weaker assumption (that keccak works as intended, rather than that both keccak and blake2b work as intended).

zmitton commented 4 years ago

I agree with a lot of Belford's sentiments above, and with Donald's earlier calling it "the nuclear option", but I still support 1049 because we have a real, dire security concern. It is currently so easy for more 51% attacks to occur, and such attacks (if repeated a couple more times) will likely kill the chain nearly completely, as Alex pointed out at the summit.

Statistical finality is difficult enough to engineer around, but "adversarial finality" is untenable. Exchanges will begin to drop ETC for risk of what we have already seen. Individuals would not be wise to accept more than very small amounts of ETC as payment.

1049 would not completely solve 51% attacks; however, it would allow clever software to predict them. The specific attack we saw is something I'll call a "surprise fork": a new chain fork that is longer than the current chain. There was no evidence of the surprise fork until the moment it was published. If this attack is attempted under 1049, there will likely be evidence beforehand. Why?

As soon as ASICs exist for keccak256, they are incentivized to be turned on ASAP and to continue mining around the clock with little downtime. Because of this dynamic, it is likely that the majority of all keccak ASICs in existence will be mining ETC at all times. This is true currently for Bitcoin: at this very moment, likely greater than 90% of all sha256 ASIC hardware in existence is mining Bitcoin.

Note that there is no easy way to suddenly attain more hashpower without building more chips. Also note that the total hashpower in existence can conveniently be calculated (as difficulty/blocktime) since it is all being directed at, and published to, its respective chain.

So in order to create a surprise fork, hashpower would have to be borrowed from the main chain, where it can be noticed as an unhealthy sudden (~50%) drop in hashpower. A simple light client would have this information readily available (including superlight flyclients), and could warn users that there is likely an attack underway. Exchanges could automatically halt ETC trading/withdraws in this scenario.
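The monitoring idea can be sketched as follows; the window size and 50% drop threshold are arbitrary assumptions for illustration:

```python
def implied_hashrate(difficulty: float, block_time_s: float) -> float:
    # total hashpower directed at the chain, estimated from chain data
    return difficulty / block_time_s

def surprise_fork_warning(hashrates: list[float], window: int = 10,
                          drop: float = 0.5) -> bool:
    """Warn when the recent average hashrate falls below `drop` times the
    preceding window's average, suggesting hashpower was diverted to a fork."""
    if len(hashrates) < 2 * window:
        return False
    recent = sum(hashrates[-window:]) / window
    prior = sum(hashrates[-2 * window:-window]) / window
    return recent < drop * prior

healthy = [implied_hashrate(1e15, 13.0)] * 20
diverted = healthy[:10] + [h * 0.45 for h in healthy[10:]]
assert not surprise_fork_warning(healthy)
assert surprise_fork_warning(diverted)
```

A light client (or an exchange) could run this check continuously and halt withdrawals while the warning is active.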

Likely the attack won't even be attempted once this mitigation strategy is implemented, as it is very costly, and accomplishes nothing.

phyro commented 4 years ago

good point @zmitton . I have two questions around this:

  1. You can't really know what the full hash rate of ASICs is at the beginning, so what do exchanges do?
  2. You're assuming ASIC production will be linear, when in reality we could get the first N ASICs produced and then 5*N in a single batch, which eliminates the guarantee you gave above. What happens in this case? (This becomes less of a threat over time, of course.)

Also, this strategy assumes you are the dominant chain for the sha3 algorithm. I don't believe you get dominance by being the first one because the main incentive miners have is making money, most of them prioritize this over which chain they mine (of course not all of them, but the majority of miners needs to do this to survive). In the long run, I think utility brings the real value and with the value comes the price and hence hash rate. I believe our hash rate dominance depends more on the utility than on the hashing algorithm.

zmitton commented 4 years ago

Even if it rises quickly, that's OK. It is a drop in hashpower that should be perceived as unhealthy. To get around this, the attacker would have to control the majority of new ASICs before they come online (not impossible, but a much larger economic out-of-band commitment than the previous 51% attack required).

In reality, ASIC production does not come online all at once. And as hashrates stabilize this gets harder and harder, because the supply of new, unused ASICs is unlikely to dwarf current live hashpower.

stevanlohja commented 4 years ago

REJECT

Reasoning:

The motivations are not probable arguments that Sha3 is needed.

"A response to the recent double-spend attacks against Ethereum Classic. Most of this hashpower was rented or came from other chains, specifically Ethereum (ETH). A separate proof of work algorithm would encourage the development of a specialized Ethereum Classic mining community, and blunt the ability of attackers to purchase mercenary hash power on the open market."

Proof of Work consensus is based on the 51% consensus rule. Therefore, this reaction offers no solution and the authors and champions of this proposal have a footprint of not believing this motivation.

As a secondary benefit, deployed smart contracts and dapps running on chain are currently able to use keccak256() in their code. This ECIP could open the possibility of smart contracts being able to evaluate chain state, and simplify second layer (L2) development.

It sounds expensive and largely not an in-demand feature. You can evaluate an EVM-based blockchain with traditional analytic libraries and software.

The rationales are not probable arguments that Sha3 is needed.

A CPU-intensive algorithm like Keccak256 would allow both the uniqueness of a fresh PoW algorithm that has not had ASICs developed against it and, at the same time, the organic optimization of a dedicated and financially committed miner base, much the way Bitcoin did with its own SHA256 algorithm.

This is equity and security theatre similar to my ProgPoW arguments https://medium.com/@stevan.blog/progpow-is-not-only-shady-its-baloney-opinion-60b2a6570b1c. The author has stated concern for cheap hardware being rented for malicious intentions. This totally contradicts their concerns by putting us back to cheap hardware. Forks to bully ASICs is not sustainable - we're not Monero.

soc1c commented 4 years ago

we're not Monero

I wish ...

prestwich commented 4 years ago

Hi friends, coming into this as a bit of an outsider. Had a conversation in person with Liz the other day and she pointed me here.

ETC seems to be relying on proof of work to create consensus without fulfilling Nakamoto Consensus's security assumptions. An inherent requirement of PoW consensus is that the heaviest valid chain is expensive to produce. This is not currently true for ETC.

This means that we're missing out on all the benefits of PoW, and getting nothing in return. Our consensus is brittle to outside actors, light client verification is expensive, and interoperability is expensive.

The current situation is untenable

Interestingly, the market will converge to this result as long as ETC and ETH compete for the same hardware. Market equilibrium makes the less-valuable chain insecure whenever two or more chains compete for the same hardware. This implies that it is critical not to compete with Ethereum.

The cost of attack on ETC for existing ETH mining pools is measured in the thousands of dollars. I could probably personally afford a deep reorg. The current situation is untenable.

ASIC-resistance considered harmful

All "ASIC-resistant" coins end either with ASICs (ethereum, litecoin, decred, dash, grin, etc.etc.etc.) or with regular hard forks (monero). As @stevanlohja points out, ETC is not Monero, and doesn't want to give devs ongoing control over the PoW algorithm.

The one lasting effect of ASIC-resistance is that light client verification of these chains is inordinately expensive. As @phyro and @zmitton point out, light client verification of ETH and ETC is bizarrely impractical. Beyond the bandwidth costs, the verification cost of ethash is literally millions of times more expensive than verification of SHA2 or keccak256.

This has much wider impact than off-chain verifiers. The other thing it prevents (and why I popped into this thread) is cross-chain communication. Direct communication relies on light client verification of the remote consensus process. Verification of keccak256 in the EVM costs tens of gas; verification of ethash costs millions. Cross-chain communication is not currently possible because the PoW is too expensive to verify.
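The verification step being discussed reduces to a pure function: hash the block header once and compare against a difficulty target. Python's hashlib ships only NIST SHA3-256, which differs from the EVM's keccak256 only in the padding byte, so it is used here as a stand-in; the structure of the check, not the exact digest, is the point:

```python
import hashlib

def verify_pow(header: bytes, target: int) -> bool:
    """Sketch of the cheap verification a keccak256-style PoW enables:
    one hash plus one integer comparison. hashlib.sha3_256 stands in
    for keccak256 (the two differ only in their padding byte)."""
    digest = hashlib.sha3_256(header).digest()
    return int.from_bytes(digest, "big") <= target

# With the maximum possible target, every header verifies;
# with target 0 the digest would have to be all zeros, so it fails.
assert verify_pow(b"example-header", 2**256 - 1)
assert not verify_pow(b"example-header", 0)
```

Compare this one-hash check with ethash verification, which requires reconstructing cache-dependent DAG items — that asymmetry is what makes the gas-cost difference so large on chain.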

So if the main goal is impossible to achieve, there are negative side-effects, and the pursuit results in either failure or developer capture of the chain, why should we pursue ASIC-resistance?

What does an algorithm change do?

Is keccak256 suitable?

Yes. It is well-known and widely implemented. No major chain uses it. ETC will be the most valuable keccak256 chain. Semi-optimized GPU implementations exist. It is available on most chains we will want to interoperate with. It is extremely unlikely to be broken or backdoored.

Other suitable options might include

Summary

Change algorithms. Keccak256 is one of the better choices available.

pyskell commented 4 years ago

Just want to clarify a few thoughts:

An inherent requirement of PoW consensus is that the heaviest valid chain is expensive to produce. This is not currently true for ETC.

This is true for ETC, nodes follow the heaviest valid chain with its rule set. Though @prestwich's other comments regarding vulnerability to other Ethash pools remains valid regardless.

ETC and ETH compete for the same hardware

ETC, ETH, and all other non-ASIC-mined chains compete for the same hardware (GPUs). Excepting present-day market conditions (most purchasable hashrate sticks to one algorithm) and some performance advantages on specific GPUs, there is materially little that distinguishes the various GPU-mined algorithms.

Note: I'm not commenting on the above proposal or any other comments provided.

prestwich commented 4 years ago

An inherent requirement of PoW consensus is that the heaviest valid chain is expensive to produce. This is not currently true for ETC.

This is true for ETC, nodes follow the heaviest valid chain with its rule set. Though @prestwich's other comments regarding vulnerability to other Ethash pools remains valid regardless.

Oh, I mean that PoW assumes that, and only works if, the heaviest chain is expensive to make. And that assumption is not valid for ETC at the moment. So following the heaviest valid chain does not create a reliable consensus in ETC

bobsummerwill commented 4 years ago

I have just learnt that the ZCash Foundation held a ballot very similar to what we are talking about now for ETC, in June 2018, with the results announced in July 2018:

https://github.com/ZcashFoundation/Elections/blob/master/2018-Q2/General-Measures/embrace_simple_asics.md

Theirs was in June 2018:

https://www.zfnd.org/blog/governance-voting/

Results from July 2018 in favor of that ballot - Votes: Agree 38, Disagree 26

https://www.zfnd.org/blog/governance-results/

"The Foundation should commit to a plan for migrating the Zcash protocol to a new proof of work algorithm with a hard-fork planned between September 30, 2020 and December 31, 2020, with the following tasks: 1) Selecting a thermodynamically efficient (not ASIC-resistant!), currently unused proof-of-work algorithm 2) Hosting and building an open hardware specification for the selected PoW algorithm 3) Assembling a consortium of hardware companies to build hardware against this open specification, while encouraging upstream contributions 4) Building an open source, cross-platform, user-friendly, p2pool-esque piece of mining software for use with this hardware 5) Manage the hard fork upgrade process across users, wallets, exchanges"

bobsummerwill commented 4 years ago

Has anybody seen recent updates on their plans?

prestwich commented 4 years ago

Has anybody seen recent updates on their plans?

Almost everything except the Dev Fund debate is on hold in the Zcash community right now. I asked Josh about this a while back and afaik there's no official plan.

Worth mentioning I'm an author of ZIP-221, which is aimed at making light-client verification of Zcash cheaper via FlyClient-style history commitments

saturn-network commented 4 years ago

Just want to point out that the longer we debate the move to thermodynamically efficient and cryptographically secure SHA3, the lower ETC's chances of getting the first-mover advantage and becoming the dominant hashpower-consumer chain.

https://www.saturn.network/blog/discussing-sha3-for-ethereum-classic/

The best path towards migration is to move once a critical mass of hashrate (potentially with a first generation of SHA3 ASICs operating) is available. This can either be bootstrapped via an incentivized testnet (e.g. monitor hashrate and issue a bounty for Astor.host miners upon transition), or follow @phyro's gradual Ethash sunset schedule, with both algos running in parallel and block proportions (and thus rewards) slowly shifting from Ethash to SHA3 miners.
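A hedged sketch of what such a gradual sunset schedule could look like. The start and end block heights are made up for illustration; the only substance is the linear ramp from 0% to 100% of blocks on the new algorithm:

```python
def keccak_block_fraction(height, start=11_000_000, end=12_000_000):
    """Fraction of blocks assigned to the new algorithm at a given height.

    Before `start` every block is Ethash; after `end` every block is
    Keccak-256; in between the share ramps linearly. Both heights are
    hypothetical placeholders, not proposed fork blocks.
    """
    if height <= start:
        return 0.0
    if height >= end:
        return 1.0
    return (height - start) / (end - start)

# Halfway through the transition window, half the blocks (and thus
# half the rewards) go to Keccak-256 miners.
print(keccak_block_fraction(11_500_000))
```

The hard part, as noted later in the thread, is not the schedule itself but running two independent difficulty adjustments side by side without introducing exploitable bugs.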

Let's just make sure we avoid Verge-like fuckups.

Happy chatting!

phyro commented 4 years ago

Not sure how a bounty would work without touching the monetary policy. Perhaps legit transactions/donations? I don't think this would work either, tbh, as we wouldn't be able to gather enough for it to be relevant.

saturn-network commented 4 years ago

Well, chances are that if ETC is to live forever we'll have to change the PoW hashing function every 50 years or so. Might as well figure out the migration process now, so that when ETC stores billions of dollars and SHA3 is considered insecure, there is an established path towards a SHA4 migration that preserves monetary policy. Bitcoin will have to adapt too in time.

Your gradual dual-hashing proposal is clever. The trick here is to balance different hashrates and difficulty adjustments without introducing unfortunate bugs. Need to learn from multi-algo coins' (e.g. XVG) mistakes.

But before we proceed we must come to terms with the fact that changing hashing algos is a necessary procedure, and migrating to SHA3 soon might be the best choice we have for the next 50 years.

If we miss this window and another coin adopts SHA3 faster we might have to wait until SHA4, and watch other chains enjoy security and flyclients.

phyro commented 4 years ago

Yes, the dual hashing would need to be well thought out, I have not invested time to actually think of the consequences and the actual implementation.

I'm still not convinced that being the first one matters. If we are the first, sure, we are leading at the beginning, but as soon as another chain with SHA3 appears, the miners will mine whatever is most profitable. And I guarantee you that there won't be just a single SHA3 chain out there. The fewer chains that share the same algorithm, the less likely it is that a more profitable one exists. If we judge by that, perhaps we should choose something that has been proven secure for some time but is also less known.

YazzyYaz commented 4 years ago

If we judge by that, perhaps we should choose something that is proven secure for some time but also less known.

SHA3 imo fits the bill. It's well known and secure.

prestwich commented 4 years ago

SHA3 imo fits the bill. It's well known and secure.

prefer keccak256 over sha3, because the EVM has keccak256 but not sha3
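The distinction is easy to demonstrate: NIST SHA3-256 and keccak256 differ only in the padding byte (0x06 vs 0x01), so they produce different digests for the same input. Python's hashlib ships only the NIST variant, which makes the mismatch visible against the reference hash of "ETC" given at the top of this ECIP:

```python
import hashlib

# keccak256("ETC") as given in the ECIP-1049 reference above.
KECCAK256_ETC = "49b019f3320b92b2244c14d064de7e7b09dbc4c649e8650e7aa17e5ce7253294"

# NIST SHA3-256 uses padding byte 0x06 where keccak256 uses 0x01,
# so hashing the same string yields a different digest.
nist_digest = hashlib.sha3_256(b"ETC").hexdigest()
assert nist_digest != KECCAK256_ETC
print("SHA3-256('ETC') =", nist_digest)
```

This is why the naming matters: the EVM opcode historically called SHA3 actually computes keccak256, and a verifier using an off-the-shelf NIST SHA-3 library would reject every valid block.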

YazzyYaz commented 4 years ago

Thanks for correcting me @prestwich I meant to say Keccak256. I went ahead and modified the title of the discussions here to Keccak256.

I also created a little poll to gather feedback from miners on Twitter over their thoughts on this https://twitter.com/Yazanator/status/1194704021065535491

stevanlohja commented 4 years ago

@saturn-network What's the data behind this hasty "window" you are selling? It just sounds like marketing noise to me.

Changing hastily just to be first doesn't seem like a sound reason to interrupt our current users either.

Wolf-Linzhi commented 4 years ago

The costs for making (from scratch) a competitive logic-only PoW miner (sha3/keccak256) in 2019 are far higher than for making a competitive Ethash/ProgPoW/RandomX miner. We put together a nice piece around this https://medium.com/@Linzhi/history-of-bitcoin-mining-hardware-60be773e5f5d

YazzyYaz commented 4 years ago

I am in favor of this proposal to move to Keccak256

Wolf-Linzhi commented 4 years ago

I am against ECIP-1049 (sha3/keccak256) for the following reasons:

serialp commented 4 years ago

Is keccak256 suitable?

Yes. It is well-known and widely implemented. No major chain uses it. ETC will be the most valuable keccak256 chain. Semi-optimized GPU implementations exist. It is available on most chains we will want to interoperate with. It is extremely unlikely to be broken or backdoored.

Summary

Change algorithms. Keccak256 is one of the better choices available.

Totally agree with that proposal.

Another option would be implementing multiple algorithms, each limited to a certain percentage (e.g. 20%), so that a 51% attack would be nearly impossible to realize, like DigiByte implemented on their network.

https://digibyte.io/about-digibyte-blockchain

They even recently put in place a new algo, Odocrypt (based on SHA3), that changes itself every 10 days. How about that...

The wiser way would be to go not only with Keccak256, but with multiple mining algorithms, for a more decentralized and secure network.

serialp commented 4 years ago

Also, if we consider that we are in an emergency situation, there's a good workaround that could also be considered a permanent solution, which is the PirlGuard implementation: https://medium.com/pirl/pirlguard-innovative-solution-against-51-attacks-87dd45aa1109

I witnessed the callisto.network emergency HF during its 51% attack, which implemented the PirlGuard solution with the assistance of the Pirl team, and it was a success: the attack stopped right away after the HF and has not resumed since.

https://amp.reddit.com/r/CallistoCrypto/comments/av8xeg/emergency_hard_fork_callisto_will_implement/

phyro commented 4 years ago

PirlGuard is subjective because it introduces reasoning based on local state. This breaks the objectivity of node judgment, because nodes can now have different opinions about a specific chain. If we decide to break the objectivity of the consensus algorithm, we might as well put in a max reorg cap, because it is simpler and achieves the same thing (it's actually better, because you get finality for a chain that is blind to chains differing by more than K blocks, where K is the max reorg cap; both can lead to forks, though, and that's the issue).
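The max-reorg-cap rule mentioned above can be sketched in a few lines, assuming each node compares a candidate chain against its current tip. Chains are modeled as simple lists of block hashes, and the depth cap K is an arbitrary illustrative value:

```python
def accept_reorg(current_chain, candidate_chain, max_reorg_depth=100):
    """Reject any candidate chain that forks off more than
    `max_reorg_depth` blocks behind our current tip, even if it is
    heavier. This is the subjective rule: two nodes with different
    current tips can disagree about the same candidate chain.
    """
    # Find the fork point: length of the common prefix.
    common = 0
    for a, b in zip(current_chain, candidate_chain):
        if a != b:
            break
        common += 1
    reorg_depth = len(current_chain) - common
    return reorg_depth <= max_reorg_depth

ours = ["genesis"] + [f"a{i}" for i in range(200)]
# Shallow fork: shares all but our last 5 blocks -> accepted.
shallow = ours[:-5] + [f"b{i}" for i in range(10)]
# Deep fork: shares only our first 10 blocks -> rejected.
deep = ours[:10] + [f"c{i}" for i in range(300)]
assert accept_reorg(ours, shallow)
assert not accept_reorg(ours, deep)
```

The sketch makes the trade-off visible: a node that was offline during the fork and syncs the attacker's chain first will accept it, which is exactly the chain-split risk described above.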

Having multiple mining algorithms is an interesting approach but we have never considered the implications of going this way.

serialp commented 4 years ago

Having multiple mining algorithms is an interesting approach but we have never considered the implications of going this way.

You can mine DigiByte on one of five separate mining algorithms. Each algo averages out to mine 20% of new blocks. This allows for much greater decentralization than other blockchains. An attacker with 99% of any individual algorithm would still be unable to hardfork the blockchain, making DigiByte much more secure against PoW attacks than other blockchains.

More info: https://github.com/digibyte/digibyte/blob/master/README.md
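A minimal sketch of that multi-algo idea, assuming (purely for illustration) that blocks are assigned round-robin across five algorithms so each converges to a 20% share; real multi-algo chains like DigiByte instead run independent per-algorithm difficulty adjustments that achieve the same average:

```python
from collections import Counter

# Illustrative algorithm names, not a proposal for ETC.
ALGOS = ["sha256d", "scrypt", "skein", "qubit", "odo"]

def expected_algo(height):
    """Round-robin assignment: each algorithm mines every 5th block,
    i.e. a fixed 20% share. A real implementation would use per-algo
    difficulty targets rather than a strict rotation, so that a stalled
    algorithm cannot halt the whole chain."""
    return ALGOS[height % len(ALGOS)]

# Over any window of blocks, each algorithm gets exactly 1/5 of them.
shares = Counter(expected_algo(h) for h in range(1000))
assert all(count == 200 for count in shares.values())
```

The security claim quoted above falls out of this structure: majority hashrate on one algorithm only controls that algorithm's fifth of the blocks.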

lookfirst commented 4 years ago

@serialp Yea, just don't make the same mistake as Verge did a while back.

TheEnthusiasticAs commented 4 years ago

OK, maybe then ETC should have a 50% Ethash & 50% Keccak algo split?

serialp commented 4 years ago

@serialp Yea, just don't make the same mistake as Verge did awhile back.

Hello @lookfirst, I was not talking about Verge but about Callisto. Plus, we learn from mistakes, right?

Anyway, apparently I am not helping; I thought this was a free, open, and instructive discussion. But it looks like my suggestions are not welcomed by some people, and I have been blocked from a Twitter account to stop me from posting.

So I will step out and let the "right people" discuss it.

All the best for ETC.