
ECIP-? Protection from transaction replays #7

Open arvicco opened 7 years ago

arvicco commented 7 years ago

- Good summary of the issue: https://github.com/ethereumclassic/README/issues/18
- Current 'solutions' to the problem: https://github.com/ethereumclassic/README/issues/3
- Additional discussion: https://github.com/ethereumclassic/README/issues/8

arvicco commented 7 years ago

@igetgames was going to take a stab at this ECIP.

elaineo commented 7 years ago

I created ECIP1012 as one possible solution (increment the starting nonce) to protect against replay attacks. I know that @avtarsehra and @igetgames were discussing alternative solutions as well.

marcusrbrown commented 7 years ago

I'm currently writing an ECIP draft based on the discussion in this issue: ethereum/EIPs#134. While adjusting the start nonce would work around the issue, you would still run into the problem on other chains, such as private ones. I also explored modifying the values used in the ECDSA signing algorithm used for transactions, but this would still allow motivated attackers to replay those transactions on another chain.

Incorporating the blockhash of a recent or "confirmed" (listed in a contract) block into the transaction header would offer the best defense across any chain or fork. I'll have a PR ready for discussion soon.
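For illustration, here is a minimal sketch of that idea, with hypothetical field names and SHA-256 standing in for Keccak-256 so the example has no external dependencies; it is not the proposed wire format:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Hash is a 32-byte digest (Keccak-256 in the real protocol).
type Hash [32]byte

// Tx is an illustrative transaction that carries the hash of a recent
// (or contract-confirmed) block on the chain it is meant for.
type Tx struct {
	Nonce    uint64
	To       [20]byte
	Value    uint64
	RefBlock Hash // block hash that ties the tx to one chain
}

// signingDigest commits to the transaction fields *and* the reference block
// hash, so a signature produced for one chain cannot verify on a fork that
// never contained that block.
func signingDigest(tx Tx) Hash {
	h := sha256.New()
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], tx.Nonce)
	h.Write(buf[:])
	h.Write(tx.To[:])
	binary.BigEndian.PutUint64(buf[:], tx.Value)
	h.Write(buf[:])
	h.Write(tx.RefBlock[:]) // the chain-specific part
	var out Hash
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	tx := Tx{Nonce: 1, Value: 100}
	tx.RefBlock = Hash(sha256.Sum256([]byte("a recent block on chain A")))
	fmt.Printf("digest bound to chain A: %x\n", signingDigest(tx))
}
```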

elaineo commented 7 years ago

@igetgames ok cool. I hadn't thought of that as a potential solution.

splix commented 7 years ago

I'm also voting for the idea suggested by @aakilfernandes; it is the simplest and most straightforward approach.

realcodywburns commented 7 years ago

I like it. It seems like an elegant solution. A motivated attacker will find a flaw in any system. Our core proposal need only prevent careless attack vectors from being opened, not solve all problems in the world.

realcodywburns commented 7 years ago

And I half thought the next ECIP would be 1100 instead of 1012, ha ha.

marcusrbrown commented 7 years ago

ECIP has been submitted as #9. Please close this issue and discuss there.

arvicco commented 7 years ago

Copied from a relevant 2016-10-02 Slack #general discussion:

avtarsehra [4:03 AM] Alternative approach for the replay attack fix that does not require a fundamental change to transaction structures or changes to any raw symbolic values (e.g. the nonce): https://github.com/avtarsehra/ECIPs/blob/master/ECIPs/ECIP1011.md

bitnovosti [7:53 AM] Do I understand correctly that your approach differs from @igetgames's in that it doesn't add an extra field to the transaction? But his approach is more flexible, as it could be applied not only to this hard fork but to all the HFs to come, and it also allows both replayable and non-replayable tx, thus maintaining legacy tx format compatibility?

avtarsehra [7:56 AM] This approach wouldn’t require modification of the transaction structure. From the discussions in the other channel I think @igetgames’s approach was the one we discussed initially, of selecting an appropriate value and adding it as an extra field to the transaction array. This would increase the size of transactions. The approach suggested here could also be used for all future hard forks, as you can change the arbitrary constant from Netc to a future value.

bitnovosti [7:57 AM] But this increases the tx size by a hash value, correct? (@igetgames's proposal, I mean)

avtarsehra [7:58 AM] The one I suggested here wouldn't increase the size at all. You just calculate the hash of the transaction using an add-on that is not really stored. It is just accepted protocol, similar to the BTC protocol.
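A hedged sketch of that construction, assuming a hypothetical hard-coded chain constant and SHA-256 standing in for Keccak-256; see the linked ECIP-1011 for the actual definition:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// forkConstant is an agreed protocol-level value for the ETC chain after the
// fork, e.g. derived from the 1,920,000 fork block (placeholder value here).
// It is never stored in the transaction; every client just appends it.
var forkConstant = sha256.Sum256([]byte("ETC fork constant (illustrative)"))

// signingHash appends the chain constant to the RLP-encoded unsigned tx
// before hashing, so the signature only verifies under clients that apply
// the same constant - without adding any field to the transaction itself.
func signingHash(rlpTx []byte) [32]byte {
	payload := append(append([]byte{}, rlpTx...), forkConstant[:]...)
	return sha256.Sum256(payload)
}

func main() {
	rlpTx := []byte{0xf8, 0x6b} // stand-in for an RLP-encoded unsigned tx
	fmt.Printf("chain-specific signing hash: %x\n", signingHash(rlpTx))
}
```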

bitnovosti [7:59 AM] Can we marry the two approaches?

avtarsehra [7:59 AM] Yes, of course.

bitnovosti [8:00 AM] In my view that would be preferable: don't increase the tx size, but retain both legacy compatibility and flexibility for future HFs. Best of both worlds. It has the added benefit that replay protection will be available ONLY if people use ETC-aware wallets... which will lead to ETH devs being forced to pay attention to our standards. So, I'm really excited about such a combined approach.

avtarsehra [8:10 AM] I will talk to @igetgames, as the ECIP spec doesn't contain details on how the extra field would be added, and whether this would require a modification of the blockchain data structure too or only the state DB. The approach I suggested would require no structural changes to the transactions or to the storage of information. But again, happy to combine with other ideas and come up with a stronger method.

splix [8:14 AM] There is one thing: in its current form it suggests using a concrete block, 1920k, for signing, and it seems that every new fork will require a code change. That’s like a one-time fix. I mean, if we already know the fork block and have enough time (like 24 hours), it isn’t necessary to change the code, as we can use a split contract, etc. What we really need is that at any point when a fork happens (like someone deliberately deciding to fork), both forks can coexist without any conflict. The replay attack is not something that happened only at block 1920k; it is a real attack that can happen during microforks (of which we can have dozens every day, especially with PoS). And replay attack protection is supposed to help in these situations.

dontpanicburns [8:15 AM] Legacy compatibility is important. If someone were to make a private chain based on geth, it should not illogically fork at predictable block numbers. Imagine explaining to a bank that using our code will open an attack vector at a known point.

avtarsehra [8:16 AM] @splix, in the approach I suggested the only change would be one parameter update. Or you could use a dynamic parameter, but this would definitely require an extra field in the transaction structure.

dontpanicburns [8:20 AM] The coup de grâce fork in 2018 will need to remove all wonky forks, apply all fixes, and kick ass.

splix [8:20 AM] The idea by @aakilfernandes about adding the hash of the last valid block (not sure @igetgames caught this point in his ECIP) will protect in these situations, as a tx will be valid only on a chain that has this block among its X latest blocks. Clients are in control of this field, and they will be able to decide which chain they want to follow at any moment. It can be a private chain fork or a community split; it doesn’t require any change to the code after that moment.

avtarsehra [8:21 AM] @splix that will be a bit odd, and could only work if you have an extra field that contains a reference to that block.

splix [8:22 AM] yeah, i’m talking about this extra field.

avtarsehra [8:22 AM] ah ok. in that case that would be ok. But I am looking at a way to do that without adding that extra field.

bitnovosti [8:23 AM] yes, doing it without an extra field would be preferable

splix [8:23 AM] I’m wondering, maybe instead of the hardcoded block 1920k, your ECIP could use some other predictable block number? I mean automatically.

avtarsehra [8:23 AM] @splix I was looking into exactly that

splix [8:23 AM] like last in 100k

bitnovosti [8:24 AM] @splix, clients searching the blocks themselves, to see if block hash matches?

avtarsehra [8:24 AM] Yes, I was planning to test an idea where the hard-fork replay attack would be resolved within 5 days. This would be long enough to stabilise the chain, but not so long that stale transactions would become useless.

bitnovosti [8:24 AM] if they get something unexpected, instead of null or block 1920k?

splix [8:25 AM] Yeah, like try with different blocks (say 3 of the last 100000x blocks), and accept only if one of them is valid for the tx signature.

avtarsehra [8:26 AM] Yes that is a nice idea. It allows a dynamic resolution of forks but doesn't require bloating of transaction structures

splix [8:29 AM] One of the benefits of the extra field is that it adds extra protection for dapps, if a business wants to secure a set of ordered transactions. Say you’re exchanging ETC for some biz token: your app (1) gets the money in block X and (2) updates a contract in one of the next transactions, but specifies that this tx is valid only after block X. You know that (2) will happen only if (1) happened before, so it protects the business against forks or a 51% attack. I was actually thinking about the same idea for this case, how to protect a dapp when we can expect many microforks in a PoS world. But it also works for our current HF replay attack situation.

igetgames [8:33 AM] @splix The last stable block is good, but it can be an extension to my proposed ECIP. I’m assuming you’re talking about the contract that tracks “syncpoints”? Uh, why are we not trying to add an extra field? This is a proposal for the HF. I’m going to submit the same ECIP as an EIP to Ethereum.

avtarsehra [8:34 AM] @igetgames I was looking at ways to do this without the extra field, to keep the size of transactions down. I am not 100% against it, but I would like to avoid it.

splix [8:35 AM] I’m talking about:

In each transaction (prior to signing), include the following:

  1. The blockhash of a recent block
  2. A single-byte blocklimit

In order for a transaction to be valid, it must be included in the blockchain within blocklimit blocks of a block with hash blockhash. Transactions with a blocklimit and blockhash of 0 are always valid, regardless of the chain history.
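To make the quoted rule concrete, a rough validity check might look like the sketch below; the types and the chain lookup are illustrative, not any client's actual interfaces:

```go
package replay

// Hash is a 32-byte block hash.
type Hash [32]byte

// Tx carries the two proposed fields; zero values mean "no protection".
type Tx struct {
	BlockHash  Hash  // hash of a recent block on the intended chain
	BlockLimit uint8 // how many blocks past BlockHash the tx stays valid
}

// Chain lets the validator map a block hash to its height on the local chain.
type Chain interface {
	HeightOf(h Hash) (height uint64, known bool)
}

// txValid applies the quoted rule: the tx must land within BlockLimit blocks
// of the referenced block, and all-zero fields are always valid.
func txValid(c Chain, tx Tx, inclusionHeight uint64) bool {
	if tx.BlockHash == (Hash{}) && tx.BlockLimit == 0 {
		return true // legacy/offline transactions keep working
	}
	refHeight, known := c.HeightOf(tx.BlockHash)
	if !known {
		return false // referenced block is not on this chain: likely a replay from another fork
	}
	return inclusionHeight >= refHeight &&
		inclusionHeight-refHeight <= uint64(tx.BlockLimit)
}
```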

igetgames [8:36 AM] Yeah, that was the original proposal. Blocklimit is a waste of space if the pool of recent blocks is a fixed number like 256, so it's not needed. The recent block is good, because it adds the ability to expire transactions. But IMO that's a hack; it should be a separate field.

splix [8:37 AM] yeah, singlebyte is useless

igetgames [8:37 AM] And you would still need to accept a null hash for offline/hardware wallets. So this condenses them down into a single field with 3 levels of protection/flexibility:

1) You can give up protection and be valid across all compatible chains (null hash)
2) You can have some protection, bound to a chain and its compatible forks (genesis hash)
3) You can have absolute protection with no chance of replay, using the last fork block hash

The idea is also to keep client state simple. Clients already have to know the block number of the most recent fork (and potentially an upcoming fork), because they have to change logic depending on the block height. We will be writing code to activate at 3000000, and all clients have to know what that block means. Any clients that don’t upgrade won’t have the correct hash for that block, so their transactions won’t be valid and can’t be replayed (unless they chose a less secure option for more flexibility).
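A minimal sketch of those three acceptance levels, assuming the client knows its genesis hash and the hash of the most recent fork block; the names are illustrative, not the ECIP's normative wording:

```go
package replay

// Hash is a 32-byte block hash.
type Hash [32]byte

// ChainInfo is the small amount of state a client already has to know.
type ChainInfo struct {
	GenesisHash  Hash // identifies the chain and all of its compatible forks
	LastForkHash Hash // hash of the block at the most recent fork height (e.g. 3,000,000)
}

// acceptRefHash checks a transaction's reference hash against the three levels.
func acceptRefHash(info ChainInfo, ref Hash) bool {
	if ref == (Hash{}) {
		return true // level 1: no protection, valid across all compatible chains
	}
	if ref == info.GenesisHash {
		return true // level 2: bound to this chain and its compatible forks
	}
	return ref == info.LastForkHash // level 3: bound to this exact post-fork chain
}
```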

@splix I do like the recent block pool, including the contract, but I wanted to keep it simple to start with, since we are against the clock.

@avtarsehra The parameter one is good; I think we should keep it as an addition. The problem with it standalone is that a malicious actor could still replay their own transactions, since they control the private key. They could re-sign them for either chain trivially.

avtarsehra [8:45 AM] But replay of your own transactions wouldn't really be an attack, as it would only be transferring value that you already own? You could do that either way, but you would still need to re-sign… so not much different from creating a new transaction and re-signing. The same transaction in ETH could not just be dropped into ETC, as you would need to re-sign using the ETC protocol.

igetgames [8:46 AM] The other thing is that changing the signing parameters doesn’t protect you on the same chain or fork. Just vs. ETH

avtarsehra [8:46 AM] This would generate the appropriate v, r, s parameters (edited)

dontpanicburns [8:47 AM] IIRC our version of geth/eth has no unique identifier for block 1920k. It wouldn't make sense to tie it to that block. The floating checkpoint is a cleaner solution (edited)

igetgames [8:49 AM] @dontpanicburns Huh? It’s all over the place, it has to be

dontpanicburns [8:50 AM] I don't have it in front of me, but I thought geth forked from the pre-choice version of the code.

avtarsehra [8:50 AM] @dontpanicburns Yes, like in the Yellow Paper they reference the Homestead block. This is why we could reference the HF block, which would provide the constant value for the unique message hash for the ETC chain. But splix’s point was good that we could try to make that dynamic.

igetgames [8:50 AM] Yes, you can, but you have to keep in mind light clients and client state. And that proposal is good, it’s just more complicated. A client that doesn’t know what chain or fork it’s on is a malfunctioning client, imo.

dontpanicburns [8:52 AM] And if it is in the code, if I make a private chain from genesis will it split at the fork blocks?

avtarsehra [8:52 AM] @igetgames Yes, I agree with that point. This is why I thought the best approach was the static value, as it is easy to implement, secure, and future-proof with a minor parameter change for future forks. But that parameter can also be calculated from the sender's address.

igetgames [8:53 AM] @dontpanicburns It shouldn’t, no

avtarsehra [8:53 AM] So that extra parameter is just the sender's address, which is appended and hashed to produce the message hash that will be signed.

igetgames [8:53 AM] That would be a broken client

dontpanicburns [8:55 AM] It would have to follow the if block check logic even on a PC, wouldn't it? For difficulty calculation

igetgames [8:56 AM] I don’t follow. There are three possible values to test in my ECIP: null hash, genesis hash, and last fork hash. In the pool proposal, if you say you want the most recent blocks, then you’d set a size like 256 hashes.

splix [9:23 AM] About the replay attack, I want to clarify my position. I don’t think it’s really important to fix big forks like 1920k, because for that situation you have time to prepare. What I’m worried about is temporary forks, which can happen within a day, when a malicious miner wants to do a 51% attack. Such an attack lasts for an hour or two (it’s less than $3000 per hour currently, not a big deal). We need protection for these attacks. We won't have time to update code and force every node to update during those hours. We need something dynamic that works for forks lasting just a few hours. And I believe that any working protection for these small network splits will work for larger forks too.

dontpanicburns [9:25 AM] With the additional field, it could be added at any time, correct? It doesn't 'need' block 3mil to function. The difficulty bomb requires coordination around a block. So at block 3mil, every chain made from our core will split. If I ran a privChain from genesis to 3mil, my chain will split. If I'm a business with potentially enough money to motivate someone, they could wait for 3mil (or 1920k) and launch an attack. The replay prevention does a good job of stopping the attack. But removing the attack vector completely would be a good long-term goal, i.e. removing daofork and die-hard and leaving only a clean difficulty adjustment.

splix [9:30 AM] In our code (not sure about EF Geth) these block numbers are configurable. We can even move this configuration to an external file, so you can run your privchain without the bomb delay at all. The formula is written for block 3m, but I guess you can change this config to some other block, say 2m, and it will work for your private chain without any problem.
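As a rough illustration of that externalised configuration, a small JSON file with the fork heights could be loaded like the sketch below; the field names are hypothetical, not any client's actual schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ChainConfig holds the fork activation heights for one chain. A private
// chain could ship its own file with different numbers, or omit a fork.
type ChainConfig struct {
	HomesteadBlock uint64 `json:"homesteadBlock"`
	DAOForkBlock   uint64 `json:"daoForkBlock"`
	DieHardBlock   uint64 `json:"dieHardBlock"` // bomb-delay fork height
}

func main() {
	raw := []byte(`{"homesteadBlock": 1150000, "daoForkBlock": 1920000, "dieHardBlock": 3000000}`)
	var cfg ChainConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Println("die-hard fork activates at block", cfg.DieHardBlock)
}
```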

avtarsehra [9:41 AM] @splix I have made a change to my ECIP and added an alternative at the bottom, where the arbitrary value for signing is updated after a certain epoch (a number of blocks after the hard fork). It is incremented after each epoch. See the update here: https://github.com/avtarsehra/ECIPs/blob/master/ECIPs/ECIP1011.md

This would mean all future hard forks will be resolved after one epoch. But we would need to define what an appropriate epoch is: it can’t be too short, and one that is too long would make resolution after a fork take too long. This also means that light clients would need access to a blockchain indexer to obtain the transaction hash of the reference block at each epoch. Not sure how much of an issue this light-client point is, as it can easily be obtained through APIs. (edited)
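One possible reading of that epoch scheme, as a sketch: the signing constant is the hash of the latest epoch-boundary block, which advances every epochLength blocks after the fork. The numbers are illustrative, loosely based on the five-day window mentioned earlier:

```go
package main

import "fmt"

const (
	forkBlock   uint64 = 3000000
	epochLength uint64 = 5 * 5760 // roughly five days at ~15 s per block (illustrative)
)

// epochBoundary returns the block whose hash would serve as the signing
// constant while `head` is the current chain head.
func epochBoundary(head uint64) uint64 {
	if head < forkBlock+epochLength {
		return forkBlock // first epoch still references the fork block itself
	}
	elapsed := head - forkBlock
	return forkBlock + (elapsed/epochLength)*epochLength
}

func main() {
	for _, head := range []uint64{3000100, 3030000, 3100000} {
		fmt.Println("head", head, "-> reference block", epochBoundary(head))
	}
}
```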

dontpanicburns [9:47 AM] Right, for geth you can just generate a custom genesis block. But the current chain config doesn't look like it has an option for the genesis block number. I'm working on templating the clients for private chains. Even with a custom genesis block, it still has to flow over all the difficulty logic. Creating a 'custom' private chain version of geth is an option: removing every difficulty rule except the basic one, without the bomb.

splix [9:52 AM] There is config for a genesis in the geth code; it’s kind of a configuration, but a hardcoded one. But it’s possible to make it external if we need that; we can make a JSON file with the genesis and all the fork block options.

dontpanicburns [9:54 AM] https://github.com/Azure/azure-quickstart-templates/tree/master/go-ethereum-on-ubuntu

This is the eth version, with JSON for the genesis. I'm modifying it to pull from our other Azure templates, but I would need to update genesis.go and the JSON.

splix [10:03 AM] @avtarsehra do you think it’s possible to add support for microforks in your ECIP, ideally ones shorter than 500 blocks? (edited)

avtarsehra [10:12 AM] @splix Technically you could have very short times, but the danger would be that if you broadcast a signed transaction and it takes a while to be mined, you wouldn’t want the epoch to have moved on - which would make the transaction “stale”.

igetgames [10:12 AM] @splix My ECIP handles temp forks. You don’t want 51% blocks valid after the clients are fixed against it

splix [10:15 AM] @avtarsehra it can be not just one epoch block but any of the last X known epoch blocks

avtarsehra [10:16 AM] @splix that is a good idea

dontpanicburns [10:18 AM] So a trail of blocks would validate, not just one. Sounds reasonable. A chain of blocks.... But spaced out by x blocks and limited in length

splix [10:21 AM] @igetgames Yes, I know. We're just trying to find how it can be used w/o an extra field. I personally like the idea of a particular parent block for a tx; it works perfectly for any microforks. But I also agree that we should avoid adding extra data to the tx if we can.

avtarsehra [10:21 AM] length would be equal to the defined epoch

splix [10:22 AM] Yeah, x blocks; say for an epoch of every 256th block (about an hour) it can be x=24. Just one thing: it requires extra CPU resources for validation, as for every tx you have to try a dozen hashes until you find a valid one.
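A sketch of that validation cost: the verifier does not know which epoch block the sender used, so it retries the signature check against each of the last x epoch-boundary hashes. Both helpers passed in here are hypothetical stand-ins:

```go
package replay

// Hash is a 32-byte block hash.
type Hash [32]byte

// validSignature tries the last x epoch-boundary hashes (newest first) until
// one makes the signature verify, which is where the extra CPU cost comes from.
//   - epochHashes(x) returns the hashes of the last x epoch-boundary blocks
//   - verifyWithConstant recomputes the signing hash with the given constant
//     and checks the transaction's signature against it
func validSignature(
	tx, sig []byte,
	x int,
	epochHashes func(x int) []Hash,
	verifyWithConstant func(tx, sig []byte, c Hash) bool,
) bool {
	for _, h := range epochHashes(x) { // e.g. x = 24 as suggested above
		if verifyWithConstant(tx, sig, h) {
			return true
		}
	}
	return false
}
```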

dontpanicburns [10:23 AM] Super blocks are cool. It would be incredibly hard to bypass

avtarsehra [10:23 AM] What’s the longest time a transaction can take to be mined?

splix [10:25 AM] There is another benefit of @igetgames's ECIP - it works for offline transactions, signed without access to current blockchain info. It works with a workaround btw (the 0x000 hash), so maybe we can do the same here too. Hm, or we can support the old-style signature, with only the nonce. That means compatibility with all current transactions, and it can be introduced at any moment without breaking anything. I mean, use both types for validation.

igetgames [11:43 AM] I don’t understand the “without an extra field” thing. You’re optimizing the wrong thing. But I understand the epoch method. It’s the same as adding a reference block hash field, but if it slides by epoch, then there is a ton more state that the client has to keep track of, or computational complexity the receiver pays to validate a block. In that case, I would recommend the proposal on the EF thread of having a contract that stores reference block hashes, and clients only need to lookup/cache the values in the contract, with no additional computational complexity. All in all, I’m for the simplest solution that has minimal client changes + the best protection. There are several other ECIPs we should try and get in for 3000000.
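A rough sketch of the client side of that contract-based alternative: look each reference hash up once (e.g. via a call to a hypothetical registry contract) and cache it, so per-transaction validation stays a simple lookup:

```go
package replay

import "sync"

// Hash is a 32-byte block hash.
type Hash [32]byte

// RefHashCache caches reference block hashes served by a registry contract,
// so the client pays the lookup cost once per reference block.
type RefHashCache struct {
	mu    sync.Mutex
	byNum map[uint64]Hash
	fetch func(blockNum uint64) (Hash, error) // e.g. an eth_call into the registry contract
}

// Get returns the cached reference hash for blockNum, fetching it on a miss.
func (c *RefHashCache) Get(blockNum uint64) (Hash, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if h, ok := c.byNum[blockNum]; ok {
		return h, nil
	}
	h, err := c.fetch(blockNum)
	if err != nil {
		return Hash{}, err
	}
	if c.byNum == nil {
		c.byNum = make(map[uint64]Hash)
	}
	c.byNum[blockNum] = h
	return h, nil
}
```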

splix [12:09 PM] I like both solutions; yours is more flexible imho, Avtar’s can be applied w/o big changes to the protocol. I also understand that you have to pay for protection, either by CPU (= sign) or space (= field), and I don’t know which is better. Especially considering the fact that you need this protection for less than 0.1% of transactions.

avtarsehra [12:11 PM] Personally I don’t have a preference, as I am studying up on the final one, i.e. the reparameterisation of the v, r, s signing values. But I think even that will be similar to my current approach, i.e. it won’t future-proof against further hard forks. I think without a direct reference to the chain you can’t make a future-proof change.

splix [12:13 PM] What I’m sure of is that we made good progress today, after all the discussion about different approaches. We have a better understanding and a few good ideas.

avtarsehra [12:13 PM] But having a direct reference to the chain means you can’t construct transactions without some access to online information, e.g. the transaction hash of some reference point. Yes, I think for such changes it is good to have a deep debate. Actually, it would be good if we could keep these conversations as reference; maybe we should be discussing this on GitHub…

elaineo commented 7 years ago

@avtarsehra can you turn ECIP1011 into a PR to create a persistent discussion thread?

Without reading the above convo, it looks fairly reasonable and has fewer client changes than ECIP1012.

avtarsehra commented 7 years ago

@elaineo I have just done that now.

mikeyb commented 7 years ago

Status check? Seems ECIP 1011 doesn't exist and I believe replay protection was added in the Diehard fork. Think this can be closed now?

avtarsehra commented 7 years ago

@mikeyb Yes that is right. I will close.

realcodywburns commented 7 years ago

Does this issue still need to be open? I'm cleaning up.