openethereum / parity-ethereum

The fast, light, and robust client for Ethereum-like networks.

60 TPS ? (parity aura v1.11.11) #9393

Closed drandreaskrueger closed 5 years ago

drandreaskrueger commented 6 years ago

I am benchmarking Ethereum based PoA chains, with my toolbox chainhammer.

My initial results for a dockerized network of parity aura v1.11.8 nodes ...

... leave room for improvement :-)

Initial -unoptimized- run:

chainreader/img/parity-poa-playground_run1_tps-bt-bs-gas_blks108-211.png

More details here: https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#benchmarking

Please help ...

... by suggesting what we could try, to get this faster than 60 TPS.

(Ideally approx 8 times faster, to beat quorum IBFT.)

Thanks a lot!
Andreas


I'm running:

  • Which Parity version?: v1.11.8
  • Which operating system?: Linux
  • How installed?: via docker
  • Are you fully synchronized?: not relevant
  • Which network are you connected to?: private chain
  • Did you try to restart the node?: yes
  • actual behavior: slow
  • expected behavior: faster
  • steps to reproduce: see parity.md
Tbaut commented 6 years ago

There is definitely a lot of room for improvement :) Can you share your setup:

ddorgan commented 6 years ago

There are common options to help here. They include:

--jsonrpc-server-threads maybe set to 4 or 8.

--tx-queue-size maybe set to 16536

And also scaling verification via: --scale-verifiers

5chdn commented 6 years ago

What's the block gas limit and aura block time?

Please share config and chain spec.

drandreaskrueger commented 6 years ago

Fantastic, thanks for all the hints (and the tweet ;-) )

Answers are all in here, probably mainly below here.

The author/issue-answerer of that parity-poa-playground seems grumpy - so I am happy that the parity-deploy.sh team is really responsive & helpful. Will try that again instead, next week.

Back in the office on Tuesday. Really looking forward to an optimized run. Have a good weekend, everyone.

5chdn commented 6 years ago

What's the block gas limit and aura block time?

Ok, I see, this is pretty much your answer ^

You have three authorities with one second block time and a gas floor target of 6 billion.

This is a very good configuration to test TPS; however, it does start with a lower block gas limit and moves up slowly towards the target. Did you consider running it for an extended period of time (hours, days), or simply modifying the network configuration to start with a very high block gas limit?
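
(For example, the ramp-up can be watched with a quick web3.py check - just a sketch, assuming the node's RPC is on localhost:8545; with a gasLimitBoundDivisor of 0x400 the limit can move by at most 1/1024 of its value per block:)

# sketch: watch the block gas limit ramping up towards the gas floor target
from web3 import Web3, HTTPProvider

w3 = Web3(HTTPProvider('http://localhost:8545'))
b = w3.eth.getBlock('latest')
print(b.number, b.gasLimit, b.gasUsed)  # gasLimit should grow block by block until it reaches the target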

drandreaskrueger commented 6 years ago

...

drandreaskrueger commented 6 years ago

--jsonrpc-server-threads maybe set to 4 or 8.

Great. That looks promising. Earlier in my endeavour, I was surprised to see how little multithreading actually helped with parity (*), but this could explain that, yes.

Is that a parity-only setting, or can geth do that too?

--tx-queue-size maybe set to 16536

I think that is already set high, no?

drandreaskrueger commented 6 years ago

(*) Actually, back then it was the energywebfoundation "tobalaba" fork of parity. By the way, I think they left some issues unanswered; perhaps one of you has ideas - after all, it is parity 1.8.0, right?

5chdn commented 6 years ago

don't use the ewf client. parity ethereum now supports chain tobalaba

drandreaskrueger commented 6 years ago

parity ethereum now supports chain tobalaba

Great.

Did I lose time benchmarking their outdated client then? Tobalaba was one big hiccup, until they fixed that.

Not sure I will get the time now to repeat all the Tobalaba benchmarking. But feel free to do that yourself; chainhammer is not difficult to use. (Then please send a pull request, and I will include it in chainhammer. Thanks.)

don't use the ewf client.

Is it completely integrated into parity now, with all its EWF added functionality?

Is tobalaba PoA also Aura?

5chdn commented 6 years ago

Is it completely integrated into parity now, with all its EWF added functionality?

It always has been. They just rebranded the client.

Is tobalaba PoA also Aura?

Yes

drandreaskrueger commented 6 years ago

This is a very good configuration to test TPS

Good. Still, if you know a better setup than that, I am happy to try that next week.

however, it does start with a lower block gas limit

not sure about that.

(1) Look at the bottom right diagram here; it shows gasUsed and gasLimit, and gas is not maxed out - in contrast to other runs (scroll down all the way here, in this early run - there the TPS was clearly limited by the gasLimit).

(2) Also, gasLimit is set to 0x165A0BC00 = 6 billion, no?

But I might not understand all those parameters yet. If so, sorry. What can I read about which parameters influence TPS?

Most importantly, as my time is limited:

simply modify the network configuration

Feel free to (simply run chainhammer yourself, or) send me any other configuration that you think will perform better. Does authority.toml & chain.json fully define that, or are there more settings files?

Thanks a lot.

drandreaskrueger commented 6 years ago

... please send me any other configuration authority.toml & chain.json that you think will perform better.

Thanks.

drandreaskrueger commented 6 years ago

new tests

I have tried your suggestions.

But

no acceleration!

See https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run2 and below.

63 TPS

that is slow.

... new ideas please. Thanks.

drandreaskrueger commented 6 years ago

New run 6, with some more parameters added; see the description of the run in https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run6

--> 65 TPS

ddorgan commented 6 years ago

Just replicating your setup now. But maybe a --gas-floor-target of something more realistic would be a good idea ... e.g. maybe 20m ... Also, moving the stepDuration to about 3 to 4 seconds would be needed in a real-life situation, when not just benchmarking everything on one host.

drandreaskrueger commented 6 years ago

Just replicating your setup now.

Great, I am happy. Thanks a lot for your time, @ddorgan @5chdn and @Tbaut. Let's find out how we can get parity faster - ideally as fast as geth, no?

And? Got it running? Which rates are you able to see? Or: Need help?

--gas-floor-target of something more realistic would be a good idea ... e.g. maybe 20m ...

Thanks. Explanation:

--gas-floor-target Amount of gas per block to target when sealing a new block (default: 4700000).
https://hudsonjameson.com/2017-06-27-accounts-transactions-gas-ethereum/

Please tell me about all the parameters that might be able to accelerate the current setup. A warning: this whole benchmarking is admittedly not a "realistic" setup for the rates to expect when running a network of nodes distributed across the planet. Internet bandwidth, ping times, etc. will always slow it down; I am looking at it more as an attempt to identify the current upper limit - any realistic setup will always be slower than what I have measured. However, if you want to, you can just as well create a network with nodes on each continent, and then use chainhammer to benchmark that. So:

Also moving the stepDuration to about 3 to 4 seconds

Yes.

However, I have tested most of the other systems with a fast block rate too; my initial focus was on quorum, and raft consensus has sub-second block times, and quorum IBFT runs without problems with 1 second block times - without the internet, of course.

Parity, however, is not even adhering to its own parameter - I suppose this is a target block time of 2 seconds, right? - and the run with 4 nodes then results in 4-8 second block times! So aren't we already at the block speed that you are suggesting?

These are the parameters that I have been adding to the standard https://github.com/paritytech/parity-deploy

--geth 
--jsonrpc-server-threads 100
--tx-queue-size 20000
--cache-size 4096
--tracing off
--gas-floor-target 20000000
--pruning fast
--tx-queue-mem-limit 0
--no-dapps
--no-secretstore-http

because I found them somewhere in an issue about speed. Which of them are irrelevant?

drandreaskrueger commented 6 years ago

65 TPS

settings & log of run7: https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run7

results diagrams: https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#result-run-7

https://gitlab.com/electronDLT/chainhammer/raw/master/chainreader/img/parity-aura_run7_tps-bt-bs-gas_blks3-90.png

There is a new README.md --> quickstart now ...

... so if you have any intuition or knowledge of how to accelerate this, please replicate my setup, and then start modifying the parameters of the network of parity nodes, e.g. with parity-deploy - until you get to better TPS rates.

Then please alert us how you did it. Thanks.

drandreaskrueger commented 6 years ago

I found a new upper limit of 69 TPS, but with instantseal and only 1 node.

AyushyaChitransh commented 6 years ago

Around June (I don't exactly recall which parity version), we made some scripts to transfer ether from one account to another account. This script pushed transactions to a 5-server setup (across different geographical regions), and we were able to achieve a maximum TPS of 5001.

These results were obtained from not one but multiple blocks.

Environment:

Other than this, there were no more changes to --jsonrpc-server-threads

Most likely the low TPS that you are achieving is because of contract transactions.

Cause:

Contract transactions are heavier in terms of the amount of gas used, so a few transactions can completely fill up the block. Maybe increasing the gas limit would allow you to get a higher TPS.

Regarding instantSeal: this may not be required in benchmarking any ethereum client, as it is not built to be decentralised (the core idea of blockchain). Even if one person were able to get a higher TPS on instantSeal, this blockchain would not be of much use to others willing to be part of that network.

5chdn commented 6 years ago

I don't have time to reproduce this myself right now @drandreaskrueger - but 65 sounds wrong by orders of magnitude. what @AyushyaChitransh reports sounds more realistic.

drandreaskrueger commented 6 years ago

we were able to achieve maximum TPS of 5001.

Your TPS benchmarking sounds impressive.

Please (someone) publish the exact commands - perhaps in a dockerized form, so that we can replicate it easily within a few minutes.

If it can be replicated, I am happy to include it here in my repo.

Most likely this low TPS that you are able to achieve is because of contract transactions.

Yes to "lower" but no to "low" - because I am putting geth, quorum, etc through the exact same task.

And: Simple money transfer is not relevant for our use case.

My storage .set(x) transaction takes less than 27000 gas, which is at least 10 times your gas usage, right? (your 8000029/3001 = ...) So if gas were the only parameter, I should be able to see 300-500 TPS - which I do with geth, but not with parity. Something other than the EVM must be slowing down parity.
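
(The arithmetic spelled out - the 26691 gas figure is the gasUsed of one .set() receipt, shown further below in this thread:)

# worked arithmetic behind the 300-500 TPS estimate above
gas_per_transfer = 8000029 / 3001            # ~2666 gas per simple transfer in the reported run
gas_per_set_call = 26691                     # gasUsed of one storage .set() transaction
ratio = gas_per_set_call / gas_per_transfer  # ~10x more gas per contract call
print(round(ratio, 1), round(5001 / ratio))  # -> 10.0 499, i.e. ~500 TPS if gas were the only limit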

I am always using the same transaction, on geth, quorum, parity, etc. - so the TPS values are comparable, right?

I have never benchmarked simple money transfers, because that is not what we do. Instead we need to choose the fastest client for smart contract transactions, and that is currently quorum IBFT (with over 400 TPS) or geth Clique (with over 300 TPS), and they were measured in exactly the same way as I measured parity. Please see quorum-IBFT.md and geth.md. It is a pity, because for years we have always preferred parity, and it would mean that we have to revise quite a bit of our in-house code - but we simply cannot ignore a 6 times faster TPS.

@AyushyaChitransh please repeat your benchmarking with the simplest possible smart contract transaction - storing one value in a contract (or doing one multiplication and one addition - I still want to do that, but haven't had the time yet, see TODO.md).

See the call and the contract.

Maybe increasing gas limit would allow you to get a higher TPS.

Done that, been there.

Please have a look at the bottom right diagram in each of my measurements, then you can see gasLimit and gasUsed per second. When I see one touching the other, I know that I have to raise the gasLimit. Which I did. In the last runs I used 40 million gas as a limit. But no, the blocks were not maxed out.

Thanks for all your answers, but please spend some time looking at my stuff first, thanks.

StepDuration: 1, which ensures 1 block issuance per second

No, it doesn't. I left mine set to 2 seconds, but almost always it ended up being 4-8 seconds.
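
(A quick way to check the actual average block time - just a sketch, not chainhammer code, assuming the RPC is on localhost:8545 and at least 50 blocks exist:)

# sketch: measure the average block time over the last 50 blocks
from web3 import Web3, HTTPProvider

w3 = Web3(HTTPProvider('http://localhost:8545'))
latest = w3.eth.blockNumber
t0 = w3.eth.getBlock(latest - 50).timestamp
t1 = w3.eth.getBlock(latest).timestamp
print((t1 - t0) / 50.0, "seconds per block, on average")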

across different geographical regions

That is a more realistic setup, to include the effects of the internet. For now, I am benchmarking the client itself, and all my 4-7 nodes are running on one and the same machine, a 2016 desktop. But the CPU, for example, is not maxing out; it stays around 50% during the whole benchmark.

Regarding instantSeal: This may not be required in benchmarking any ethereum client

I know. Of course. But it is the simplest thing I can ask parity to do, and even then I have seen less than 70 TPS.

Please, anyone, now replicate that setup of run 8, with the new quickstart manual of chainhammer. Or -EDIT- follow the exact instructions below.

(@AyushyaChitransh , please publish your experimental setup in a similar way, with the exact commandline commands to execute, so that others can replicate your 3k - 5k TPS. Thanks.)

I don't have time to reproduce this myself right now

I get that, @5chdn. We are all busy. And as you can see in my TODO.md list cited above, I also have some unfinished tasks with this. However, until you or someone else disproves my findings, it already looks as if we might have our (yes, still preliminary) result:

For our purposes parity is ~6 times slower than geth.

65 sounds wrong by orders of magnitude

Yes, and I am surprised about that myself.

The whole intention of all these interactions here is to get anyone who is more knowledgeable about parity than me to help me find the problem - and fix it. If your team is too small, what about employing more people? Or perhaps there is someone else out there who would work for no pay? Please help, thanks.

drandreaskrueger commented 6 years ago

chainhammer

Actually, today I tried this again - tested on and optimized for a Debian AWS machine (debian-stretch-hvm-x86_64-gp2-2018-08-20-85640) - all of this really does work:

How to replicate the results

toolchain

# docker
# this is for Debian Linux, 
# if you run a different distro, google "install docker [distro name]"
sudo apt-get update 
sudo apt-get -y remove docker docker-engine docker.io 
sudo apt-get install -y apt-transport-https ca-certificates wget software-properties-common
wget https://download.docker.com/linux/debian/gpg 
sudo apt-key add gpg
rm gpg
echo "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee -a /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-cache policy docker-ce
sudo apt-get -y install docker-ce docker-compose
sudo systemctl start docker

sudo usermod -aG docker ${USER}
groups $USER

log out and log back in, to enable those usergroup changes

# parity-deploy
# for a dockerized parity environment
# this is instantseal, NOT a realistic network of nodes
# for 8 different setups see chainhammer-->parity.md
git clone https://github.com/paritytech/parity-deploy.git paritytech_parity-deploy
cd paritytech_parity-deploy
sudo ./clean.sh
./parity-deploy.sh --config dev --name instantseal --geth
docker-compose up

new terminal:

# solc
# someone should PLEASE create a Debian specific installation routine
# see https://solidity.readthedocs.io/en/latest/installing-solidity.html 
# and https://github.com/ethereum/solidity/releases
wget https://github.com/ethereum/solidity/releases/download/v0.4.24/solc-static-linux
chmod 755 solc-static-linux 
echo $PATH
sudo mv solc-static-linux /usr/local/bin/
sudo ln -s /usr/local/bin/solc-static-linux /usr/local/bin/solc
solc --version

Version: 0.4.24+commit.e67f0147.Linux.g++

chainhammer

# chainhammer & dependencies
git clone https://gitlab.com/electronDLT/chainhammer electronDLT_chainhammer
cd electronDLT_chainhammer/

sudo apt install python3-pip libssl-dev
sudo pip3 install virtualenv 
virtualenv -p python3 py3eth
source py3eth/bin/activate

python3 -m pip install --upgrade pip==18.0
pip3 install --upgrade py-solc==2.1.0 web3==4.3.0 web3[tester]==4.3.0 rlp==0.6.0 eth-testrpc==1.3.4 requests pandas jupyter ipykernel matplotlib
ipython kernel install --user --name="Python.3.py3eth"
# configure chainhammer
nano config.py

RPCaddress, RPCaddress2 = 'http://localhost:8545', 'http://localhost:8545'
ROUTE = "web3"
# test connection
touch account-passphrase.txt
./deploy.py 
# start the chainhammer viewer
./tps.py

new terminal

# same virtualenv
cd electronDLT_chainhammer/
source py3eth/bin/activate

# start the chainhammer send routine
./deploy.py notest; ./send.py 

or:

# not blocking but with 23 multi-threading workers
./deploy.py notest; ./send.py threaded2 23

everything below here is not necessary

new terminal

( * )

# check that the transactions are actually successfully executed:

geth attach http://localhost:8545

> web3.eth.getTransaction(web3.eth.getBlock(50)["transactions"][0])
{
  gas: 90000, ...
}

> web3.eth.getTransactionReceipt(web3.eth.getBlock(50)["transactions"][0])
{ 
  gasUsed: 26691,
  status: "0x1", ...
}
> 
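
(The same check should also work via web3.py from the chainhammer virtualenv, without a local geth - a rough sketch:)

# sketch: check a transaction and its receipt with web3.py instead of the geth console
from web3 import Web3, HTTPProvider

w3 = Web3(HTTPProvider('http://localhost:8545'))
txhash = w3.eth.getBlock(50)['transactions'][0]
print(w3.eth.getTransaction(txhash)['gas'])     # e.g. 90000
receipt = w3.eth.getTransactionReceipt(txhash)
print(receipt['gasUsed'], receipt['status'])    # e.g. 26691 1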

geth

( ) I do not want to install geth locally, but start the geth console from a docker container* - but I don't succeed:

docker run ethereum/client-go attach https://localhost:8545

WARN [09-10|09:38:24.984] Sanitizing cache to Go's GC limits provided=1024 updated=331
Fatal: Failed to start the JavaScript console: api modules: Post https://localhost:8545: dial tcp 127.0.0.1:8545: connect: connection refused
Fatal: Failed to start the JavaScript console: api modules: Post https://localhost:8545: dial tcp 127.0.0.1:8545: connect: connection refused

Please help me with ^ this, thanks.


Until that is sorted, I simply install geth locally:

wget https://dl.google.com/go/go1.11.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.11.linux-amd64.tar.gz 
rm go1.11.linux-amd64.tar.gz 
echo "export PATH=\$PATH:/usr/local/go/bin:~/go/bin" >> .profile

logout, log back in

go version

go version go1.11 linux/amd64

go get -d github.com/ethereum/go-ethereum
go install github.com/ethereum/go-ethereum/cmd/geth
geth version

geth version
WARN [09-10|09:56:11.759] Sanitizing cache to Go's GC limits provided=1024 updated=331
Geth
Version: 1.8.16-unstable
Architecture: amd64
Protocol Versions: [63 62]
Network Id: 1
Go Version: go1.11
Operating System: linux
GOPATH=
GOROOT=/usr/local/go

please now try this yourselves

And about "not having the time" - these 2.5 hours happened on my FREE DAY. I must convince them now that I can take those hours off again.

drandreaskrueger commented 6 years ago

So, I have invested one more workday, to make it even easier, quicker, faster, simpler for you @AyushyaChitransh @5chdn @Tbaut and @ddorgan (or anyone else really) ... to have an 8-minute look at the problem:

a readymade Amazon AMI

!!!

with everything pre-installed. Only look at reproduce.md#readymade-amazon-ami if you need to know what I did. Other than that, simply ...

... use my AMI:

simplify the ssh access, by adding this (with your IP, obviously) to your local machine's

nano ~/.ssh/config
Host chainhammer
  Hostname ec2-35-178-181-232.eu-west-2.compute.amazonaws.com
  StrictHostKeyChecking no
  User admin
  IdentityFile ~/.ssh/AndreasKeypairAWS.pem

now it becomes this simple to connect:

ssh chainhammer

parity

ssh chainhammer

cd ~/paritytech_parity-deploy
sudo ./clean.sh
./parity-deploy.sh --config dev --name instantseal --geth
docker-compose up

For more complex setups than instantseal, see parity.md.

If you want to end this ... 'Ctrl-c' and:

docker-compose down -v
sudo ./clean.sh

geth

ssh chainhammer

cd ~/drandreaskrueger_geth-dev/
docker-compose up

If you want to end this ... 'Ctrl-c' and:

docker-compose down -v

chainhammer: test connection

... and create some local files

ssh chainhammer
cd electronDLT_chainhammer && source py3eth/bin/activate

./deploy.py

If there are connection problems, probably need to configure the correct ports in config.py:

nano config.py

chainhammer: watcher

./tps.py

chainhammer: send transactions

ssh chainhammer
cd electronDLT_chainhammer && source py3eth/bin/activate

./deploy.py notest; ./send.py threaded2 23

Thanks for your time.

Any idea how to accelerate parity ... much appreciated!


EDIT:

Updates of this at https://gitlab.com/electronDLT/chainhammer/blob/master/reproduce.md#readymade-amazon-ami

tnpxu commented 6 years ago

8960 - just found this, can you try with these settings?

drandreaskrueger commented 6 years ago

8960 - just found this, can you try with these settings?

Thanks. Yes, I am using almost all of those switches already, see results --> (A) parity aura.

I will only be in the office on Tuesday again, but until then:

Feel free to try and optimize further, that is why I have created the Amazon AWS image for you guys. With that you can reproduce my results within a few minutes - and then try to get parity 5-6 times faster, to catch up with geth. If you do not want to use Amazon, scroll up on that same page - I have logged every single step that you need to do.

Also, I have asked more people for help, see stackexchange#58521, and twitter#20180911 - please retweet, thanks. Any other place where Ethereum/Parity people hang out?


EDIT:

8960 - just found this, can you try with these settings?

Thanks a lot, @tnpxu - I have tried those settings, and they do not accelerate parity.

drandreaskrueger commented 6 years ago

Final verdict:

hardware     node type   #nodes   peak TPS_av   final TPS_av
t2.large     parity      4        53.5          52.9
t2.xlarge    parity      4        56.5          56.1
t2.2xlarge   parity      4        57.6          57.6
t2.2xlarge   geth        3+1      421.6         400.0
t2.xlarge    geth        3+1      386.1         321.5
t2.large     geth        3+1      170.7         169.4
t2.small     geth        3+1      96.8          96.5

Giving up now.

ddorgan commented 6 years ago

@drandreaskrueger fyi a good reference from Cody Born @ Microsoft ....

https://twitter.com/codyborn/status/1040081548135948288

Maybe less taxing contract calls though ...

drandreaskrueger commented 6 years ago

Thanks.

EDIT: I have spent some more time with Cody's page now. Some small corrections:

Scroll down to the "Development" --> "Programmatically interacting with a smart contract" chapter on https://docs.microsoft.com/en-us/azure/blockchain-workbench/ethereum-poa-deployment (page which is linked in that tweet).

What I see there:

(1)
His smart contract is roughly as simple as mine: I store an int, he stores a string.

That can't be it, right?

(2)
He is signing the transaction for deploying the smart contract himself! And then sends a rawTransaction!

   tx.sign(privateKey);
   var raw = '0x' + tx.serialize().toString('hex');
   web3.eth.sendRawTransaction(raw, function (txErr, transactionHash) { ...

Perhaps he does the same when he hammers a large number of transactions at the node - we don't know, but it is not unlikely.

Unfortunately he does NOT give us his benchmarking code anywhere. Or can you find it?

(3)
EDIT: He perhaps tells us his parity settings, but rather hidden - and in a hard-to-read format. I found this today:

"paritySpec": {"examples": [        "\n{\n \"name\": \"PoA\",\n \"engine\": {\n \"authorityRound\": {\n \"params\": {\n \"stepDuration\": \"2\",\n \"validators\" : {\n \"safeContract\": \"0x0000000000000000000000000000000000000006\"\n },\n \"gasLimitBoundDivisor\": \"0x400\",\n \"maximumExtraDataSize\": \"0x2A\",\n \"minGasLimit\": \"0x2FAF080\",\n \"networkID\" : \"0x9a2112\"\n }\n }\n },\n \"params\": {\n \"gasLimitBoundDivisor\": \"0x400\",\n \"maximumExtraDataSize\": \"0x2A\",\n \"minGasLimit\": \"0x2FAF080\",\n \"networkID\" : \"0x9a2112\",\n \"wasmActivationTransition\": \"0x0\"\n },\n \"genesis\": {\n \"seal\": {\n \"authorityRound\": {\n \"step\": \"0x0\",\n \"signature\": \"0x0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\"\n }\n },\n \"difficulty\": \"0x20000\",\n \"gasLimit\": \"0x2FAF080\"\n },\n \"accounts\": {\n \"0x0000000000000000000000000000000000000001\": { \"balance\": \"1\", \"builtin\": { \"name\": \"ecrecover\", \"pricing\": { \"linear\": { \"base\": 3000, \"word\": 0 } } } },\n \"0x0000000000000000000000000000000000000002\": { \"balance\": \"1\", \"builtin\": { \"name\": \"sha256\", \"pricing\": { \"linear\": { \"base\": 60, \"word\": 12 } } } },\n \"0x0000000000000000000000000000000000000003\": { \"balance\": \"1\", \"builtin\": { \"name\": \"ripemd160\", \"pricing\": { \"linear\": { \"base\": 600, \"word\": 120 } } } },\n \"0x0000000000000000000000000000000000000004\": { \"balance\": \"1\", \"builtin\": { \"name\": \"identity\", \"pricing\": { \"linear\": { \"base\": 15, \"word\": 3 } } } },\n \"0x0000000000000000000000000000000000000006\": { \"balance\": \"0\", \"constructor\" : \"…\" }\n }\n}"]

Is that it? But ... it says "example", so we don't know.

(4) He just does not tell us how exactly he benchmarked. I could not yet find his "... using a simple perf tool I put together...." code.

So his results are simply not reproducible.

If you find all of his code, feel free to point me to it. Thanks.

(5)
It is very probably not an issue of Azure vs Amazon. Yes, he says he sees 400 TPS on Azure.

I see less than 60 TPS on Amazon AWS, but the exact same Amazon AWS machine ... makes geth clique run much faster, at 400 TPS.

(6) What else can you read out of his article? What can you try to make chainhammer faster?

drandreaskrueger commented 6 years ago

Of all the above points ... only (2) seems really relevant.

I am using contract.functions.set( x=arg ).transact(txParameters) via web3, or eth_sendTransaction via direct RPC call.

So I let the node do the transaction signing. How else would I do it? With 23 threads hammering the same node, and no certainty about the order in which the transactions arrive, I would not know which nonce to use for each of those transactions - right?

In contrast he seems to sign the contract-deployment transaction himself, and then does a web3.eth.sendRawTransaction. Perhaps he does the same when hammering? We don't know, as I could not find his benchmarking code yet.

--> If that is the underlying reason for geth handling transactions 6 times faster than parity - then I suggest that you parity people have a long look at your transaction signing code; perhaps that is what slows down parity so dramatically? Perhaps compare your code with geth's and see what they do differently.
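
(For illustration only - a minimal, untested web3.py sketch of that alternative: signing locally and tracking the nonce client-side instead of asking the node. sender, private_key and contract are placeholders, this is not chainhammer code, and with several sending threads the nonce counter would additionally need a lock:)

# sketch: client-side signing with a locally tracked nonce (single sender, single thread)
from itertools import count
from web3 import Web3, HTTPProvider

w3 = Web3(HTTPProvider('http://localhost:8545'))
sender = '0x...'                                      # placeholder: a funded account
private_key = '0x...'                                 # placeholder: its private key
nonces = count(w3.eth.getTransactionCount(sender))    # ask the node once, then count locally

def send_set(x, contract):
    tx = contract.functions.set(x).buildTransaction({
        'from': sender,
        'nonce': next(nonces),                        # no per-transaction nonce lookup at the node
        'gas': 100000,
        'gasPrice': w3.toWei(20, 'gwei'),
    })
    signed = w3.eth.account.signTransaction(tx, private_key)
    return w3.eth.sendRawTransaction(signed.rawTransaction)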

EDIT:

Where is @CodyBorn's github repo, with his benchmarking code? That could solve questions.

drandreaskrueger commented 6 years ago

Giving up now

Again and again and again: If you don't believe my numbers, reproduce them yourself.

I have put an immense amount of work into this, with the single goal that you need no more than a few minutes to reproduce it: https://gitlab.com/electronDLT/chainhammer/blob/master/reproduce.md#readymade-amazon-ami

Be cooperative for once, and run it, and tell me what you see. Please.

And then make a suggestion how to tune parity.

I really really really want to see it fast. Why? We at electron.org.uk have invested time into, and built our code around parity; we had been parity users. It would be a pain to switch to geth now. But:

Until you come up with faster parity --settings ... I must believe my sad results:

geth is at least 5 times faster than parity.

:-(

5chdn commented 6 years ago

I'm sorry to hear that.

drandreaskrueger commented 6 years ago

what?

drandreaskrueger commented 6 years ago

you close this issue?

You really don't want users, seemingly.

5chdn commented 6 years ago

Oh, I misunderstood, you said you are giving up?

drandreaskrueger commented 6 years ago

Yes. Because I do not know what else to try.

Re-read the last three comments please.

--> You run it. Show me what to do.

Thanks.

drandreaskrueger commented 6 years ago

run 14 = without the --geth compatibility switch
https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run-14

--> slower not faster.

No further idea what to try.

gituser commented 6 years ago

@drandreaskrueger worth trying older version of parity (e.g. 1.8.x or 1.9.x or 1.10.x) and see if there is any difference at all.

There was some change in code signing in 1.10 or in 1.9 (not sure though).

drandreaskrueger commented 6 years ago

Thanks for the idea, @gituser

It might explain why I have seen faster rates with the Tobalaba fork (~ similar to parity version 1.8.0)


And how?

In https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run-14 it could be done here:

sed -i 's/parity:stable/parity:v1.11.11/g' docker-compose.yml

1.11.11 --> 1.8.x

BUT

curl -s 'https://registry.hub.docker.com/v2/repositories/parity/parity/tags/' | jq -r '."results"[]["name"]' | sort
beta
nightly
stable
v1.11.11
v2.0.4
v2.0.5
v2.0.5-rc0
v2.0.6
v2.1.0
v2.1.1

they have deleted all older versions.

hello parity team - could you please re-create the docker hub image of the most stable 1.8.x version?

Thanks a lot.

gituser commented 6 years ago

they have deleted all older versions.

weird, from github repository (just cloned freshly):

$ git clone https://github.com/paritytech/parity-ethereum
Cloning into 'parity-ethereum'...
remote: Enumerating objects: 52, done.
remote: Counting objects: 100% (52/52), done.
remote: Compressing objects: 100% (46/46), done.
Receiving objects:  21% (28972/137960), 16.01 MiB | 10.58 MiB/s    
Receiving objects:  60% (82776/137960), 40.14 MiB | 11.34 MiB/s     
remote: Total 137960 (delta 12), reused 14 (delta 6), pack-reused 137908
Receiving objects: 100% (137960/137960), 55.64 MiB | 9.40 MiB/s, done.
Resolving deltas: 100% (99246/99246), done.
Checking connectivity... done.
$ cd parity-ethereum
$ git tag -l
beta-0.9
beta-0.9.1
beta-release
mac-installer-hotfix
nightly
stable-release
test-tag
v1.0.0
v1.0.0-rc1
v1.0.1
v1.0.2
v1.1.0
v1.10.0
v1.10.0-ci0
v1.10.0-ci1
v1.10.0-ci2
v1.10.0-ci3
v1.10.0-ci4
v1.10.0-ci5
v1.10.0-ci6
v1.10.0-ci7
v1.10.1
v1.10.1-ci0
v1.10.2
v1.10.2-ci0
v1.10.2-ci1
v1.10.3
v1.10.3-ci0
v1.10.3-ci1
v1.10.4
v1.10.5
v1.10.5-ci0
v1.10.5-ci1
v1.10.5-rc0
v1.10.6
v1.10.7
v1.10.7-ci0
v1.10.8
v1.10.8-ci0
v1.10.8-ci1
v1.10.9
v1.10.9-rc0
v1.11.0
v1.11.0-ci0
v1.11.1
v1.11.10
v1.11.11
v1.11.2-ci0
v1.11.3
v1.11.4
v1.11.4-ci0
v1.11.5
v1.11.5-ci0
v1.11.5-ci1
v1.11.6
v1.11.6-rc0
v1.11.6-rc1
v1.11.6-rc2
v1.11.7
v1.11.7-rc0
v1.11.7-rc1
v1.11.7-rc2
v1.11.8
v1.12.0-ci0
v1.12.0-ci1
v1.12.0-ci2
v1.12.0-ci3
v1.12.0-ci4
v1.12.0-ci5
v1.2.0
v1.2.1
v1.2.2
v1.2.3
v1.2.4
v1.3.0
v1.3.1
v1.3.10
v1.3.11
v1.3.12
v1.3.13
v1.3.14
v1.3.15
v1.3.2
v1.3.3
v1.3.4
v1.3.5
v1.3.6
v1.3.7
v1.3.8
v1.3.9
v1.4.0
v1.4.1
v1.4.10
v1.4.11
v1.4.12
v1.4.2
v1.4.3
v1.4.4
v1.4.5
v1.4.6
v1.4.7
v1.4.8
v1.4.9
v1.5.0
v1.5.10
v1.5.11
v1.5.12
v1.5.13
v1.5.2
v1.5.3
v1.5.4
v1.5.6
v1.5.7
v1.5.8
v1.5.9
v1.6.0
v1.6.1
v1.6.10
v1.6.2
v1.6.3
v1.6.4
v1.6.5
v1.6.6
v1.6.7
v1.6.8
v1.6.9
v1.7.0
v1.7.1
v1.7.10
v1.7.11
v1.7.12
v1.7.13
v1.7.2
v1.7.3
v1.7.4
v1.7.5
v1.7.6
v1.7.7
v1.7.8
v1.7.9
v1.8.0
v1.8.1
v1.8.10
v1.8.10-ci0
v1.8.10-ci1
v1.8.10-ci2
v1.8.10-ci3
v1.8.10-ci4
v1.8.10-ci5
v1.8.11
v1.8.11-ci0
v1.8.2
v1.8.3
v1.8.4
v1.8.5
v1.8.6
v1.8.7
v1.8.8
v1.8.8-ci0
v1.8.8-ci1
v1.8.8-ci2
v1.8.8-ci3
v1.8.8-ci4
v1.8.9
v1.8.9-ci0
v1.9.0
v1.9.1
v1.9.1-ci0
v1.9.1-ci1
v1.9.1-ci2
v1.9.1-ci3
v1.9.2
v1.9.2-ci0
v1.9.3
v1.9.3-ci0
v1.9.3-ci1
v1.9.3-ci2
v1.9.3-ci3
v1.9.3-ci4
v1.9.3-ci5
v1.9.4
v1.9.4-ci0
v1.9.5
v1.9.5-ci0
v1.9.5-ci1
v1.9.5-ci2
v1.9.5-ci3
v1.9.5-ci4
v1.9.5-ci5
v1.9.5-ci6
v1.9.6
v1.9.6-ci0
v1.9.6-ci1
v1.9.6-ci2
v1.9.7
v1.9.7-ci0
v1.9.7-ci1
v2.0.0
v2.0.0-rc0
v2.0.0-rc1
v2.0.0-rc2
v2.0.0-rc3
v2.0.1
v2.0.3
v2.0.3-rc0
v2.0.4
v2.0.5
v2.0.5-rc0
v2.0.6
v2.1.0
v2.1.0-rc0
v2.1.0-rc1
v2.1.0-rc2
v2.1.0-rc3
v2.1.0-rc4
v2.1.1
v2.2.0-rc0

There are multiple pages in that link you sent!

Check:

curl -s 'https://registry.hub.docker.com/v2/repositories/parity/parity/tags/?page=2' | json_pp
curl -s 'https://registry.hub.docker.com/v2/repositories/parity/parity/tags/?page=3' | json_pp

etc.

and yes there is no v1.8.x

drandreaskrueger commented 5 years ago

Oh fantastic, that should make it easy to test.

Pagination, sigh, had not expected that.
This is the command which returns the first 1000 tag names

curl -s 'https://registry.hub.docker.com/v2/repositories/parity/parity/tags/?page_size=1000'  | jq -r '."results"[]["name"]' | sort -t. -k 1,1n -k 2,2n -k 3,3n

so, all older versions are still on dockerhub, that is perfect:

beta
gitlab-next
latest
nightly
stable
v2.0.0
v2.0.0-rc1
v2.0.0-rc2
v2.0.0-rc3
v2.0.1
v2.0.3
v2.0.3-rc0
v2.0.4
v2.0.5
v2.0.5-rc0
v2.0.6
v2.1.0
v2.1.0-rc1
v2.1.0-rc2
v2.1.1
v1.5.13
v1.6.8
v1.6.9
v1.6.10
v1.7.0
v1.7.1
v1.7.2
v1.7.3
v1.7.4
v1.7.5
v1.7.6
v1.7.7
v1.7.8
v1.7.9
v1.7.10
v1.7.11
v1.7.12
v1.7.13
v1.8.0
v1.8.1
v1.8.2
v1.8.3
v1.8.4
v1.8.5
v1.8.6
v1.8.7
v1.8.8
v1.8.9
v1.8.10
v1.8.11
v1.9.1
v1.9.2
v1.9.3
v1.9.4
v1.9.5
v1.9.6
v1.9.7
v1.10.0
v1.10.1
v1.10.2
v1.10.3
v1.10.4
v1.10.5
v1.10.6
v1.10.7
v1.10.8
v1.10.9
v1.11.0
v1.11.1
v1.11.3
v1.11.4
v1.11.5
v1.11.6
v1.11.7
v1.11.7-rc0
v1.11.7-rc1
v1.11.7-rc2
v1.11.8
v1.11.10
v1.11.11

thanks! have a good weekend.

drandreaskrueger commented 5 years ago

I would probably try v1.7.13, to be on the safe side.

Did v1.7 already have aura?

Did it have the instantseal developmentChain, so would parity-deploy.sh --dev work?
That already would show something probably.


And how? In https://gitlab.com/electronDLT/chainhammer/blob/master/reproduce.md#parity change

sed -i 's/parity:stable/parity:v1.11.11/g' docker-compose.yml

1.11.11 --> 1.7.13

or (because Tobalaba)

1.11.11 --> 1.8.0 

or (because stable)

1.11.11 --> 1.8.11

5chdn commented 5 years ago

hello parity team - could you please re-create the docker hub image of the most stable 1.8.x version?

we don't delete anything. just checkout the tag directly, either on github or on docker

Did v1.7 already have aura?

yes.

drandreaskrueger commented 5 years ago

yes.

thanks.

drandreaskrueger commented 5 years ago

hello gituser, Re: your comment above

@gituser
gituser commented 4 days ago •
@drandreaskrueger worth trying older version of parity (e.g. 1.8.x or 1.9.x or 1.10.x)
and see if there is any difference at all.
There was some change in code signing in 1.10 or in 1.9 (not sure though).

I had a lot of hope when you said that.

But I have tried some older versions now:

(run15) v1.7.13 and instantseal https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run-15
(run16) v1.7.13 and aura https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run-16
(run17) v1.8.11 and aura https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run-17

not faster.

5chdn commented 5 years ago

not faster.

would be surprised if the speed varies across versions

drandreaskrueger commented 5 years ago

would be surprised if the speed varies across versions

yes me too.

but in the absence of any other substantial suggestions, and as a test of his comment

@gituser
gituser commented 4 days ago •
@drandreaskrueger worth trying older version of parity (e.g. 1.8.x or 1.9.x or 1.10.x)
and see if there is any difference at all.
There was some change in code signing in 1.10 or in 1.9 (not sure though).

I had to try that.

drandreaskrueger commented 5 years ago

I managed to get in contact with CodyBorn from Microsoft; he answered my tweets.

I have summarized what he is revealing about his approach here: https://gitlab.com/electronDLT/chainhammer/blob/master/codyborn.md

drandreaskrueger commented 5 years ago

CodyBorn

What is clear already: He is "too far out" to be applicable to my simple benchmarking.

I don't feel like creating hundreds of sender-accounts just to bypass nonce lookup and then sign my own transactions.

And even if that made parity faster, for me it would simply mean that paritytech should revisit that part of the parity code, to accelerate it - and I would repeatedly re-run my chainhammer, to notice when you got it faster.

Because his approach with web3.eth.sendRawTransaction() would not be practical for our daily use of parity; we want the Ethereum node to do that work for us efficiently, so we can use eth_sendTransaction(). Edit: Like on geth or quorum.

possibly the most important hint for paritytech

it is probably something outside of sendRawTransaction() but within the parity code base which is slowing down transactions by a factor of more than 500%.

the most important experiment that CodyBorn could do:

replicate his exact approach but instead of parity aura using geth clique or quorum IBFT, see geth.md#results, and quorum-IBFT.md#on-amazon-aws ...


EDIT: ... the latter there contains my best result so far: 524 TPS on average when sending 20k tx into 1 node on a quorum-crux-IBFT network of 4 nodes, on an Amazon c5.4xlarge instance. That's it, from me, for now - until anyone makes any better suggestions for parity.

drandreaskrueger commented 5 years ago

parity v2.2.0 (single-threaded) seems slightly faster than v1.11.11 (multi-threaded), see these brand new results: https://github.com/paritytech/parity-ethereum/issues/9582#issuecomment-427144992