So it's roughly:

- pybitmessage = fastcpu*3
- c-singlethread = pybitmessage*1.5
- js-singlethread-chrome = pybitmessage*42
- js-workers-chrome = pybitmessage*9
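
For context, every implementation above is racing through the same inner loop: take a candidate nonce, prepend it to the payload's initial hash, double-SHA-512 the result, and compare the first 8 bytes against the target. A minimal single-threaded sketch of that loop, using Node's built-in crypto module (the browser builds measured here use sha.js instead):

```js
const crypto = require("crypto");

// One pass through this loop body is one "dhash" (double SHA-512).
function pow(target, initialHash) {
  const buf = Buffer.alloc(8 + initialHash.length);
  initialHash.copy(buf, 8);
  for (let nonce = 0n; ; nonce++) {
    buf.writeBigUInt64BE(nonce, 0);           // 8-byte big-endian trial nonce
    const inner = crypto.createHash("sha512").update(buf).digest();
    const trial = crypto.createHash("sha512").update(inner).digest();
    // First 8 bytes of the double hash, read as a big-endian uint64.
    if (trial.readBigUInt64BE(0) <= target) return nonce;
  }
}
```

Here `target` is a BigInt and `initialHash` a Buffer; the gist linked below contains the actual implementations that were benchmarked.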
Also, this example uses a fairly easy nonce (22M dhashes); x2 and x4 nonces are very common (30 and 60 seconds in PyBitmessage respectively).
So, given these results, even a simple POW is very hard to compute in a browser environment, even with the fastest browser, the fastest SHA-512 implementation, and an 8-core CPU.
The solution might be to compute POW right on the websocket relays, which are required anyway. Given that most cheap VPSes have only 1 core available, relays should enforce rather strict send limits. An average limit of 1 message per 5 minutes per client might be reasonable.
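
As a rough illustration of such a limit, a relay could track the last accepted job per client and refuse anything more frequent than one message per 5 minutes. The function and client-id scheme below are hypothetical, not part of any existing relay:

```js
const LIMIT_MS = 5 * 60 * 1000;   // one POW job per client per 5 minutes
const lastJob = new Map();        // clientId -> timestamp of last accepted job

// Hypothetical helper a relay could call before starting POW for a client.
function tryAcceptJob(clientId) {
  const now = Date.now();
  if (now - (lastJob.get(clientId) || 0) < LIMIT_MS) {
    return false;                 // too soon: reject or queue the request
  }
  lastJob.set(clientId, now);
  return true;
}
```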
On the other hand, for TTL=86400 (1 day, 28 times smaller) the results are quite acceptable:
```
runPow()
undefined
index.browserify.js:17 pow: 19546.343ms
index.browserify.js:18 Result nonce: 3122437
```

(20 seconds)
```
pow: timer started index.browserify.js:5
pow: 33294.94ms index.browserify.js:17
"Result nonce: 3122437" index.browserify.js:18
```

(33 seconds)
So using small TTLs may be good enough for beta web client implementations. We can think about how to handle it more gracefully later.
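
For reference, TTL enters the difficulty through the target: the expected number of dhashes grows roughly linearly with TTL, so shrinking the TTL shrinks the work accordingly. A sketch of the target calculation, assuming the protocol v3 formula with the network-minimum nonceTrialsPerByte and payloadLengthExtraBytes of 1000 (the payload length passed in is just an illustration):

```js
// A larger target means fewer expected dhashes before a valid nonce is found.
// Integer arithmetic via BigInt, truncating like the reference implementation.
function getTarget(ttl, payloadLength) {
  const trialsPerByte = 1000n;   // network minimum
  const extraBytes = 1000n;      // network minimum
  const len = BigInt(payloadLength) + 8n + extraBytes;   // +8 for the nonce field
  return 2n ** 64n / (trialsPerByte * (len + (BigInt(ttl) * len) / 65536n));
}

// e.g. getTarget(86400, 500) (TTL = 1 day) is far larger than the target for a
// multi-week TTL, which is why the timings above drop so much.
```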
Comparison of POW speed in different environments
Implementation sources: https://gist.github.com/Kagami/e15186b5d73224ca8c47
| POW implementation | Time |
| --- | --- |
| fastcpu (OpenSSL, 8 threads) | 5 seconds |
| PyBitmessage (8 processes) | 15 seconds |
| C (OpenSSL, 1 thread) | 20 seconds |
| JS (sha.js, 8 web workers), Chrome | 132 seconds (2.25 minutes) |
| JS (sha.js, 8 web workers), Firefox | 230 seconds (4 minutes) |
| JS (sha.js, 1 thread), Chrome | 633 seconds (10.5 minutes) |
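
For the web worker rows, the usual way to parallelize is to partition the nonce space: worker i tests nonces i, i+N, i+2N, ... and the first hit wins. A browser-side sketch of that idea (using WebCrypto's SHA-512 purely for brevity; the measured build uses sha.js, and none of this is the gist's actual code):

```js
const NUM_WORKERS = 8;

// Worker body: stride through the nonce space and report the first hit.
const workerSrc = `
  self.onmessage = async (e) => {
    const { start, step, target, initialHash } = e.data;
    const tgt = BigInt(target);
    const buf = new Uint8Array(8 + initialHash.length);
    buf.set(initialHash, 8);
    const view = new DataView(buf.buffer);
    for (let nonce = BigInt(start); ; nonce += BigInt(step)) {
      view.setBigUint64(0, nonce);                        // big-endian trial nonce
      const inner = await crypto.subtle.digest("SHA-512", buf);
      const trial = await crypto.subtle.digest("SHA-512", inner);
      if (new DataView(trial).getBigUint64(0) <= tgt) {
        self.postMessage(nonce.toString());
        return;
      }
    }
  };
`;

function powParallel(target, initialHash) {
  return new Promise((resolve) => {
    const url = URL.createObjectURL(new Blob([workerSrc], { type: "text/javascript" }));
    const workers = [];
    for (let i = 0; i < NUM_WORKERS; i++) {
      const w = new Worker(url);
      w.onmessage = (e) => {
        workers.forEach((x) => x.terminate());            // first result wins
        resolve(BigInt(e.data));
      };
      w.postMessage({ start: i, step: NUM_WORKERS, target: target.toString(), initialHash });
      workers.push(w);
    }
  });
}
```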