jon4hz opened 1 year ago
You've been very busy, do you even sleep sometimes lol
Little clarification though: when I use `geth` and `miningcore` on the same server/machine and I set `dagDir` to the same location where `geth` automatically generates the DAG files, `miningcore` instantly uses the already generated DAG files and does not generate them again.
What's that sleep you're talking about?
From an operations perspective it often doesn't make sense to run geth and miningcore on the same host. And especially when you are deploying miningcore in a cloud environment like Kubernetes, AWS ECS or whatever fancy solution there is, having miningcore generate that DAG at each start simply doesn't allow you to run miningcore at scale. This has been a thorn in my side for some time.
Also I think miningcore will generate DAGs anyway: it has a future cache which stores the DAG for the next epoch, and I don't think it can pick that up from a pregenerated file. While mc does that, one of your cores kinda goes to waste.
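To put numbers on the epoch handling, here's a minimal Go sketch of the epoch math (my own illustration, not miningcore's actual code). With etchash's 60000-block epochs, the block height from the log further down lands in epoch 135, and the future cache would pre-build epoch 136:

```go
package main

import "fmt"

// calcEpoch is my own illustration of the epoch math, not miningcore's code.
func calcEpoch(height, epochLength uint64) uint64 {
	return height / epochLength
}

func main() {
	const epochLength uint64 = 60000  // etchash (ECIP-1099) epoch length, per the log below
	height := uint64(8105053)         // block height that appears in the log below

	current := calcEpoch(height, epochLength) // 135, matching "epoch 135" in the log
	fmt.Println("current epoch:", current)
	fmt.Println("future epoch:", current+1) // the epoch the future cache pre-builds
}
```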
You are definitely right about the different architectures. I don't generate enough money yet to even think about running at that scale, where you can afford a server for every single use case: `nodes`, `webserver`, `stratum-pool`, `database` (master+slave).
Also, `geth` always generates two DAG files, for the current and future epochs, and trust me, `miningcore` picks them up right away since `geth` names these files exactly like the code inside `miningcore` expects. I tested it multiple times by deleting the files and restarting the node:
ls -allh /home/ceedii/.etchash/
total 4.2G
drwxr-xr-x 2 ceedii ceedii 4.0K Feb 1 14:38 .
drwxr-xr-x 98 ceedii ceedii 4.0K Jan 31 11:34 ..
-rw-r--r-- 1 ceedii ceedii 2.1G Feb 1 14:38 full-R23-12d4e805cd0b1eb4
-rw-r--r-- 1 ceedii ceedii 2.1G Jan 28 02:43 full-R23-abcd0f131d37fb4c
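As far as I can tell from geth's source, those filenames follow `full-R<revision>-<first 8 bytes of the epoch seed hash>`. Here is a rough Go sketch of the plain-ethash seed derivation (etchash/ECIP-1099 adjusts which epoch feeds the seed, so treat this as illustrative only):

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

// seedHash iterates keccak256 once per epoch over a zeroed 32-byte seed,
// as in plain ethash. etchash (ECIP-1099) derives the seed from the
// equivalent pre-fork epoch, so this is only an approximation of the scheme.
func seedHash(epoch uint64) []byte {
	seed := make([]byte, 32)
	for i := uint64(0); i < epoch; i++ {
		h := sha3.NewLegacyKeccak256()
		h.Write(seed)
		seed = h.Sum(nil)
	}
	return seed
}

func main() {
	// geth's DAG file name: "full-R<rev>-<seed[:8] hex>" (plus an endianness
	// suffix on big-endian systems); the revision was 23 at the time.
	const algorithmRevision = 23
	fmt.Printf("full-R%d-%x\n", algorithmRevision, seedHash(135)[:8])
}
```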
So far I have never run into DAG generation, despite what the DEBUG info says:
[2023-02-01 14:39:20.1673] [I] [etc1] All daemons online
[2023-02-01 14:39:20.1673] [I] [Core] API Access to /metrics restricted to 127.0.0.1,::1,::ffff:127.0.0.1
[2023-02-01 14:39:20.1736] [D] [MessageBus] Listening to Miningcore.Notifications.Messages.BlockFoundNotification:
[2023-02-01 14:39:20.1736] [I] [etc1] All daemons synched with blockchain
[2023-02-01 14:39:20.1736] [D] [MessageBus] Listening to Miningcore.Notifications.Messages.BlockUnlockedNotification:
[2023-02-01 14:39:20.1736] [D] [MessageBus] Listening to Miningcore.Notifications.Messages.BlockConfirmationProgressNotification:
[2023-02-01 14:39:20.1736] [D] [MessageBus] Listening to Miningcore.Notifications.Messages.NewChainHeightNotification:
[2023-02-01 14:39:20.1761] [D] [MessageBus] Listening to Miningcore.Notifications.Messages.PaymentNotification:
[2023-02-01 14:39:20.1761] [D] [MessageBus] Listening to Miningcore.Notifications.Messages.HashrateNotification:
[2023-02-01 14:39:20.2055] [I] [etc1] Loading current DAG ...
[2023-02-01 14:39:20.2089] [D] [etc1] Epoch length used: 60000
[2023-02-01 14:39:20.2089] [I] [etc1] No pre-generated DAG available, creating new for epoch 135
[2023-02-01 14:39:20.2089] [I] [etc1] Generating DAG for epoch 135
[2023-02-01 14:39:20.2089] [D] [etc1] Epoch length used: 60000
[2023-02-01 14:39:20.2423] [I] [Core] API access limited to 30 requests per 1s, except from 192.168.1.4
[2023-02-01 14:39:20.2512] [I] [Lifetime] Now listening on: http://0.0.0.0:4000
[2023-02-01 14:39:20.2512] [I] [Lifetime] Application started. Press Ctrl+C to shut down.
[2023-02-01 14:39:20.2512] [I] [Lifetime] Hosting environment: Production
[2023-02-01 14:39:20.2512] [I] [Lifetime] Content root path: /home/ceedii/miningcore/build/
[2023-02-01 14:39:21.1148] [I] [etc1] Done generating DAG for epoch 135 after 00:00:00.9041033
[2023-02-01 14:39:21.1178] [I] [etc1] Loaded current DAG
[2023-02-01 14:39:21.1289] [I] [etc1] Job Manager Online
[2023-02-01 14:39:21.2805] [I] [etc1] New work at height 8105053 and header 0xc79f231f859b65d6dccf127dbd283617d797154716740b7a93ba04712ea17e5b via [POLL]
[2023-02-01 14:39:21.2805] [I] [etc1] Broadcasting job 00000001
[2023-02-01 14:39:21.2902] [D] [etc1] Loading pool stats
[2023-02-01 14:39:21.3582] [I] [etc1] Pool Online
[2023-02-01 14:39:21.3582] [I] [etc1]
Mining Pool: etc1
Coin Type: ETC [ETC]
Network Connected: Mordor-Mordor
Detected Reward Type: POW
Current Block Height: 8105053
Current Connect Peers: 1
Network Difficulty: 283.02K
Network Hash Rate: 18.95 KH/s
Stratum Port(s): 3394, 3395
Pool Fee: 0.1%
[2023-02-01 14:39:21.3657] [I] [etc1] Stratum ports 0.0.0.0:3394, 0.0.0.0:3395 online
[2023-02-01 14:39:32.1512] [I] [etc1] New work at height 8105054 and header 0x556e7d9a197adfb191c725c60b06fa319bbbac327bd0d002cd5c9843a1b6db8d via [POLL]
[2023-02-01 14:39:32.1565] [I] [etc1] Broadcasting job 00000002
[2023-02-01 14:39:35.0212] [I] [StatsRecorder] [etc1] Updating Statistics for pool
[2023-02-01 14:39:35.0852] [I] [StatsRecorder] Performing Stats GC
[2023-02-01 14:39:35.1398] [I] [StatsRecorder] [etc1] Reset performance stats for pool
[2023-02-01 14:39:35.1508] [I] [StatsRecorder] Stats GC complete
[2023-02-01 14:39:48.6614] [I] [etc1] New work at height 8105055 and header 0x54b2312392beaaf630bce265dd0f81581ee7375f83fd1bd32b5a709451d8b69e via [POLL]
[2023-02-01 14:39:48.6678] [I] [etc1] Broadcasting job 00000003
[2023-02-01 14:39:51.3906] [D] [etc1] Vardiff Idle Update pass begins
[2023-02-01 14:39:51.4000] [D] [etc1] Vardiff Idle Update pass ends
Yeah, hate to say it but MC's Dag always sucked 👎
Really need some consolidated feedback on this one as I currently lack the infrastructure to give the PR the thorough testing it deserves.
I run miningcore on Docker Swarm, on a distributed architecture, and I used the light-cache branch of @jon4hz to deploy my ETC pool. It was a headache to set up a shared volume for the DAG between nodes, and the fact is open-etc-pool was easier to deploy as it uses a light cache. With this feature, I can stick with miningcore :)
I haven't had time to give it a try, I'm swamped with my daily job sadly, but I will definitely give it a go as soon as possible :)
@Censseo Even without that feature, I could not even think about using open-etc-pool; I just can't stand the `nodejs` realm lol. But very pleased to hear @jon4hz's solution is working properly for you :rocket:
I can understand that, but open-etc-pool is written in Go 😂 Anyway, so far so good: no errors from the miningcore side since the launch of the pool, but still no block ^^'
lol my bad, it's `conceal` which uses `nodejs` lol

Why are you not testing on ETC Mordor? It will be way easier to find blocks and find issues lol. Mainnet is really brutal, you will need a lot of hash-rate power lol
I have been testing the code on `Mordor` and so far so good. I will continue and report if I meet any issues :+1:
Sorry if I wasn't clear, I obviously tested on Mordor before going to mainnet 😂 I hate surprises in PROD haha. But yeah, for the moment our hashrate is really low due to the low profitability of the coin here in France and the cost of electricity, so it might take time for our hashrate to take off.
I have ETC up and running. I tested it for 4 hours with 0.25 TH/s to see how it went. Got two blocks with no bugs or errors. The other thing I noticed is that payments are processing properly now as well.
I checked: ethash works, but the ping jumps a lot. On my server the delay should be 12 ms, but with the light cache it's min 18 ms / max 35 ms.
Hmm, that's odd. I can't think of anything I changed that would affect general latency. How did you measure the latency? Simply ICMP?
It's just from looking at the miner.
It would be interesting to see how the metrics compare. The output from the miner alone isn't really a reliable indicator because it also contains network latency, etc.
But generally speaking, I think it makes sense that hashing takes a little longer: from what I understand, the light cache generates the required data on the fly while the full DAG has everything precalculated. A few ms should be OK imo, but not too much :sweat_smile:.
This is all of course not very important; it still works fine, with no need to wait for the generation of the DAG. I use https://github.com/sencha-dev/powkit in the open-ethereum-pool, and there, with the first solution, two cache files like `.powcache/cache-ETC-R23-067cd4150c639af0` (about 54 MB) are generated and shares are checked against them.
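Verifying a share against powkit's light cache looks roughly like this (a sketch based on the benchmark further down in this thread; the `Compute` return values are my assumption about powkit's public API):

```go
package main

import (
	"encoding/hex"
	"fmt"

	"github.com/sencha-dev/powkit/ethash"
)

func main() {
	// Light-cache verification: no full DAG on disk; the ~54 MB cache is
	// built once per epoch and each share is computed on the fly.
	client := ethash.NewEthereum()

	headerHash, _ := hex.DecodeString("5fc898f16035bf5ac9c6d9077ae1e3d5fc1ecc3c9fd5bee8bb00e810fdacbaa0")
	height := uint64(60000)
	nonce := uint64(0x50377003e5d830ca)

	// Assumption: Compute returns the mix hash and the final digest; the
	// digest is what gets compared against the share target.
	mix, digest := client.Compute(headerHash, height, nonce)
	fmt.Printf("mix: %x\ndigest: %x\n", mix, digest)
}
```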
I had some spare time to run a few benchmarks and the results are quite interesting. First, some disclosure: this was the first time I did benchmarks on dotnet, so please take a look and correct me if I made any mistakes.
I tested the following things: miningcore using the full DAG, miningcore using the light cache, and go-etchash.

The fastest method (by far) was miningcore using the full DAG. Miningcore with the light cache and go-etchash were slower; however, the miningcore light cache implementation was slightly faster than go-etchash.
The table below shows the average time it took to calculate the hash.
| MC Dag | MC Light | go-etchash |
|---|---|---|
| 1.637 μs | 760.9 μs | 865.177 μs |
Miningcore.Tests.Benchmarks.Crypto.EthashBenchmarks-report_full.csv
Miningcore.Tests.Benchmarks.Crypto.EthashBenchmarks-report_light.csv
goos: linux
goarch: amd64
pkg: github.com/etclabscore/go-etchash
cpu: AMD Ryzen 9 5950X 16-Core Processor
BenchmarkLight
BenchmarkLight-32 1400 865177 ns/op 18825 B/op 270 allocs/op
PASS
ok github.com/etclabscore/go-etchash 1.311s
Miningcore DAG: https://github.com/jon4hz/miningcore-foss/commit/72d6ec93a465108338942c866d90c811a02c43d6
Miningcore Light Cache: https://github.com/oliverw/miningcore/pull/1608/commits/635ae9d00deb7a36b1d9b38d11941234f86358f3
Go-etchash:
// This runs inside the go-etchash package: calcEpochLength, calcEpoch,
// datasetSize and the Light cache internals are unexported.
func BenchmarkLight(b *testing.B) {
	testHeight := uint64(60000)
	testHash := common.HexToHash("5fc898f16035bf5ac9c6d9077ae1e3d5fc1ecc3c9fd5bee8bb00e810fdacbaa0")
	testNonce := uint64(0x50377003e5d830ca)

	var ecip1099FBlockClassic uint64 = 11700000
	hasher := New(&ecip1099FBlockClassic, nil)

	for i := 0; i < b.N; i++ {
		epochLength := calcEpochLength(testHeight, hasher.Light.ecip1099FBlock)
		epoch := calcEpoch(testHeight, epochLength)
		cache := hasher.Light.getCache(testHeight)
		dagSize := datasetSize(epoch)
		_, _ = cache.compute(uint64(dagSize), testHash, testNonce)
	}
}
As you can see, using the light cache is quite a bit slower than using the full DAG. However, this light cache implementation is still a bit faster than the implementation most Ethereum pools use - at least the ones based on open-ethereum-pool.
When I was operating an Ethereum pool, the DAG generation was always extremely annoying: it took ages on a single core and effectively prevented miningcore from scaling horizontally, because a new instance took ~30 min to start up.
That's why I'd prefer having only the light cache, even if it means sacrificing some performance.
What's your opinion on this?
And last but not least, what do you think about having both options in miningcore? It should be possible to add a config option so the user can choose whether to generate the full DAG or only use the light cache, depending on their needs.
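For illustration, the toggle could be as simple as a pool-config flag - `fullDag` here is a purely hypothetical option name, whatever the PR settles on:

```json
{
  "pools": [{
    "id": "etc1",
    "fullDag": false
  }]
}
```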
I also tested the library suggested by @Konstantin35, and it's approximately the same speed as go-etchash.
// This runs inside powkit's ethash package; testutil is its test helper.
func BenchmarkLight(b *testing.B) {
	testHeight := uint64(60000)
	testHash := testutil.MustDecodeHex("5fc898f16035bf5ac9c6d9077ae1e3d5fc1ecc3c9fd5bee8bb00e810fdacbaa0")
	testNonce := uint64(0x50377003e5d830ca)

	client := NewEthereum()

	for i := 0; i < b.N; i++ {
		client.Compute(testHash, testHeight, testNonce)
	}
}
goos: linux
goarch: amd64
pkg: github.com/sencha-dev/powkit/ethash
cpu: AMD Ryzen 9 5950X 16-Core Processor
BenchmarkLight
BenchmarkLight-32 1368 892694 ns/op 19222 B/op 271 allocs/op
PASS
ok github.com/sencha-dev/powkit/ethash 1.317s
I think you should just let the user decide if they want to use the full DAG or the light DAG. Having more options is always better from a usability standpoint. In my scenario, for example, with `miningcore` and the `nodes` running on the same server (the nodes running on dedicated SSDs), the full DAG is just the perfect choice. So I totally agree that `miningcore` should provide both choices :)
True, it depends on the infrastructure of the pool; some just run good old servers with node and pool on the same machine, so they can use the DAG generated by the node. Good job on the benchmark, I have to admit I was too lazy to do it on the fly as I was committed to other things. Interesting results indeed, but I am ready to let go of some performance for the scaling, and I think these performance issues shouldn't be a problem for most, unless you get into the top 3 pools with major workloads.
I went through this code and attempted some of my own tests. I'm confused about how you get geth to generate the DAG file without the geth node having to allocate a useless thread to start mining.
It's because currently in production I'm still using the old legacy `ethash` code from `miningcore`, not the evolution from @jon4hz, which is working fine in my `testnet` environment so far.

But in my situation, with `miningcore` and `geth` running on the same server (nodes running on a dedicated SSD), the old code works just fine - https://github.com/oliverw/miningcore/discussions/1586 - `geth` actually generates the `DAG` files, and since I pointed the cache dir folder option to that location, `miningcore` does not generate the files because they already exist.
When you run geth, to get it to generate the DAG file you have to run `miner.start(1)`, which wastes a CPU core while the node attempts to CPU mine... unless you know another command that I do not know.
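For reference, that console route is just the following (a sketch; `miner.stop()` frees the core again once the DAG files are on disk):

```
$ geth attach     # attaches to the running node's console over IPC
> miner.start(1)  // one CPU mining thread; this is what triggers DAG generation
> miner.stop()    // stop mining once the DAG files exist
```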
I think you should just let the user decides if she/he wants to use the
full DAG
or thelight DAG
. Having more options is always better on an usability stand-point. In my case scenario for example:miningcore
andnodes
running on the same server, the nodes are just running on dedicated SSDs, thefull DAG
is just the perfect choice. So i'm totally agree thatminingcore
should provide both choices :)i went through this code and attempted some of my own tests. I'm confused on how you get geth to generate the dag file without the geth node having to allocate a useless thread to start mining.
It's because currently in production, i'm still using the old legacy
ethash
code fromminingcore
not the evolution from @jon4hz, which is working fine on mytestnet
environment so far. But in my situation,miningcore
andgeth
running on the same server (nodes running on a dedicated ssd), the old code works just fine - #1586 -geth
actually generates theDAG
files and i pointed the cache dir folder option to that location,miningcore
does not generate the files since they already exist.when you run geth, to get geth to generate the dag file you have to run
miner.start(1)
. which wastes a cpu while the node attempts to cpu mine... unless you know another command that i do not know.
I just run the following command-line:
geth --mordor --config /home/ceedii/.ethereum/mordor.toml --http --mine --miner.threads 1 --unlock 0x421Afb2ce225D3A2d3DD6e63Fe57E124B40e20Af --password /home/ceedii/.ethereum/mordor/help/pool.password --allow-insecure-unlock --verbosity 4 --log.debug > /home/ceedii/.ethereum/mordor.log 2>&1
And my `mordor.toml` looks something like this:
# Note: this config doesn't contain the genesis block.
[Eth]
NetworkId = 7
SyncMode = "snap"
EthDiscoveryURLs = ["enrtree://AJE62Q4DUX4QMMXEHCSSCSC65TDHZYSMONSD64P3WULVLSF6MRQ3K@les.mordor.blockd.info"]
SnapDiscoveryURLs = ["enrtree://AJE62Q4DUX4QMMXEHCSSCSC65TDHZYSMONSD64P3WULVLSF6MRQ3K@les.mordor.blockd.info"]
NoPruning = false
NoPrefetch = false
TxLookupLimit = 2350000
LightPeers = 100
UltraLightFraction = 75
DatabaseCache = 512
DatabaseFreezer = ""
TrieCleanCache = 153
TrieCleanCacheJournal = "triecache"
TrieCleanCacheRejournal = 3600000000000
TrieDirtyCache = 256
TrieTimeout = 3600000000000
SnapshotCache = 102
Preimages = false
EnablePreimageRecording = false
EWASMInterpreter = ""
EVMInterpreter = ""
RPCGasCap = 50000000
RPCEVMTimeout = 5000000000
RPCTxFeeCap = 1e+00
[Eth.Miner]
Etherbase = "0x421afb2ce225d3a2d3dd6e63fe57e124b40e20af"
GasFloor = 0
GasCeil = 30000000
GasPrice = 1000000000
Recommit = 3000000000
Noverify = false
[Eth.Ethash]
CacheDir = "etchash"
CachesInMem = 2
CachesOnDisk = 3
CachesLockMmap = false
DatasetDir = "/home/ceedii/.etchash"
DatasetsInMem = 1
DatasetsOnDisk = 2
DatasetsLockMmap = false
PowMode = 0
NotifyFull = false
[Eth.TxPool]
Locals = []
NoLocals = false
Journal = "transactions.rlp"
Rejournal = 3600000000000
PriceLimit = 1
PriceBump = 10
AccountSlots = 16
GlobalSlots = 5120
AccountQueue = 64
GlobalQueue = 1024
Lifetime = 10800000000000
[Eth.GPO]
Blocks = 2
Percentile = 60
MaxHeaderHistory = 300
MaxBlockHistory = 5
MaxPrice = 500000000000
IgnorePrice = 2
[Node]
UserIdent = "0x421Afb2ce225D3A2d3DD6e63Fe57E124B40e20Af"
DataDir = "/home/ceedii/.ethereum/mordor"
SmartCardDaemonPath = "/run/pcscd/pcscd.comm"
IPCPath = "geth.ipc"
HTTPHost = ""
HTTPPort = 8545
HTTPVirtualHosts = ["localhost"]
HTTPModules = ["net", "web3", "eth"]
AuthAddr = "localhost"
AuthPort = 8551
AuthVirtualHosts = ["localhost"]
WSHost = ""
WSPort = 8546
WSModules = ["net", "web3", "eth"]
GraphQLVirtualHosts = ["localhost"]
[Node.P2P]
MaxPeers = 25
NoDiscovery = true
DiscoveryV5 = true
BootstrapNodes = ["enode://534d18fd46c5cd5ba48a68250c47cea27a1376869755ed631c94b91386328039eb607cf10dd8d0aa173f5ec21e3fb45c5d7a7aa904f97bc2557e9cb4ccc703f1@51.158.190.99:30303", "enode://15b6ae4e9e18772f297c90d83645b0fbdb56667ce2d747d6d575b21d7b60c2d3cd52b11dec24e418438caf80ddc433232b3685320ed5d0e768e3972596385bfc@51.158.191.43:41235", "enode://8fa15f5012ac3c47619147220b7772fcc5db0cb7fd132b5d196e7ccacb166ac1fcf83be1dace6cd288e288a85e032423b6e7e9e57f479fe7373edea045caa56b@176.9.51.216:31355", "enode://34c14141b79652afc334dcd2ba4d8047946246b2310dc8e45737ebe3e6f15f9279ca4702b90bc5be12929f6194e2c3ce19a837b7fec7ebffcee9e9fe4693b504@176.9.51.216:31365"]
BootstrapNodesV5 = ["enode://534d18fd46c5cd5ba48a68250c47cea27a1376869755ed631c94b91386328039eb607cf10dd8d0aa173f5ec21e3fb45c5d7a7aa904f97bc2557e9cb4ccc703f1@51.158.190.99:30303", "enode://15b6ae4e9e18772f297c90d83645b0fbdb56667ce2d747d6d575b21d7b60c2d3cd52b11dec24e418438caf80ddc433232b3685320ed5d0e768e3972596385bfc@51.158.191.43:41235", "enode://8fa15f5012ac3c47619147220b7772fcc5db0cb7fd132b5d196e7ccacb166ac1fcf83be1dace6cd288e288a85e032423b6e7e9e57f479fe7373edea045caa56b@176.9.51.216:31355", "enode://34c14141b79652afc334dcd2ba4d8047946246b2310dc8e45737ebe3e6f15f9279ca4702b90bc5be12929f6194e2c3ce19a837b7fec7ebffcee9e9fe4693b504@176.9.51.216:31365"]
StaticNodes = []
TrustedNodes = []
ListenAddr = ":30303"
EnableMsgEvents = false
[Node.HTTPTimeouts]
ReadTimeout = 30000000000
WriteTimeout = 30000000000
IdleTimeout = 120000000000
[Metrics]
HTTP = "127.0.0.1"
Port = 6060
InfluxDBEndpoint = "http://localhost:8086"
InfluxDBDatabase = "geth"
InfluxDBUsername = "test"
InfluxDBPassword = "test"
InfluxDBTags = "host=localhost"
InfluxDBToken = "test"
InfluxDBBucket = "geth"
InfluxDBOrganization = "geth"
I imagine the lines partially responsible for the DAG files are the following:
[Eth.Ethash]
CacheDir = "etchash"
CachesInMem = 2
CachesOnDisk = 3
CachesLockMmap = false
DatasetDir = "/home/ceedii/.etchash"
DatasetsInMem = 1
DatasetsOnDisk = 2
DatasetsLockMmap = false
PowMode = 0
NotifyFull = false
I just point `miningcore` to the same directory and it does the trick.
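Concretely, the pool's `dagDir` just has to match geth's `DatasetDir` - a minimal fragment, assuming the usual miningcore JSON pool config:

```json
{
  "pools": [{
    "id": "etc1",
    "dagDir": "/home/ceedii/.etchash"
  }]
}
```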
Starting the geth client with `--miner.threads 1` starts CPU mining on the node, and you lose a CPU core to the process. How disappointing.
Running 19 different stratums around the world, the latency for a submitted hash to reach the light client, which then has to send it to a light server, which then has to put it on the chain within a 15-second block time, seems like a great way to get uncles. I could be wrong.
Hey, I'm just describing a use-case scenario here; I understand perfectly that there are countless different infrastructures out there lol. 19 different stratums all around, what a "baller" you are :fire: :rocket: Like I said before, I think both options should remain available in order to cover as many scenarios as possible.
Sorry, I didn't mean for it to come across as an attack. Not my intention at all. Thank you sir for your suggestions... I agree with you, both options should be available in the code.
No worries, it's all good. Have a great week ahead :+1:
Well, bad news for me with that PR in `core-geth`: https://github.com/etclabscore/core-geth/pull/499. I will not be able to enjoy the DAG files generated by core-geth anymore, since that commit - https://github.com/etclabscore/core-geth/pull/499/files#diff-81dd4f9d1a46a485d74affabbf00bcc61fd98a44bcf785a57d59d87ce50ddb93 - changes the DAG file naming, which no longer matches the naming inside `miningcore`. So `miningcore` will always regenerate the DAG files for `core-geth` > 1.12.8.
Hi again. This PR does not only abstract the ethash algos, it also makes DAG generation obsolete by using only a light cache. I did some research and this is the same behavior as open-ethereum-pool has. Initially I was a bit concerned that this might have a performance impact, but even the official Ethereum wiki mentions that the cache is enough for hash validation.
https://ethereum.org/en/developers/docs/consensus-mechanisms/pow/mining-algorithms/ethash/
The code was tested on mordor.
Related discussion: https://github.com/oliverw/miningcore/discussions/1586