JayDDee / cpuminer-opt

Optimized multi algo CPU miner

--max-diff option does not work as expected in solo mode #392

Closed YetAnotherRussian closed 1 year ago

YetAnotherRussian commented 1 year ago

Version is 3.21.1

cpuminer-avx2-sha-vaes.exe -t 8 --cpu-affinity 0x5555 -a algo -o http://127.0.0.1:54321 -u aaa -p bbb --max-diff=0.45

image

The option seems to partly work: mining stops on a high-diff job, but then it resumes.

I'm not sure if you have a solo test environment; I can make detailed logs if needed.

There's another micro issue nearby:

image

This demotivating block TTF at startup makes people close the miner and forget it rather than wait for correct stats :'( Maybe better to delay it a bit at startup, but I'm not sure.

JayDDee commented 1 year ago

Suppressing startup garbage is an optimization issue. I have chosen to ignore it rather than check it constantly and unnecessarily.

I'll look into the max-diff issue, it should be obvious in the code. Edit: I don't see a problem, netdiff for block 5732040 is 0.67, max-diff= 0.45, so it should pause.

There's other bad stuff going on, share counter, stats not available...

Good timing, I have another release planned.

YetAnotherRussian commented 1 year ago

Edit: I don't see a problem, netdiff for block 5732040 is 0.67, max-diff= 0.45, so it should pause.

There is no actual "full stop until net diff decreases to 0.45 or less". This is obvious just by CPU load. E.g. I've found #5732052 but should not.

JayDDee commented 1 year ago

There's a bug in myr-gr for CPUs below AVX2. It doesn't submit shares properly. It's been that way for a long time. That's why the stats are messed up. It also explains the obsolete "submitted by thread" log.

I'll take a look at the mechanics of max-diff.

Edit: It looks like only one thread is pausing, you can test this with -t 1. The other threads are in the loop apparently unaware and need a kick. New work solo is handled by the miner threads, stratum uses the stratum_thread to handle new work. This might explain why the problem appears to be only solo. The message isn't getting to the other threads. I need to dig deeper.

JayDDee commented 1 year ago

The max-diff problem appears to also affect max-temp and others. Try adding: if ( !state ) restart_threads(); at the end of cpu-miner.c:wanna_mine. That will kick the other miner threads. I can do some basic testing with stratum but I can't test solo.

The myr-gr fix involves replacing a block of code in algo/groestl/myr-groestl.c/scanhash_myriad. It should look the same as algo/groestl/groestl.c/scanhash_groestl:

   if ( hash[7] <= Htarg )
   if ( fulltest( hash, ptarget ) && !opt_benchmark )
   {
      pdata[19] = nonce;
      submit_solution( work, hash, mythr );
   }
   /* delete
   if ( hash[7] <= Htarg && fulltest( hash, ptarget ) )
   {
      pdata[19] = nonce;
      *hashes_done = pdata[19] - first_nonce;
      return 1;
   }
   */

I can test myr-gr if I can find a share at the pool, but with an unoptimized build it could take a long time.

Edit: Testing myr-gr is futile in a pool, share TTF is 9 hours with i9-9940x 28 threads.

YetAnotherRussian commented 1 year ago

Testing myr-gr is futile in a pool, share TTF is 9 hours with i9-9940x 28 threads.

Try stratum+tcp://pool.cryptopowered.club:1304 using wallet GfWkqzKQfQDMQxjwi5iJCDbzhsCNxGLKHr and pass x, share TTF is ~30sec @ 15Mh

JayDDee commented 1 year ago

Both problems have fixes. There was another problem with conditional mining that resulted in resuming after the initial 5 second pause without rechecking the condition (ie max-diff). It also affected max-temp and max-hashrate. I also added a complementary resume log.

Next release.

JayDDee commented 1 year ago

There is a secondary issue with max-diff and stratum because the stratum server will time out after 5 minutes with no shares and stop sending new blocks. The miner won't see new blocks with lower diff and will never resume, resulting in deadlock.

Adding the --stratum-keepalive option should prevent deadlock by resetting the stratum connection so it starts sending new blocks again.

GBT/getwork should not be affected.

FYI

YetAnotherRussian commented 1 year ago

FYI

Okay, but I personally see no reason to use max-diff with stratum. With stratum mining you are not tied to net diff; that's one of the points of the protocol itself. If the client doesn't like the stratum diff, better to use another port (with lower diff or vardiff) or another pool.

I did a trick with my private build of a GPU miner some years ago, where 2 instances mined together, one on a pool and one solo. The condition was: solo < maximum_acceptable_solo_diff_value < pool. It was all about saving pool fees. So only one instance was actually mining at any moment.

There is no min-diff option in cpuminer-opt though.

JayDDee commented 1 year ago

Okay, but I personally see no reason to use max-diff with stratum. With stratum mining you are not tied to net diff; that's one of the points of the protocol itself. If the client doesn't like the stratum diff, better to use another port (with lower diff or vardiff) or another pool.

I disagree, max-diff is based on net diff, stratum diff means nothing.

YetAnotherRussian commented 1 year ago

I disagree, max-diff is based on net diff, stratum diff means nothing.

net diff 500Ph => job diff 5Mh
net diff 500Gh => job diff 5Mh
net diff 50Mh => job diff 5Mh

I don't know why I should care about net diff... In solo mode TTF could vary between seconds and years :D

JayDDee commented 1 year ago

Stratum diff doesn't change profitability. When stratum diff changes the share value changes to offset the change in share rate. Net diff, block reward & exchange rate are the factors that determine profitability, whether mining solo or in a shared pool.

YetAnotherRussian commented 1 year ago

Stratum diff doesn't change profitability.

Theoretically. Practically, no shares in a month is zero profitability (or no block in a month, as you wish). Some guaranteed shares with low stratum diff in a month is non-zero profitability, if the pool is powerful enough to find blocks. Multiply that by your 10-100-1000 workers or rigs and things get even better. Not including the luck factor, of course (some guys were lucky enough to find an ETH or BTC block mining solo on a single GPU in our days).

Some pool ops do rob people with 3-5-10% fee :)

YetAnotherRussian commented 1 year ago

Today I also got this (I just don't want to file duplicate issues):

image

image

image

This is argon2d4096

JayDDee commented 1 year ago

That's #379 & #389. I put that code in to detect potential segfaults before they occur. It's interesting it didn't crash. This means the loop following the message was not optimized with AVX2, else it would have crashed. I assume this is on Windows with the prebuilt binaries using gcc-9. The 2 crash issues were using gcc-11 on Linux.

The interesting part is the message confirms that address is only aligned to 16 bytes, AVX2 requires 32, and would have crashed with autovectoring. Also interesting is target is the first element of work struct and both the struct and target are supposed to be aligned to 64 bytes, enough for AVX512. The big question is why isn't it?

struct work
{
   uint32_t target[8] __attribute__ ((aligned (64)));
   uint32_t data[48] __attribute__ ((aligned (64)));
   ...stuff deleted...
} __attribute__ ((aligned (64)));

YetAnotherRussian commented 1 year ago

I assume this is on Windows with the prebuilt binaries using gcc-9

Yes

With argon2d4096 can reproduce 100%

   "target": "00000fffff000000000000000000000000000000000000000000000000000000",
      "curtime": 1678398092,
      "noncerange": "00000000ffffffff",
      "sigoplimit": 1000000,
      "bits": "1e0fffff"
   },
   "id": 0
}
[2023-03-10 00:41:32] Misaligned work->target 000000000427EF10

Affects only cpuminer-avx2-sha, cpuminer-avx2-sha-vaes and cpuminer-avx2 builds. Simple avx build is OK.

Another interesting thing is that only avx2-sha build got yellow affinity colours :D

image

JayDDee commented 1 year ago

Regarding the misalignment problem...

It's too bad you can't compile, another missed opportunity. More testing is required.

Until then I can only assume it would have crashed for you if compiled with gcc-11. Prior to gcc-11 the compiler didn't vectorize this loop. But that means there may be 2 compiler bugs:

  1. alignment attribute not working, gcc-9 & 11,
  2. autovectoring without checking for adequate data alignment or ensuring adequate alignment, gcc-11

Bug 1 seems to be present in both versions. The crash only occurs with gcc-11 due to more aggressive autovectoring and bug 2.

JayDDee commented 1 year ago

The affinity issue was also previously reported, I assume it may be related to Windows CPU groups but I have nothing more than that at this time. Builds for older CPUs don't support Windows CPU groups and use the old method. CPU groups were never properly tested.

YetAnotherRussian commented 1 year ago

It's too bad you can't compile, another missed opportunity. More testing is required.

I can (and did before, in other issues) in the built-in Linux env. Setting up a gcc-11 build env natively on Windows will be... omg. There's something like this https://sourceforge.net/projects/gcc-win64/files/12.2.0/ but it may lead to re-writing of cpuminer-opt...

JayDDee commented 1 year ago

Affects only cpuminer-avx2-sha, cpuminer-avx2-sha-vaes and cpuminer-avx2 builds. Simple avx build is OK.

Statistically there is a 50% chance the address will be aligned in any build.

JayDDee commented 1 year ago

I don't know anything about that sourceforge link but the MSys/MingW procedure is straightforward. https://github.com/JayDDee/cpuminer-opt/wiki/Compiling-from-source Scroll down to the easy way. Installing MSys takes some time but once setup it's a breeze.

About the affinity issue, try to see what triggers it and what CPU and core counts are involved. Also, when the error occurs, are all threads in error or just some?

YetAnotherRussian commented 1 year ago

Installing MSys takes some time but once setup it's a breeze.

So is GCC v 12.2 (and it's libs) suitable?

JayDDee commented 1 year ago

So is GCC v 12.2 (and it's libs) suitable?

Yes, I use it. I think it's the default install now. You can also install gcc-11 and g++-11. Those 2 additional packages should get everything you need to compile with gcc-11. To use the non-default version you can set the following env vars before building...

$ export CC=gcc-11
$ export CXX=g++-11

I appreciate the effort. It makes more work for me too but I don't like mysteries.

If you get msys up and running here's a suggested test plan:

You'll have to watch for a randomly aligned pointer, i.e. no misaligned log. If a particular build has a naturally aligned address it is useless for testing. Try different AVX2 builds until you find one with the misaligned pointer. If you can't find one, particularly with gcc-12, it may be a sign of a fix. A newer gcc-11 may also have a fix. With 3 builds to choose from, if none of them are misaligned there's a 1 in 2**3 chance it's not fixed and is just luck.

  1. test with gcc-11. I expect a crash if the pointer is misaligned.
  2. test with gcc-12. Maybe it's fixed already.
  3. test with gcc-9 as a control. Expect no crash with misaligned pointer as initially reported.
JayDDee commented 1 year ago

Some other notes about affinity. I have a Windows10 with 8 thread CPU with cpu groups enabled and haven't seen this error. Did you configure CPU groups differently or do you have a setup, like dual socket, Windows Enterprise, NUMA, that may have a different default CPU group configuration? This is going beyond my knowledge so I don't know how far I can take it.

Edit: I'm looking for a debug log describing the number of cpu groups found but I don't see it in any of your posts. It requires -D.

Edit2: It appears there have been changes made to CPU groups in Win11. Are you using Win11? https://bitsum.com/general/the-64-core-threshold-processor-groups-and-windows/

YetAnotherRussian commented 1 year ago

My system is a single-CPU one, Win 10 Pro 21H1.

CPU and mb: [screenshots]

Nothing special (yes, that CPU is a real rarity, but is basically a 5700G with lower TDP, slightly lower multi-core performance and a bit better single-core performance - check here in case of any questions https://www.cpubenchmark.net/compare/4323vs4387/AMD-Ryzen-7-5700G-vs-AMD-Ryzen-7-5700GE).

Groups in cpuminer-opt: image

I see some changes here https://learn.microsoft.com/en-us/windows/win32/procthread/numa-support (scroll down to "Behavior starting with Windows 10 Build 20348" section), but mine is 19043

JayDDee commented 1 year ago

The debug log I'm looking for is still missing. It should be displayed before the cpu capabilities. If you get "Binding thread n to cpu n in group n" you should have gotten "Found n cpus in cpu group n". You get one but not the other. This should be something I can reproduce, I'll play with it a bit to see if I can.

Edit: I did a quick test and I reproduced the log problem but not the affinity error. It seems that's a separate problem. Do you have any insight into when the error occurs and when it doesn't? Your CPU isn't unusually big so I don't see an obvious reason why the error hasn't been seen before.

YetAnotherRussian commented 1 year ago

I don't see an obvious reason why the error hasn't been seen before

As for me, I do not usually brute-force an optimal build, so there's just no way I'd use the cpuminer-avx2-sha build if the cpuminer-avx2-sha-vaes one works okay.

The debug log I'm looking for is still missing. It should be displayed before the cpu capabilities. If you get "Binding thread n to cpu n in group n" you should have gotten "Found n cpus in cpu group n". You get one but not the other. This should be something I can reproduce, I'll play with it a bit to see if I can.

I've never ever seen "Found n cpus in cpu group n" even with "-D"

I think you mean this:

image

If it's not shown with such a CLI string, then something is not defined, it was stripped out by the compiler, or it's the log level.

image

I've compiled from git in Linux subsystem - no such message:

image

JayDDee commented 1 year ago

The missing log doesn't seem to be related to the affinity errors, it just provides more details about number of groups and group size. The error when setting affinity is the real problem, it hasn't been reported before.

JayDDee commented 1 year ago

I think we may be losing focus on the issues discussed here, there are so many. I'll summarise.

  1. max-diff broken, fix is ready
  2. myr-gr broken stats with old builds, fix is ready
  3. Misaligned work pointer using GBT, not reproducible, need more data
  4. Affinity not working on Windows with CPU groups enabled, not reproducible, need more data
  5. Missing debug log on Windows with CPU groups, reproduced, I will follow up.

3 & 4 are where I need help. I can't reproduce those errors.

YetAnotherRussian commented 1 year ago

3 is on a small pause, I cannot sync that wallet since today :( I got another one, but it should take up to a day to sync...

To reproduce 4, launch cpuminer-avx2-sha-vaes.exe with -t 8 --debug --cpu-affinity 0x5555 -a sha256d --benchmark and see:

image

Good.

Then, launch cpuminer-avx2-sha.exe with -t 8 --debug --cpu-affinity 0x5555 -a sha256d --benchmark and see:

image

Bad.

You won't need wallets, solo etc. for a bench. Are you able to reproduce on your Win version & env?

UPD: I'm currently syncing to get 3.

JayDDee commented 1 year ago

I have an update on the missing log. It's so early in main the command line hasn't been parsed yet so debug isn't set yet. Otherwise num-cpus & numcpugroups is correct. You can get the log by commenting out the opt-debug check.

I can't reproduce 4, I don't get affinity errors. I presume in your "bad" test the errors were displayed. The graphs suggest that. I assume all threads reported the error. With the same command line with 2 slightly different builds you get different results. What does the cpuminer-avx2 do? You can also try avx512, it won't get very far before crashing but should get the miner threads started.

You reproduced it with avx2-sha but not avx2-sha-vaes. That's a very subtle distinction, I don't see a connection. Can you test with the debug condition removed to force the log so we can see the group & cpu count when the problem occurs?

YetAnotherRussian commented 1 year ago

You reproduced it with avx2-sha but not avx2-sha-vaes. That's a very subtle distinction, I don't see a connection. Can you test with the debug condition removed to force the log so we can see the group & cpu count when the problem occurs?

I can't reproduce with any build other than avx2-sha (which fails on 100% of launches)

Without -D:

image

I got an idea to build @ Linux using your scripts, to get both builds.

JayDDee commented 1 year ago

Maybe a corrupt file? Try your own compile, download a different version. Never mind, you already suggested that.

YetAnotherRussian commented 1 year ago

AVX2 AES SHA: AMD Zen1

   CFLAGS="-march=znver1 $DEFAULT_CFLAGS" ./configure $CONFIGURE_ARGS
   cpuminer-avx2-sha

Linux build with "-O3 -march=znver1" is okay, but I cannot build with just "-march=znver1" as it is in your winbuild shell script. That's the problem:

image

If I drop "-O3", I get:

image

GCC is 9.3.0

If I set znver2, I get the same.

JayDDee commented 1 year ago

If you're using msys2 use build-msys2.sh as a guide. winbuild.sh is used only for the binary download package built on a Linux host. build-allarch.sh has a big selection of architecture options you can use.

The error when compiling below -O3 is known and has to do with loop unrolling, which turns the variable into an immediate. -O3 is required.

YetAnotherRussian commented 1 year ago

So, Linux build is OK. I've downloaded your win builds again, and got the same problem. Archive CRC check is OK.

image

Got it! Version 3.19.0 introduced this problem for the zen build, which was renamed to avx2-sha in later releases.

image

And, v3.18.2 is OK:

image

CPU load @ task manager confirms that affinity in v3.18.2 was correct in zen build.

Hope that helps.

JayDDee commented 1 year ago

Great work, I broke the build trying to support Alderlake and Zen3 with the same build. I can take it from here.

Just a note, build.sh won't include cpu groups, build-msys2.sh will do that. build-msys2.sh also has another option that may not be needed and can be removed if it compiles ok without it.

Edit: I have a fix for the missing log. Count the cpus and groups before parsing the command line and display the result afterward once opt_debug has been set.

JayDDee commented 1 year ago

The affinity issue can probably be closed, however, I'm curious how the build broke cpu groups. Which parameter was changed by the build and what was its value? This is not important, the crash is the priority now.

Edit: it should also wait for a new release with the updated cpu group logs.

YetAnotherRussian commented 1 year ago

I've just synced. Other coin, other wallet.

There's an error only when segwit info is logged.

image

JayDDee commented 1 year ago

Here are a couple of options I'm considering for the zen3 cpu group issue:

  1. Add a zen3 build to the package, that would bring the total to 9.
  2. Build the package without cpu group support, require users to compile it

Number 2 isn't so bad, anyone with more than 64 CPUs who requires cpu groups should be compiling their own anyway. The binary package is more attractive to the less serious miners and old tech.

Just saw your latest update. I need to go out for a while, will look into it upon return, looks interesting.

YetAnotherRussian commented 1 year ago

image

Please don't look at "segwit not enabled", it's me who wrote it...

JayDDee commented 1 year ago

Can you clarify? It looks like a segfault when built with gcc-9, that's unexpected, the loop shouldn't be optimized. Are you saying that it crashes only with segwit enabled or that the misaligned address is only when segwit is enabled?

YetAnotherRussian commented 1 year ago

Are you saying that it crashes only with segwit enabled or that the misaligned address is only when segwit is enabled?

First of all, I've disabled segwit to check it, and put that message in code. Segwit status doesn't affect this crash.

That's the only gcc version I currently have. Got some plans to install Ubuntu 22 later tomorrow, and the latest gcc with it.

Which version do you use for public win builds?

JayDDee commented 1 year ago

Are you testing on native linux or Msys2, or maybe WSL? You can install multiple versions of gcc, no need to install a new OS.

The binaries package uses gcc-9 and your initial post of the misalignment was with gcc-9, displayed a misaligned log, but didn't crash. Now with gcc-9 compiled by you it crashes. That was unexpected. Maybe your compile resulted in optimizing the for loop but the prebuilt binaries didn't.

I couldn't find any connection to segwit so that idea is dropped.

Edit: The following is out of date, the pointer is not being overwritten, it's just a different pointer that was dynamically allocated by the workio thread. There's no guarantee it's aligned. The tests below in this comment may confirm it. See further updates below for the latest.


One thought is that the target pointer is being overwritten somewhere. Maybe the compiler has set it aligned but it gets overwritten with a misaligned pointer. I'll have to do an exhaustive search of the code. You can check it by adding the following in main near the top before anything else happens:

if ( (uint64_t)(g_work->target) % 32 )
   applog( LOG_ERR, "Startup misaligned g_work->target %p", g_work->target );
else
   applog( LOG_ERR, "startup ALIGNED g_work->target %p", g_work->target );

Corrected: only g_work is visible in main, each miner thread has a local work. You can put the same code using work.target in the miner threads near the top.

Second correction: miner threads use work.target, not "->".

Third: add thread id to miner thread log:

if ( (uint64_t)(work.target) % 32 )
   applog( LOG_ERR, "Thread %d misaligned work.target %p", thr_id, work.target );
else
   applog( LOG_ERR, "Thread %d ALIGNED work.target %p", thr_id, work.target );

Update: I searched the code and found nowhere that the target pointer is overwritten. g_work and each instance of work will have their own pointer but they should all be aligned. The target referenced in gbt_work_decode should be traced back to one of those pointers. Testing with one thread should reduce noise.

Update: The different behaviour using the same compiler may be due to the build architecture. I assume you compiled for zen3 but the prebuilt binaries (the ones that broke cpu groups on zen3) are built using a generic AVX2 architecture. Maybe that generic architecture doesn't optimize the for loop and is the trigger between the misaligned log and the crash. Unfortunately that doesn't help with the root issue.

JayDDee commented 1 year ago

I've made a decision about including cpu group support in the Windows binaries package. I won't.

CPU architectures have become too many with the array of features and with Intel and AMD requiring different builds with essentially the same features. The problem likely affects zen4/avx512-sha-vaes also.

The binaries are intended more for casual miners, more professional miners should want to compile themselves for their own specific architecture.

The binaries without cpu groups will support CPUs up to 64 threads. Desktop CPUs currently top out at 32 threads. Higher than that requires Xeon or Threadripper.

JayDDee commented 1 year ago

I may have found the bug causing the misalignment. The copy of work read by gbt_work_decode is dynamically allocated therefore may not be aligned. I need to dig deeper.

I have a fix to test. The crash traceback is:

gbt_work_decode
get_upstream_work
workio_get_work

workio_get_work runs under the workio thread, therefore doesn't use g_work or the miner threads' copy of work. get_upstream_work defines ret_work as a pointer that gets set by calloc. There is no guarantee ret_work or ret_work->target is aligned.

The proposed fix is to define a local stack-based ret_work, make the syntax adjustment (add & to ret_work references) and delete the pointer def, calloc & free.

Edit: calloc zeros the buffer, it might be prudent to initialize ret_work to zero in case the zeroing is relied on to avoid explicitly initializing some fields. Added below.

This might do it.

static bool workio_get_work( struct workio_cmd *wc, CURL *curl )
{
   struct work ret_work = {0};
// struct work *ret_work;
   int failures = 0;

// ret_work = (struct work*) calloc( 1, sizeof(*ret_work) );
// if ( !ret_work )
//    return false;

   /* obtain new work from bitcoin via JSON-RPC */
   while ( !get_upstream_work( curl, &ret_work ) )
   {
      if ( unlikely( ( opt_retries >= 0 ) && ( ++failures > opt_retries ) ) )
      {
         applog( LOG_ERR, "json_rpc_call failed, terminating workio thread" );
//       free( ret_work );
         return false;
      }
      /* pause, then restart work-request loop */
      applog( LOG_ERR, "json_rpc_call failed, retry after %d seconds", opt_fail_pause );
      sleep( opt_fail_pause );
   }

   /* send work to requesting thread */
   tq_push( wc->thr->q, &ret_work );  // ignore return value
// if ( !tq_push( wc->thr->q, ret_work ) )
//    free( ret_work );

   return true;
}

I'm not entirely convinced this will work. There are other instances of dynamically allocated work structs. The problem seems to be that get_upstream_work manipulates the data, which requires proper alignment. I'm hopeful the other structs treat it as a plain buffer and don't do any processing until it ends up in g_work which is guaranteed to be aligned. The proposed fix should solve the problem for GBT and getwork. I'll take a look for any other instances of dynamic allocation of work that may cause problems.

Edit: my confidence has increased. get_work allocs work_heap but simply copies the data to g_work without any processing.

Edit: I'm not so confident anymore. ret_work is only freed if tq_push returns false. Unless that's a memory leak, tq_push will free it before returning. I don't know what happens if free is called with a pointer to a local.

Edit: _mm_malloc should work. Add #include to cpu-miner.c then replace calloc with:

ret_work = (struct work*) _mm_malloc( sizeof(*ret_work), 32 );
memset( ret_work, 0, sizeof(*ret_work) );

and leave the rest of the function the same. That should be completely transparent but will guarantee 32 byte alignment.

YetAnotherRussian commented 1 year ago

Edit: _mm_malloc should work. Add #include to cpu-miner.c then replace calloc with:

retwork = (struct work) _mm_malloc( ret_work, sizeof (ret_work), 32 );`memset( ret_work, 0, sizeof (*ret_work) )

I see posix_memalign uses 3 arguments, but others like aligned_alloc or _mm_malloc use 2. What should I do?

JayDDee commented 1 year ago

I haven't used the others but I use _mm_malloc in other places in cpuminer-opt, I suggest sticking with that.

OOPS, I messed up the example, take ret_work out, will correct above.

YetAnotherRussian commented 1 year ago

There are no other places with 3 arguments instead of 2. This doesn't compile, of course.

UPD. I see the update :)

Seems to be fixed, at least for yescrypt and argon2d

image

YetAnotherRussian commented 1 year ago

I forgot to tell you about this before as well:

image

If you get several algos in the GBT response, maybe it's better not to calculate net block TTF? It will never be correct, because networkhashps is a total value, not the net hashrate of the current (user-selected) PoW algo.

image This is not correct, and won't be.

You added minotaurx a while ago, so Avian uses 2 PoW algos as well.

It's safe to remove, I think; this info is useless as the network targets block time, not an abstract TTF.

JayDDee commented 1 year ago

That's great! It's bed time for me now, but I'll sleep on it in case anything else comes to mind.

I'm not sure exactly how multiple algos work. If the coin doesn't provide the hashrate for the "pow_algo" there isn't very much I can do. It would be extra work to parse the message for multiple algos just to suppress the net hashrate from the log.

That TTF log looks suspiciously like garbage at startup. I already explained that. It takes a few samples for the hashrate to stabilize. Actually that's not it, it's because the TTF was calculated based on the total 50 GH/s. Still, there isn't much I can do about it.

Edit: someone mining a coin solo should know if it supports multiple algorithms and also know that networkhashps and anything derived from it is unreliable. IMO.