MoneroOcean / xmrig

Monero (rx/0, rx/wow, rx/loki, defyx, rx/arq, rx/sfx, rx/keva, cn/0, cn/1, cn/2, cn/r, cn/fast, cn/half, cn/xao, cn/rto, cn/rwz, cn/zls, cn/double, cn/gpu, cn-lite/0, cn-lite/1, cn-heavy/0, cn-heavy/tube, cn-heavy/xhv, cn-pico, cn-pico/tlo, argon2/chukwa, argon2/wrkz, astrobwt) CPU/GPU miner
https://moneroocean.stream
GNU General Public License v3.0

RTM Benchmarking #86

Open johnnyde94 opened 2 years ago

johnnyde94 commented 2 years ago

One question I have is about the benchmarking for MO: if it's only ~30 seconds, how can it possibly get a good idea of RTM hashrates? I'm going to ask the devs and see if anyone has thought of this lol.

Spudz76 commented 2 years ago

If you want, you can turn bench-algo-time up to something like 3600 and waste a lot of time getting a more accurate average, if you prefer.

Although I suppose there could be some better hopping-blend of the available algos to get a quicker estimate. It would still take something like 5 minutes; otherwise it would just appear to be thorough (but not actually be more accurate).

Better to just hand-update your algo-perf entry with a "reality" average rate taken from running paid jobs for a while. In general, I also wouldn't hate it if the miner continually updated the running average from actual real-world processing. I do some hand-updating on most of my rigs to make the balance more accurate.
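For reference, a rough sketch of what that hand-edit could look like in the MO fork's config.json. The algo-perf and bench-algo-time keys are the ones named in this thread; the algo names and numbers below are purely illustrative:

```json
{
  "algo-perf": {
    "rx/0": 6800.0,
    "cn/r": 1100.0,
    "ghostrider": 4500.0
  },
  "bench-algo-time": 20
}
```

With a long-run average hand-entered like this, the profit switcher compares against reality instead of a lucky (or unlucky) 30-second sample.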

But we're also talking about when it cuts over to some other algorithm, and that would likely be "close enough" if they are priced nearby. Does it actually matter if it's technically on the one that is worth 0.00000001 more? Or at some point does it make no noticeable difference, as long as it's not KH/s off?

Spudz76 commented 2 years ago

Checked some of my rigs:

I think mainly it's always going to be a moving target that is tough to predict or assign any single performance number to... some rigs are pretty accurate, some are a ways off, but always within the range of the total average. All the shorter windowed averages, even 15m, are wildly variable.

The only other benchmark method I could think of would be to log the performance of each of the ~5 different algos and then come up with math that averages those, weighted by how often each is selected. However, the selection of the three-algo team is fully random, so even that balance is not predictable. Every calculation method will fail because too many things are randomly selected.
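To illustrate the kind of math that would be involved, here is a toy sketch, not anything the miner actually implements: assume you have logged a standalone hashrate for each core algo, model a rotation as a random set of three run back-to-back (so the combined rate is roughly 1 / Σ(1/rᵢ)), and average over all possible triples. The algo names and rates are made up for illustration:

```python
from itertools import combinations
from statistics import mean

# Hypothetical standalone hashrates (H/s) logged per core algo.
# Names and numbers are illustrative, not real benchmark data.
rates = {
    "cn/dark": 15000.0,
    "cn/fast": 4800.0,
    "cn/turtle": 14000.0,
    "cn/dark-lite": 16000.0,
    "cn/turtle-lite": 15500.0,
}

def combined_rate(triple):
    # Each hash passes through all three algos back-to-back, so the
    # time per hash is the sum of the per-algo times (1/rate each).
    return 1.0 / sum(1.0 / rates[a] for a in triple)

# If every 3-algo set is equally likely, the expected hashrate is the
# plain mean over all triples; unequal selection odds would turn this
# into a weighted mean instead.
expected = mean(combined_rate(t) for t in combinations(rates, 3))
print(f"expected average hashrate: {expected:.0f} H/s")
```

Weighting by observed selection frequency, as suggested above, would just replace the plain mean with a weighted one; as noted, though, the random selection keeps any such estimate approximate.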

Spudz76 commented 2 years ago

And yes, testing for an hour would not actually help, since it never scrambles the set-of-three that it tests with (canned test job packet, can't vary).

johnnyde94 commented 2 years ago

The issue I can see is this: let's say I do a 30-second benchmark for RTM on my 5950X and I hit a light rotation that gives me 14k, when in reality the average I get with my 5950X is more like 4.5k. That original 14k hashrate will make it seem like RTM is the most profitable coin no matter what. The opposite could also happen: you get a hard rotation that gets 1.5k hashes, and then it will seem like RTM is not profitable at all.

If there were any way to pull the average from GR while you run it, you would be getting an actual average. I'm not sure if that is a possibility or not, but it's something I was just thinking about. I don't know if there are any other algos that fluctuate as much as RTM/GR; if there are, the same thing could happen with them.

I also don't know if RTM is simply the most profitable coin right now, in which case none of this matters. I think, having run MO since the beginning of the day, I had it switch to CryptoNight for a few minutes.

Just some ideas to throw out there.

Thanks, John DeLuca
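As a purely hypothetical sketch of the "pull the average from GR while you run it" idea above (nothing like this exists in the miner as far as this thread establishes), a miner could keep an exponentially weighted moving average of the observed hashrate and periodically write it back into its algo-perf entry:

```python
class RunningHashrate:
    """Hypothetical running average of real-world hashrate samples.

    alpha controls how quickly old rotations fade out; with one sample
    per minute, alpha=0.01 averages over roughly the last 100 minutes.
    """

    def __init__(self, alpha: float = 0.01):
        self.alpha = alpha
        self.avg = None

    def update(self, sample_hs: float) -> float:
        if self.avg is None:
            self.avg = sample_hs  # seed with the first observation
        else:
            self.avg += self.alpha * (sample_hs - self.avg)
        return self.avg

# e.g. feed it the hashrate observed for each completed paid job,
# then persist the result into the algo-perf entry for RTM/GR.
```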


Dizzyikea commented 2 years ago

I found that setting the benchmark to 120 gave very similar results to the averages my machines had produced over many days on flockpool. The 120 setting seems to take 30 minutes? Not sure what the 120 unit is. This was across a varied type/number of CPUs, from G4400 Pentiums to a 3800X.
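A plausible reading of the unit, assuming 120 is bench-algo-time in seconds per benchmarked algorithm: with on the order of 15 algos in the benchmark rotation, 120 s × 15 = 1800 s, which matches the observed ~30 minutes.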

johnnyde94 commented 2 years ago

That would work great on Windows; the only issue is that on HiveOS it does not save the benchmark file. If you have to reboot once a month or so it's not too bad, but if you are tweaking with things then it becomes an issue.

John DeLuca
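One possible workaround, building on the hand-update suggestion earlier in the thread: once you trust a long-run average, copy it into the algo-perf section of configuration that HiveOS does persist (for example, the flight sheet's extra config), so a reboot doesn't force a fresh benchmark.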


Dizzyikea commented 2 years ago

Yeah, a valid point. I don't use HiveOS personally (or Windows), but due to the nature of the algo I don't think this is something the devs can address. If anything, this is an improvement for HiveOS to make, since on Linux/Windows/JeOS/macOS it would not be an issue.

Spudz76 commented 2 years ago

The benchmark job always uses the same set of algos, including a 2MB one. So the benchmark would never hit the insanely high-speed results, since it won't select pico/nano-size algos.
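For context on why scratchpad size matters here: CryptoNight variants with small scratchpads (cn-pico uses 256KB) fit far more hashing threads into CPU cache and run disproportionately fast, while a 2MB-scratchpad variant is constrained by L3 cache. A fixed benchmark set that always includes a 2MB algo therefore can't reproduce the occasional very fast real-world rotation.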