mmitech closed this issue 8 years ago.
Launch another server and enable only the proxy module. It must connect to your one and only redis instance. Make sure you disable payouts and the unlocker on this server.
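A proxy-only node config might look roughly like the fragment below. The key names follow open-ethereum-pool's config.example.json as I recall it (the real file has many more fields), and "master-host" is a placeholder for the redis server's address; verify everything against your copy of the example config.

```json
{
  "redis": {
    "endpoint": "master-host:6379",
    "database": 0,
    "password": ""
  },
  "proxy": {
    "enabled": true
  },
  "unlocker": {
    "enabled": false
  },
  "payouts": {
    "enabled": false
  }
}
```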
BTW, it would be cool to know the URL of your pool.
Yes, I will post a link when I finish the setup and testing, and of course I will contribute a small donation as thanks for the hard work you did :)
FYI, since all nodes on different servers must talk to a single redis-server instance, it requires some security. A plain password for redis is not enough, because redis is very fast and passwords are easily brute-forceable. I suggest using stunnel; it's an advanced procedure and requires some understanding. You can find example configs here: https://gist.github.com/sammy007/7cb5254d46e5a7c2fbda (written for another service, but usable for redis). You need to create a shared certificate. If you use it, make sure that you can't connect to redis with a different cert or without one, because it's easy to make a configuration mistake with stunnel, especially with the verify option.
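A rough sketch of what such a stunnel setup could look like, with placeholder ports, paths, and the "master-host" name all assumptions to adapt; verify = 3 tells stunnel to check the peer's certificate against the locally installed one, which is the behavior described above:

```ini
; Server side (on the redis host) -- sketch only, adapt paths/ports.
[redis-server]
accept = 0.0.0.0:6390
connect = 127.0.0.1:6379
cert = /etc/stunnel/shared.pem
CAfile = /etc/stunnel/shared.pem
verify = 3

; Client side (on each pool node); the pool connects to 127.0.0.1:6379
; and stunnel forwards it over TLS to the master.
[redis-client]
client = yes
accept = 127.0.0.1:6379
connect = master-host:6390
cert = /etc/stunnel/shared.pem
CAfile = /etc/stunnel/shared.pem
verify = 3
```

After setting this up, test that a connection with a different certificate, or with none, is actually rejected.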
Some people will be happy with just iptables, restricting access by source IP, but source addresses can be spoofed.
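For reference, the iptables approach mentioned above could look like the following firewall sketch; 203.0.113.10 is a placeholder for the pool node's IP, and 6379 is redis's default port:

```shell
# Allow redis only from a known pool-node IP, drop everything else.
iptables -A INPUT -p tcp --dport 6379 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 6379 -j DROP
```

As noted, this alone does not protect against spoofing or on-path attackers, so treat it as a stopgap for testing.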
Yes, for now I will restrict connections by IP while testing, then I will take all security precautions before I go live (assuming I do), but thanks for the hint.
For others reading this conversation, you can find a simple explanation with an easy guide on how to set up an SSL tunnel here
Unfortunately this article is broken like every article I have seen. There is a nice comment: "Then I set verify = 3, which causes both the client and server to validate against one another." You must set verify = 3.
OK, so verify must be set to 3. I won't forget this.
The pool works great: very low memory/CPU use, clean and stable code. There are just two improvements that could really convince miners to switch to this software:
I also noticed that the reported hashrate is lower than what dwarfpool reports (by at least 10%); I will see if this affects the reward as well.
For now I am calling the API and collecting data about accounts/workers. Based on that data I have stats for each worker and send email notifications in case of a dead worker. I am not very familiar with Go and redis, otherwise I would add support to this pool myself.
Increase the large window to 6 hours and you will get more accurate numbers. Also, don't expect accurate numbers after only 15 minutes of mining.
Tried that as well; the hashrate is really off, by 10-25% depending on the rig. A rig with 4 cards that hashes at more than 80 MH/s shows 61 MH/s at the pool; a rig with two cards hashing at 45 MH/s shows 25 MH/s.
We usually take the average hashrate, divide it by 22, and round it to get the number of active cards in a rig; if a rig has fewer cards than we have set, we get an alert to check it.
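The card-count estimate described above is simple arithmetic; here is a minimal sketch of it, assuming the ~22 MH/s per-card figure used in this thread (it depends entirely on the hardware):

```python
# Estimate the number of active GPUs in a rig from its average hashrate,
# assuming roughly 22 MH/s per card (a figure specific to this setup).

MHS_PER_CARD = 22  # assumed per-card hashrate in MH/s

def estimated_cards(avg_hashrate_mhs: float) -> int:
    """Divide the average hashrate by the per-card rate and round."""
    return round(avg_hashrate_mhs / MHS_PER_CARD)

def card_alert(avg_hashrate_mhs: float, expected_cards: int) -> bool:
    """True when the rig appears to have lost at least one card."""
    return estimated_cards(avg_hashrate_mhs) < expected_cards
```

With this rule, a 3-card rig reporting 45 MH/s (about two cards' worth) would trigger an alert.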
The stats are totally off, as seen in the picture (most rigs have 3 cards):
BTW: this pool is not public yet, so please don't list it; I don't feel comfortable making it public before I test and fix everything.
I will definitely look into it, but since 2015, when I was running my own pool on this code, I have received only good testimonials; people were happy that their effective hashrate was always correct and profits were to die for. There was one rig with 5x 280X mining for more than a month, and that miner received an amount of ETH exactly matching the number of blocks they found.
I just bought a brand new PSU, so I will probably test it on your pool. Also, I hope your difficulty is not less than 2 billion. I suggest using 4 billion for better GPU load.
My diff is 2 billion. I raised it to 5 but that didn't seem to affect the hashrate. My comment is not a negative testimonial, it is a contribution :)
Higher diff = better GPU efficiency, proven by my long-term valued customer. Maybe other pools are just reporting data from ethminer and its eth_submitHashrate, which I consider a lie. Here, only effective hashrate is reported, based on submitted shares.
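The share-based calculation the maintainer describes can be sketched as follows: each accepted share at difficulty D represents on average D hashes, so summing share difficulties over a time window and dividing by the window length gives an effective hashrate. This is an illustrative sketch, not the pool's actual implementation:

```python
# Effective hashrate derived purely from accepted shares, rather than
# trusting the self-reported eth_submitHashrate value from the miner.

def effective_hashrate(share_difficulties, window_seconds):
    """Effective hashrate in H/s from shares submitted in the window.

    Each share at difficulty D represents ~D hashes on average, so the
    total work done is the sum of share difficulties.
    """
    return sum(share_difficulties) / window_seconds
```

For example, 10 shares at the 2-billion difficulty discussed here, submitted over 1000 seconds, works out to 20 MH/s. Over short windows this estimate is noisy (luck), which is why longer averaging windows give more accurate numbers.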
Yes, maybe this is the trick; then I just have to wait it out to see the 24-hour average.
hashrateLargeWindow is 3h by default. You have to increase it if you need a 24h average, but I don't recommend making it that large on a public service with a big hashrate; there will be a large share backlog.
I have it at 3 hours, but I am calling the API and logging the data on another server; this way I get charts and email alerts for dead workers and rigs that lose a card.
Regarding email notifications: worth considering, but for now I highly recommend using something like http://serverdensity.io; for 10 USD/month you can configure SMS/email notifications. Configure a service check to poll the HTTP API for your rig and check for "offline": true in the response.
I developed a quick and dirty monitoring system (using the ethereumpool design) in PHP that works just fine. Workers are detected and added automatically when they appear in the API; I record their state every minute to be able to draw charts, and send an email alert in case of dead workers. No additional costs; we have enough resources in our DC.
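The dead-worker check both comments describe could be sketched like this. The endpoint path and JSON shape ("workers" map with an "offline" flag) follow open-ethereum-pool's account API as I understand it, and "api_base" is a placeholder; treat both as assumptions and check your own API output:

```python
import json
from urllib.request import urlopen

def offline_workers(stats: dict) -> list:
    """Return the names of workers the API reports as offline."""
    workers = stats.get("workers", {})
    return sorted(name for name, w in workers.items() if w.get("offline"))

def check_account(api_base: str, address: str) -> list:
    """Poll the account endpoint and list offline workers.

    api_base is a placeholder, e.g. "http://pool.example:8080".
    """
    with urlopen(f"{api_base}/api/accounts/{address}") as resp:
        return offline_workers(json.load(resp))
```

Run something like this from cron every minute, and alert (email, SMS, etc.) whenever the returned list is non-empty.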
I see that everything is OK with the hashrate, at least for me. I mined for more than 2 hours with a single R9 290; my hashrate is ~23 MH/s. I caught bad luck at the end, which is why the short average is low, but it spiked to 40 at the start. I suggest increasing the short window; 15 minutes is not enough for this difficulty and can be confusing.
Ethminer's hashrate reporting lives its own life and can sometimes report the full hashrate of a multi-GPU rig even when some GPUs got stuck during the mining session. That's the main reason I don't trust it, and why the pool always calculates effective hashrate from shares. It also explains the difference between pools.
OK, I see. I will play with it more. I donated a little something for your trouble. Thanks.
I will close this issue when I have an update.
Which Ethminer do you use, btw?
> Which Ethminer do you use, btw?
I used the latest webthree-umbrella installation (Win 10).
I tried to send you an email, no luck. Please add a backlink to this repo. Thanks.
https://github.com/sammy007/open-ethereum-pool/blob/master/www/app/index.html#L27
What is wrong with having it here? (deleted Private Pool)
This won't be a public pool, most likely it will stay private. I will probably lock the sub-domain later.
Edit: my email is on my profile :)
NM. It's ok, I see it now.
I can't find what I have to change to keep dead workers in the list for at least 24 hours.
Set largeWindow to 24h and also set hashrateExpiration to 24h. This will be OK for a private setup on high difficulty; I can't recommend it for a public one.
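In config terms, that advice would translate to something like the fragment below. The exact key names and which section they live in follow config.example.json as I recall it (the large window under the api section, hashrateExpiration elsewhere in the file), so double-check against your version before applying:

```json
{
  "api": {
    "hashrateLargeWindow": "24h"
  },
  "hashrateExpiration": "24h"
}
```

With a 24h expiration, dead workers stay visible in the API for a day before being purged.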
OK, I am updating you (and myself and anyone following): Ethminer submits its hashrate to the stratum proxy, which submits it to pools (dwarf and others); this is why rigs showed a much higher HR on dwarf than they are showing now.
This pool calculates the HR based on the submitted HR. So what we did was run a few identical rig setups with Claymore dual miner and with Ethminer + stratum proxy, and I must say that Claymore is ripping it. Claymore shows an accurate HR, which solves my monitoring issues.
The only remaining problem is that we run miners behind proxies; every proxy represents an account (owner), so we have, for example, 6 rigs on one proxy. This way, when we need to change settings, we do it only on the proxies and not on the miners (a dozen miners are OK, but when you have hundreds it becomes a problem). The problem is that Claymore supports only stratum, and the stratum proxy has no support for Stratum<=>Stratum.
My question is whether you know of a way I could adapt your proxy to serve Stratum<=>Stratum connections only; please contact me at mourad.mlik@netis.si and we will discuss this further.
Edit: Claymore just told me he is working on such a thing; it will be released in a few days.
> Ethminer submits its hashrate to the stratum proxy, which submits it to pools (dwarf and others); this is why rigs showed a much higher HR on dwarf than they are showing now.
I believe I said clearly that this pool only measures HR from shares, in other words only effective hashrate. And yes, other pools have the bad habit of accepting the hashrate reported by ethminer via eth_submitHashrate.
I will reply to the rest of the questions later; I'm busy adding/testing new features ATM. Get ready for pool fee (profit) withdrawal today or tomorrow :)
Amazing :) Keep up the good work. Anyway, shoot me an email or add me on Skype so we can discuss things quickly; my nick is: mmitec
Well, if you need to switch between pools with Claymore (note the forced fees there), you can set up HAProxy, a TCP load balancer with failover/round-robin strategies.
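A minimal sketch of that HAProxy idea, with placeholder pool hostnames and port; the "backup" keyword makes the second pool a failover target that is only used when the primary health check fails:

```
# TCP passthrough for stratum; miners connect to this box on :8008
# and HAProxy forwards to whichever pool backend is healthy.
frontend stratum_in
    mode tcp
    bind *:8008
    default_backend pools

backend pools
    mode tcp
    server primary pool1.example:8008 check
    server backup1 pool2.example:8008 check backup
```

Dropping the "backup" keyword and adding "balance roundrobin" to the backend would spread miners across both pools instead of failing over.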
Then we would have to run it (many instances) with Cygwin on Windows; I want the operators to have a clean and easy setup, not to complicate their job even more.
I would rather adapt your proxy; it is clean, it offers a GUI, and I couldn't have it in a better way. So ping me in private and let's discuss it.
I will close this for you, since everything is working and there is no point in keeping it open.
mmitech, can you give instructions on how to add charts to a pool as you have?
Hi there,
I am having an issue: I am able to connect my CPU miner to the stratum port, but when I try to connect a GPU miner it returns an RPC error and fails to connect.
Please help me with this issue.
Thanks, Ashish
Hello, I followed the instructions described in this post, but the results were not completely correct. Despite having a node reporting to the "master", I get 900 ms latency from the "slave" to the "master". What could this be due to?
Thanks!
Hello,
you mentioned here that it is possible to scale the pool with regional nodes, and I have been trying to understand how that would work. I have already managed to deploy the main server containing the main geth and its failover, the API, and payments. Can you explain how I can connect 2 other nodes (Asia, North America) in terms of the config?