Open WKNiGHT- opened 11 years ago
I am still thinking of the best way to accomplish this. So far it comes down to a few scenarios.
My favorite is number 2. We would have a local database for each of the stratum nodes. The nodes would use API calls to the front end for authentication and to batch-submit shares. This way one could spin up numerous stratum nodes and load-balance them with persistence. It also isolates the databases, so the front end and back end can change their databases without any need to worry about the other. This makes development of the different packages more segmented.
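As a rough illustration of what that batch-submit API call might carry, here is a minimal payload builder. The endpoint shape, field names, and `node_id` are all assumptions for the sake of the sketch, not anything that exists in the current code:

```python
import json

def build_share_batch(node_id, shares):
    """Package locally queued shares for a single POST to the front end.

    `shares` is a list of (worker, difficulty, timestamp) tuples; the field
    names in the payload are hypothetical and would need to be agreed on
    between the front end and the stratum nodes.
    """
    return json.dumps({
        "node_id": node_id,
        "shares": [
            {"worker": w, "difficulty": d, "timestamp": ts}
            for (w, d, ts) in shares
        ],
    })
```

The point of batching at the API layer is the same as batching the DB inserts: one round trip per batch instead of one per share, which matters once nodes are geographically separated from the front end.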
In all of these, I still see the cron job server as a single point of failure. We would need a failover script that starts running the cron jobs when that server goes down. Realistically, if we are only looking to prevent DDoS attacks, we can just have distributed external servers like the web front end and stratum nodes. If we are talking about true redundancy, we want multiple servers so that we can take any one part down for maintenance (patching, planned and unplanned outages, etc.).
All of these ideas need some brainstorming on security before implementing them. I think SSL with public/private keys would work well here, but it adds a layer of complexity to the installation.
Any other suggestions are welcome as well.
Would it not be possible to hack the current code to store shares in a queue if a connection to the DB cannot be made?
I think this is how it works atm... it stores shares in a queue until it can make the connection, then submits them.
Right now it puts shares into a queue and inserts them all at the same time. You can configure how large the queue gets before it is inserted. The settings below are the details that can be changed:
```
DB_LOADER_CHECKTIME = 15    # How often we check to see if we should run the loader
DB_LOADER_REC_MIN = 5       # Min records before the bulk loader fires
DB_LOADER_REC_MAX = 50      # Max records the bulk loader will commit at a time
DB_LOADER_FORCE_TIME = 120  # How often the cache should be flushed into the DB regardless of size
```
So with this we flush the cache every DB_LOADER_FORCE_TIME seconds, or whenever we have at least DB_LOADER_REC_MIN records in the queue. We check the queue every DB_LOADER_CHECKTIME seconds and do batch inserts of up to DB_LOADER_REC_MAX records at a time.
Right now, if the MySQL connection is gone, it will try to reconnect every DB_LOADER_CHECKTIME seconds until it can insert the shares. It tries to connect to the database on every iteration.
The shares are not persistent, but yeah this is how it currently works. It will fail on anything that is not cached, and when the user cache is flushed after DB_USERCACHE_TIME seconds.
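The batching behaviour described above can be sketched roughly like this. This is a simplified model built only from the four settings, not the pool's actual loader code:

```python
import time

# Settings from the config above
DB_LOADER_CHECKTIME = 15    # seconds between queue checks
DB_LOADER_REC_MIN = 5       # min records before a bulk insert fires
DB_LOADER_REC_MAX = 50      # max records committed per insert
DB_LOADER_FORCE_TIME = 120  # flush the cache this often regardless of size

class ShareLoader:
    """Sketch of the batching logic described above (not the real code)."""

    def __init__(self, insert_fn, now_fn=time.time):
        self.queue = []
        self.insert_fn = insert_fn  # callable that writes one batch to the DB
        self.now_fn = now_fn        # injectable clock, useful for testing
        self.last_flush = now_fn()

    def add_share(self, share):
        self.queue.append(share)

    def check(self):
        """Called every DB_LOADER_CHECKTIME seconds by the loader loop."""
        forced = self.now_fn() - self.last_flush >= DB_LOADER_FORCE_TIME
        if len(self.queue) >= DB_LOADER_REC_MIN or (forced and self.queue):
            # Drain the queue in chunks of at most DB_LOADER_REC_MAX records
            while self.queue:
                batch = self.queue[:DB_LOADER_REC_MAX]
                self.queue = self.queue[DB_LOADER_REC_MAX:]
                self.insert_fn(batch)
            self.last_flush = self.now_fn()
```

If `insert_fn` raised on a lost MySQL connection, the shares would stay in `queue` and the next `check()` call would retry, which matches the reconnect-every-DB_LOADER_CHECKTIME behaviour described above.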
@moopless
General question: does it lead to any problems if I run two or more moopless stratum nodes using the same share database? These nodes insert their shares into the same share table. Does your stratum implementation look at the last share in the DB to know what has already been hashed and what to hash next? That would be problematic, because the two stratum nodes would distribute more or less the same work to the different miners.
Or does each stratum node randomly try to guess the solution for a block (independent of what is in the share database)? Then two nodes with the same share database would be no problem.
Thank you very much for your help and maintaining this great repository!
@ppanther1000 Right now it only dumps the shares in the database. It does query users from the database, but it does not have any select statements on the shares. So using 2 nodes on the same database should work fine. I would advise changing the instance id as that is how the extranonce is generated.
@moopless
Node1 has INSTANCE_ID = 31. So INSTANCE_ID = 30 would be OK for Node2? Or is it better to choose a lower value like 10? Or does the distance between the two values not play any role?
@ppanther1000 INSTANCE_ID = 30 would be fine. Basically we just bitwise-shift this value here: https://github.com/moopless/stratum-mining-litecoin/blob/master/lib/extranonce_counter.py#L13. As long as it is different, we should have a unique extranonce. Below is an example with INSTANCE_ID = 31 and 30:
```
>>> test = 31
>>> test << 27
4160749568L
>>> test = 30
>>> test << 27
4026531840L
```
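To make the uniqueness argument concrete, here is a sketch of a per-instance extranonce counter along the lines of the linked file. This is an illustration of the idea, not a copy of the repository's actual code: the shift gives each INSTANCE_ID its own block of 2**27 counter values, so two nodes with different IDs can never hand out the same extranonce.

```python
import struct

class ExtranonceCounter:
    """Sketch: each INSTANCE_ID owns a disjoint block of 2**27 extranonce
    values, so nodes with different IDs never collide."""

    def __init__(self, instance_id):
        # Start the counter at the base of this instance's block
        self.counter = instance_id << 27

    def get_new_bin(self):
        """Return the next extranonce as 4 big-endian bytes."""
        self.counter += 1
        return struct.pack(">L", self.counter)
```

Because the blocks are contiguous, the distance between the two IDs really does not matter: instance 30 covers 4026531840 up to 4160749567, and instance 31 starts exactly at 4160749568.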
One thing many litecoin pools face is DDoS. Most of the time attackers hit the main pool and take everything down, since it is all in one location. However, mining nodes have proven to be an asset for larger pools, with locations throughout the world.
With that being said, if there were a way for stratum to store its own shares on a node, using MySQL, then a script could push those shares to the main database every minute, or every 10 minutes, whatever. If the main database goes down, shares continue to be mined and entered into the node's local database. Once the main comes back up, it catches up on all the shares.
This way there is always mining going on against a litecoind on the mining node, and shares can still be submitted because the worker information is in the same database.
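The store-and-forward idea above could be sketched like this, using a local SQLite table as a stand-in for the node's local MySQL database. Everything here (the table layout, the `pushed` flag, the `push_to_main` callable) is a hypothetical design, not existing code:

```python
import sqlite3

def push_pending(local_conn, push_to_main):
    """Forward unpushed shares from the node's local DB to the main DB.

    If `push_to_main` fails (main DB down), the shares simply stay queued
    locally with pushed = 0, and the next scheduled run retries them.
    Returns the number of shares forwarded.
    """
    rows = local_conn.execute(
        "SELECT id, worker, difficulty FROM shares WHERE pushed = 0"
    ).fetchall()
    if not rows:
        return 0
    try:
        # e.g. a bulk INSERT against the main database
        push_to_main([(w, d) for _, w, d in rows])
    except Exception:
        return 0  # main DB unreachable: keep shares local, retry next run
    # Only mark shares as pushed once the main DB has accepted them
    local_conn.executemany(
        "UPDATE shares SET pushed = 1 WHERE id = ?", [(i,) for i, _, _ in rows]
    )
    local_conn.commit()
    return len(rows)
```

A cron job on each node could call this every minute or every 10 minutes; because shares are only flagged `pushed` after a successful transfer, an outage of the main database just grows the local backlog until it comes back.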