This repository is a fork of Snipa22's nodejs-pool which is maintained by Venthos. This repository will most closely follow the needs and requests of the Lethean community, as that is the coin this was originally forked to accommodate.
However, this fork contains fixes and features that would be useful to any cryptonight/cryptonight-lite coin pool operator.
Other coins will likely be supported, although not prioritized or as thoroughly tested.
Coin | Name | Coin File | Supported | Tested |
---|---|---|---|---|
LTHN | Lethean | lthn | :white_check_mark: | :white_check_mark: |
XMR | Monero | xmr | :white_check_mark: | :white_check_mark: |
https://lethean.blockharbor.net is the reference pool that utilizes this repository. Take a look at the pool for a production example of this codebase.
While the UI in use by BlockHarbor.net is a custom fork of miziel's poolui, the original poolui will work with this nodejs-pool fork out of the box. Any other forks of poolui will likely work, too.
The nodejs-pool is built around a small series of core daemons that share access to a single LMDB table for tracking shares, with MySQL used to centralize configuration and ensure simple access from local/remote nodes. The core daemons are as follows:
Daemon | Listen Port | Description |
---|---|---|
api | 8001 | Main API for the frontend to use and pull data from. Expects to be hosted at / |
remoteShare | 8000 | Main API for consuming shares from remote/local pools. Expects to be hosted at /leafApi |
pool | Configurable | Where the miners connect to. |
longRunner | N/A | Database share cleanup. |
payments | N/A | Handles all payments to workers. |
blockManager | N/A | Unlocks blocks and distributes payments into MySQL. |
worker | N/A | Does regular processing of statistics and sends status e-mails for non-active miners. |
nodejs-pool scales from being able to operate on a single server to being able to utilize multiple servers. A few common setups are listed below, which use a few additional service terms:
In the single-server configuration, all daemons operate on a single box. Straightforward enough.
Server | Services |
---|---|
Single Server | caddy, crypto-daemon, crypto-wallet, mysql, lmdb, api, remoteShare, pool, longRunner, payments, blockManager, worker |
Pros
Cons
This is what lethean.blockharbor.net currently utilizes, as a compromise: it offers geographically convenient servers for miners while keeping costs down. This setup is the minimum needed to provide two pool servers that miners can use.
Server | Services |
---|---|
Main Server w/Pool Node | caddy, crypto-daemon, crypto-wallet, mysql, lmdb, api, remoteShare, pool, longRunner, payments, blockManager, worker |
Pool Node | crypto-daemon, pool |
Pros
Cons
If you are willing to invest in the infrastructure, this is the ideal setup and provides the most scalability and features. It separates each component onto its own server.
Server | Services |
---|---|
Front End Server | caddy |
Back End Server | crypto-daemon, crypto-wallet, mysql, lmdb, api, remoteShare, longRunner, payments, blockManager, worker |
Pool Node 1 | crypto-daemon, pool |
Pool Node 2 | crypto-daemon, pool |
It is critically important that your webserver does not truncate the /leafApi
portion of the URL for the remoteShare daemon, or it will not function! Local pool servers DO use the remoteShare daemon, as it provides a buffer in case of an error with LMDB or another bug within the system, allowing shares and blocks to queue for submission as soon as the leafApi/remoteShare daemons are back up and responding with 200s.
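If you want to confirm the path is being preserved, comparing a proxied request against the daemon directly is a quick sanity check (a minimal sketch; the hostname is an example, and the exact response body doesn't matter, only that both requests reach the same endpoint):

```bash
# Request through the webserver/proxy (replace with your API hostname):
curl -s -o /dev/null -w "%{http_code}\n" https://api.lethean.blockharbor.net/leafApi
# Request the remoteShare daemon directly on the backend host:
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8000/leafApi
# Matching status codes suggest the /leafApi prefix survived the proxy.
```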
The simplest and most straightforward option is to use the included steps to set up the caddy service to handle front-end processing.
Sample Caddyfile for API:
https://api.lethean.blockharbor.net {
proxy /leafApi 127.0.0.1:8000
proxy / 127.0.0.1:8001
cors
gzip
}
You are welcome to utilize an existing webserver if you are more comfortable with it or simply prefer it. Given the volume of requests made by the poolui frontend, I would recommend making sure that the web server is HTTP/2 compliant.
Here's an example for Apache that uses ProxyPass:
<Location "/api">
ProxyPass http://127.0.0.1:8001
ProxyPassReverse http://127.0.0.1:8001
</Location>
<Location "/leafApi">
ProxyPass http://127.0.0.1:8000/leafApi
ProxyPassReverse http://127.0.0.1:8000/leafApi
</Location>
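With Apache it's easy to forget the proxy modules, so a syntax check before reloading is cheap insurance (a sketch using stock Apache tooling; the a2enmod module names assume a Debian/Ubuntu layout like the Ubuntu 16.04 target used elsewhere in this README):

```bash
# mod_proxy and mod_proxy_http are required for ProxyPass to work:
sudo a2enmod proxy proxy_http
# Validate the config, then reload only if it parses cleanly:
sudo apachectl configtest && sudo systemctl reload apache2
```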
The below should be considered bare minimum requirements for a pool that is just starting out. The more workers connected to your pool and the more visitors looking at the frontend, the heftier these requirements get.
Single/Main Server
Leaf/Pool Node
Note: The pool comes configured to use up to 24 GB of storage for LMDB. Assuming you have the longRunner worker running, it should never get near this size, but be aware that it can bloat readily if things error, so be ready for this!
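If you want to keep an eye on that growth before it becomes a problem, two read-only commands go a long way (a minimal sketch, assuming the default db_storage_path of /home/&lt;username&gt;/pool_db/ and the standard liblmdb tools):

```bash
# On-disk footprint of the LMDB environment:
du -sh ~/pool_db/
# Environment details (pages used vs. max pages) -- the same tool used
# in the LMDB troubleshooting section further down:
mdb_stat -e ~/pool_db/
```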
Add your user to /etc/sudoers; this must be done so the script can sudo up and do its job. We suggest passwordless sudo. Suggested line: <USER> ALL=(ALL) NOPASSWD:ALL. Our sample builds use: pooldaemon ALL=(ALL) NOPASSWD:ALL.
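Before kicking off the installer, it's worth confirming the sudoers change took effect (a minimal sketch; run it as the install user):

```bash
# -n makes sudo fail instead of prompting, so this cleanly detects
# whether passwordless sudo is actually in place:
sudo -n true && echo "passwordless sudo OK" || echo "sudo still prompts for a password"
```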
1. Edit config.json as appropriate. It is pre-loaded for a local install of everything, running on 127.0.0.1, which will work perfectly fine if you're using a single-node setup. You'll also want to set bind_ip to the external IP of the pool server, and hostname to the resolvable hostname for the pool server. pool_id is mostly used for multi-server installations to provide unique identifiers in the backend. You will also want to run: source ~/.bashrc -- this will activate NVM and get things working for the following pm2 steps.
2. Update the API endpoint in poolui/build/globals.js and poolui/build/globals.default.js -- this will usually be http(s)://<your server FQDN>/api unless you tweak caddy!
3. The default database directory /home/<username>/pool_db/ has already been created during startup. If you change db_storage_path, just make sure your user has write permissions for the new path. Run: pm2 restart api to reload the API for usage.
4. Hop into the web interface (http://<your server IP>/admin.html), then login with Administrator/Password123 and MAKE SURE TO CHANGE THIS PASSWORD ONCE YOU LOGIN. <- This step is currently not active, we're waiting for the frontend to catch up! Head down to the Manual SQL Configuration section to see what needs to be done by hand for now.
5. Start the remaining daemons with pm2:

cd ~/nodejs-pool/
pm2 start init.js --name=blockManager --log-date-format="YYYY-MM-DD HH:mm Z" -- --module=blockManager
pm2 start init.js --name=worker --log-date-format="YYYY-MM-DD HH:mm Z" -- --module=worker
pm2 start init.js --name=payments --log-date-format="YYYY-MM-DD HH:mm Z" -- --module=payments
pm2 start init.js --name=remoteShare --log-date-format="YYYY-MM-DD HH:mm Z" -- --module=remoteShare
pm2 start init.js --name=longRunner --log-date-format="YYYY-MM-DD HH:mm Z" -- --module=longRunner
pm2 start init.js --name=pool --log-date-format="YYYY-MM-DD HH:mm Z" -- --module=pool
pm2 restart api
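By default pm2 only keeps these processes alive for the current boot. If you want the daemon list to come back after a reboot, pm2's standard startup/save pair handles it (a sketch using stock pm2 commands; the deploy script may already configure this for you):

```bash
# Prints (and on most systems installs) an init/systemd hook for pm2:
pm2 startup
# Snapshots the currently running process list so the hook can restore it:
pm2 save
```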
Install Script:
curl -L https://raw.githubusercontent.com/Venthos/nodejs-pool/master/deployment/deploy.bash | bash
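If piping curl straight into bash isn't your style, an inspect-first variant is functionally identical and lets you read the script before running it:

```bash
# Download, review, then execute the deploy script:
curl -L -o deploy.bash https://raw.githubusercontent.com/Venthos/nodejs-pool/master/deployment/deploy.bash
less deploy.bash
bash deploy.bash
```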
The installer assumes that you will be running a single-node instance and using a clean Ubuntu 16.04 server install. The following system defaults are set:
The MySQL root password is stored in /root/.my.cnf
The following raw binaries MUST BE AVAILABLE FOR IT TO BOOTSTRAP:
I've confirmed that a default Ubuntu Server 16.04 installation meets these requirements.
The pool comes pre-configured with values for Lethean (LTHN); these may need to be changed depending on the exact requirements of your coin. Other coins will likely be added down the road, most likely with configuration .sql files provided to overwrite the base configuration for their needs, though they can be configured within the frontend as well.
The pool is designed around a dual-wallet setup: one is a fee wallet, the other is the live pool wallet. The fee wallet is the default target for all fees owed to the pool owner. PM2 can also manage your wallet daemon, and that is the suggested run state.
1. Generate your wallet using /usr/local/src/lethean/build/release/bin/lethean-wallet-cli
2. Store the wallet's password in ~/wallet_pass and secure the file: chmod 0400 ~/wallet_pass
3. Start the wallet under PM2:

pm2 start /usr/local/src/lethean/build/release/bin/lethean-wallet-rpc -- --rpc-bind-port 48783 --password-file ~/wallet_pass --wallet-file <Your wallet name here> --disable-rpc-login --trusted-daemon
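Once the wallet RPC is up, a quick liveness probe saves guesswork later (a minimal sketch; Lethean is a Monero-lineage coin, so the classic getbalance JSON-RPC method is assumed here -- adjust if your wallet build uses a different method name):

```bash
# Should return a JSON result with the wallet's balance if the RPC
# wallet started correctly on port 48783:
curl -s http://127.0.0.1:48783/json_rpc \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":"0","method":"getbalance"}'
```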
Manual setup is pretty similar to the above; you may wish to dig through a few other things for sanity's sake, but the installer scripts should give you a good idea of what to expect from the ground up.
Until the full frontend is released, the following SQL information needs to be updated by hand in order to bring your pool online, in module/item format. You can also edit the values in sample_config.sql, then import them into SQL directly via an update.
Critical/Must be done:
pool/address
pool/feeAddress
general/shareHost
Nice to have:
general/mailgunKey
general/mailgunURL
general/emailFrom
SQL import command: sudo mysql pool < ~/nodejs-pool/sample_config.sql (Adjust name/path as needed!)
The shareHost configuration is designed to be pointed at wherever the leafApi endpoint exists. For lethean.blockharbor.net, we use https://lethean.blockharbor.net/leafApi. If you're using the automated setup script, you can use http://<your IP>/leafApi, as Caddy will proxy it. If you're just using localhost and a local pool server, http://127.0.0.1:8000/leafApi will do you quite nicely.
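If you'd rather change a single value than re-import the whole file, a direct UPDATE works too (a sketch assuming the stock nodejs-pool config table with module/item/item_value columns -- double-check the schema in base.sql before relying on it):

```bash
# Example: point pool/address at your live pool wallet:
sudo mysql pool -e "UPDATE config SET item_value='<your pool wallet address>' WHERE module='pool' AND item='address';"
# Confirm the change:
sudo mysql pool -e "SELECT module, item, item_value FROM config WHERE module='pool' AND item='address';"
```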
Additional ports can be added as desired, samples can be found at the end of base.sql. If you're not comfortable with the MySQL command line, I highly suggest MySQL Workbench or a similar piece of software (I use datagrip!). Your root MySQL password can be found in /root/.my.cnf
Until the main frontend is done, we suggest running the following SQL line:
DELETE FROM pool.users WHERE username = 'Administrator';
This will remove the administrator user until there's an easier way to change the password. Alternatively, you can change the password to something not known by the public:
UPDATE pool.users SET email='your new password here' WHERE username='Administrator';
The email field is used as the default password field until the password is changed; at that point it's hashed and stored in the password field instead, and using the email field as a password is disabled.
You should take a look at the wiki for specific configuration settings in the system.
If upgrading the pool, please do a git pull to get the latest code within the pool's directory. Once complete, cd into sql_sync and run: node sql_sync.js
This will update your pool with the latest config options and any defaults the pool may set.
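Collected into one sequence, the upgrade looks like this (assuming the pool lives at ~/nodejs-pool):

```bash
cd ~/nodejs-pool/
git pull
cd sql_sync
node sql_sync.js
```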
If LMDB appears stuck or the pool stops processing shares, it is likely due to LMDB's map size (MDB_SIZE) being hit, or to LMDB locking up because a reader stayed open too long, possibly after a software crash. The first step is to run:
mdb_stat -fear ~/pool_db/
This should give you output like:
Environment Info
Map address: (nil)
Map size: 51539607552
Page size: 4096
Max pages: 12582912
Number of pages used: 12582904
Last transaction ID: 74988258
Max readers: 512
Number of readers used: 24
Reader Table Status
pid thread txnid
25763 7f4f0937b740 74988258
Freelist Status
Tree depth: 3
Branch pages: 135
Leaf pages: 29917
Overflow pages: 35
Entries: 591284
Free pages: 12234698
Status of Main DB
Tree depth: 1
Branch pages: 0
Leaf pages: 1
Overflow pages: 0
Entries: 3
Status of blocks
Tree depth: 1
Branch pages: 0
Leaf pages: 1
Overflow pages: 0
Entries: 23
Status of cache
Tree depth: 3
Branch pages: 16
Leaf pages: 178
Overflow pages: 2013
Entries: 556
Status of shares
Tree depth: 2
Branch pages: 1
Leaf pages: 31
Overflow pages: 0
Entries: 4379344
The important thing to verify here is that the "Number of pages used" value is less than the "Max pages" value, and that there are "Free pages" under "Freelist Status". If this is the case, then look at the "Reader Table Status" and note the PID listed. Run:
ps fuax | grep <THE PID FROM ABOVE>
ex:
ps fuax | grep 25763
If the output is not blank, then one of your node processes is reading, and this is fine. If one of the listed PIDs returns no output, then proceed forwards.
The second step is to run:
pm2 stop blockManager worker payments remoteShare longRunner api
pm2 start blockManager worker payments remoteShare longRunner api
This will restart all of your related daemons, and will clear any open reader connections, allowing LMDB to get back to a normal state.
If, on the other hand, you have no "Free pages" and your "Number of pages used" is equal to the "Max pages", then you've run out of allocated space for LMDB. You need to verify the cleaner is working. For reference, 4.3 million shares are stored within approximately 2-3 GB of space, so if you're vastly exceeding this, then your cleaner (longRunner) is likely broken.
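The arithmetic behind that check comes straight out of the mdb_stat output: bytes used is "Number of pages used" times "Page size", and the ceiling is "Max pages" times "Page size", which should equal "Map size". Using the sample numbers above:

```bash
# Bytes currently allocated: 12582904 pages x 4096 bytes (~48 GB):
echo $((12582904 * 4096))
# Ceiling: 12582912 max pages x 4096 bytes = 51539607552, matching the
# reported "Map size". Nearly all pages are allocated in the sample, but
# the large freelist (12234698 free pages) means that space is reusable
# -- the healthy case described above.
echo $((12582912 * 4096))
```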
If you're considering PPS, Snipa22 spoke with Fireice_UK, who kindly did some math about what you're looking at in terms of requirements to run a PPS pool without it self-imploding under particular risk factors, based on the work found here
Also I calculated the amount of XMR needed for a PPS pool to stay afloat. Perhaps you should put them up in the README to stop some spectacular clusterfucks :D:
For a 1 in 1,000,000 chance that the pool will go bankrupt: 5% fee -> 1200 XMR; 2% fee -> 3000 XMR
For a 1 in 1,000,000,000 chance: 5% fee -> 1800 XMR; 2% fee -> 4500 XMR
The developers of the pool have not verified this math. You should be wary if you're considering PPS, and take your fees into account appropriately!
If you'd like to make a one time donation, the addresses are as follows:
Venthos - Maintainer/Developer of this fork of nodejs-pool
Coin | Donation Address |
---|---|
XMR | 46BvkmZwu5bdPrfHguduUNe43MX9By6vsEAPASdkxjvWfXsoPcJbEXWi1LFm7Vroo2XLDThDzwtqRRehWSeSYhGoCLzg1tY |
LTHN | iz5imhe9C7vWnjZtZBFtT8MwNxVuJuryUUHXSAtnWUo93CJzNdZBizHQExPRCHUBi36tk2BcigPAFRDA4cnddGXF1R6j69n3w |
Snipa22 - Original nodejs-pool, from which this repo has been forked.
Zone117x - Original node-cryptonote-pool, from which the stratum implementation has been borrowed.
Mesh00 - Frontend built in AngularJS: XMRPoolUI
Wolf0/OhGodAGirl - Rebuild of node-multi-hashing with AES-NI support: node-multi-hashing