Closed obigal closed 11 years ago
I had already implemented this, but I was scratching my head to remember where and when. It is part of the caching in basic_share_limiter.py that was blown away last week. We can do this in two ways; the first way is really easy to implement.
I am more inclined toward option number one since it would be an easy code change. Below is the line I would change in basic_share_limiter.py:90:
if worker_name not in self.worker_stats or self.worker_stats[worker_name]['last_ts'] < ts - settings.DB_USERCACHE_TIME:
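Standing alone, the condition above can be fleshed out into a minimal sketch of the idle-reset behavior. The `worker_stats` layout, the setting values, and the `get_difficulty` helper are assumptions for illustration, not the actual code in basic_share_limiter.py:

```python
import time

# Assumed settings, mirroring the names used in the snippet above.
DB_USERCACHE_TIME = 600   # seconds before a worker's cached entry goes stale
POOL_TARGET = 16          # pool's base difficulty

# worker_name -> {'last_ts': last share timestamp, 'difficulty': current diff}
worker_stats = {}

def get_difficulty(worker_name):
    """Return the worker's difficulty, resetting stale/unknown workers
    back to POOL_TARGET (hypothetical helper)."""
    ts = int(time.time())
    stats = worker_stats.get(worker_name)
    if stats is None or stats['last_ts'] < ts - DB_USERCACHE_TIME:
        # Worker is new or has been idle too long: start over at pool target
        # instead of resuming a stale (possibly very low) difficulty.
        worker_stats[worker_name] = {'last_ts': ts, 'difficulty': POOL_TARGET}
    else:
        stats['last_ts'] = ts
    return worker_stats[worker_name]['difficulty']
```

The key point is that an idle gap longer than `DB_USERCACHE_TIME` makes the cached entry look stale, so the reconnecting worker starts from the pool target rather than the minimum difficulty.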
I ran this overnight and then left the worker off for 11 minutes. The worker's difficulty was reset back to POOL_TARGET correctly. I will push these changes now.
Can you open another issue for the ban? I am not sure what we would want to do for that, because it may block valid users, so it will be a bit more involved.
Great! I will try this out in the next couple days. I'll open a new ticket for issue 2. Thanks
Finally got around to testing this and it works great!
Great. I will push it to the master. Thanks!
I would like to see per-worker difficulty reset to the pool target after a worker has been idle for more than 10 minutes or so. I have my pool's minimum difficulty set to 6 to allow lower-hash-rate machines to get a few shares in on fast rounds, and that works pretty well. But if you are a higher-hash-rate user and your difficulty adjusts to, say, 350, and you disconnect for some time, then when you reconnect that worker's difficulty gets set to the minimum amount and floods the server with many low-difficulty shares until the next re-target.
I would also like to see some kind of ban and/or disconnect if a user is sending x number of rejected shares in a row. I had a couple of instances where a user was sending nothing but rejects while everybody else was hashing just fine; it must have been something messed up on their end.
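One way the reject cutoff could work is a per-worker streak counter that trips after N consecutive rejects. This is only a sketch of the idea; the `REJECT_LIMIT` value and the `record_share` hook are hypothetical and not part of the existing pool code:

```python
# Hypothetical consecutive-reject cutoff (threshold value is an assumption).
REJECT_LIMIT = 50

# worker_name -> number of rejected shares in a row
reject_streaks = {}

def record_share(worker_name, accepted):
    """Track consecutive rejects per worker; return True when the worker
    has hit REJECT_LIMIT rejects in a row and should be disconnected."""
    if accepted:
        # Any accepted share breaks the streak.
        reject_streaks[worker_name] = 0
        return False
    reject_streaks[worker_name] = reject_streaks.get(worker_name, 0) + 1
    return reject_streaks[worker_name] >= REJECT_LIMIT
```

Counting only *consecutive* rejects (and resetting on any accepted share) is what keeps this from penalizing valid users who occasionally submit a stale share, which is the concern raised above.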