Closed — bmino closed this issue 5 years ago
One idea is to maintain the entirety of the depth cache locally, but only pass a subset of the cache to the calculation portion. This will hopefully keep calculation times down (by limiting memory usage) while still allowing the full depth cache to be used for calculation, or to error out with an insufficient-depth error as per the configuration
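A minimal sketch of that idea, assuming the depth cache is stored as an object mapping price strings to quantities per side (the function name `trimDepth` and the shape of `depthCache` are my assumptions, not the project's actual code):

```javascript
// Return a shallow copy of the depth cache containing only the best
// `maxDepth` levels per side. The full cache is left untouched; only
// this trimmed copy is handed to the calculation cycle.
function trimDepth(depthCache, maxDepth) {
    const trimSide = (book, comparator) =>
        Object.fromEntries(
            Object.entries(book)
                .sort(([a], [b]) => comparator(parseFloat(a), parseFloat(b)))
                .slice(0, maxDepth)
        );
    return {
        bids: trimSide(depthCache.bids, (a, b) => b - a), // highest bids first
        asks: trimSide(depthCache.asks, (a, b) => a - b)  // lowest asks first
    };
}
```

The point of copying rather than mutating is that the websocket updates keep landing on the full local cache, so no information is permanently lost.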
Is this why it keeps losing a connection to the API every 5-15 minutes?
Can you give a head start on where this code block lives?
Naw, that's something else, likely network related. Without pruning I see calculation cycle times increase, but the websockets don't reconnect more frequently in either approach.
Calculation time is currently measured in Main.js and captures the time taken to run the calculations in CalculationNode.js
I'm working on a few solutions in the develop branch currently
Fixed in version 5.2.0 by trimming the depth cache data fed to the calculation cycle. Hard pruning can be re-enabled with the DEPTH.PRUNE flag, which will revert to the behavior found in v5.1.1 and prior
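The two behaviors might be gated roughly like the following sketch. The CONFIG shape (including a `DEPTH.SIZE` cutoff) and the function name are assumptions for illustration, not the project's actual code; each side of the cache is assumed to be an array of levels sorted best-first:

```javascript
// Hypothetical sketch: choose between hard pruning (v5.1.1 behavior)
// and handing the calculation cycle a trimmed copy (v5.2.0 behavior).
function prepareDepthForCalculation(depthCache, CONFIG) {
    const size = CONFIG.DEPTH.SIZE;          // assumed cutoff config value
    const trimSide = (levels) => levels.slice(0, size);
    if (CONFIG.DEPTH.PRUNE) {
        // Hard prune: mutate the shared cache itself, discarding deep levels
        depthCache.bids = trimSide(depthCache.bids);
        depthCache.asks = trimSide(depthCache.asks);
        return depthCache;
    }
    // Soft trim: the cache stays whole; calculations see a trimmed copy
    return {
        bids: trimSide(depthCache.bids),
        asks: trimSide(depthCache.asks)
    };
}
```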
After removing the depth pruning functionality in release 5.1.2, running the bot with a "trace" log level reveals that calculation times continue to increase over time. The average depth cache size also increases over time, and I suspect this is what is causing the issue.
Upon initial investigation, I think the worst case scenario is that a depth level is pruned (removed) and a future calculation computes a higher cost or lower sell price for an asset. That would be favorable, but nonetheless inaccurate, which is why pruning was removed
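A toy numeric illustration of that worst case (the numbers and the `proceedsFromSale` helper are made up for this example): selling 10 units into a bid book, once with all levels present and once with the best level pruned away.

```javascript
// Walk a bid book best-first and sum the proceeds of a market sale.
// bids: [[price, qty], ...] sorted highest price first.
function proceedsFromSale(bids, qtyToSell) {
    let remaining = qtyToSell;
    let proceeds = 0;
    for (const [price, qty] of bids) {
        const filled = Math.min(remaining, qty);
        proceeds += filled * price;
        remaining -= filled;
        if (remaining === 0) break;
    }
    return proceeds;
}

const fullBook   = [[100.0, 5], [99.5, 10]];
const prunedBook = [[99.5, 10]];  // the 100.0 level was pruned away

proceedsFromSale(fullBook, 10);   // 100*5 + 99.5*5 = 997.5
proceedsFromSale(prunedBook, 10); // 99.5*10 = 995
```

The pruned book predicts lower proceeds (995 vs 997.5), so a trade judged profitable against it would do at least as well in reality: favorable, but an inaccurate picture of the book.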