Thanks @bobobo1618 - we've looked into the RAM usage in the past and it's realistic. Each slice can be up to 1MB to store; we can be requesting hundreds or thousands of strike-expiration combinations, and each one's data is cached in RAM awaiting the synchronization thread. Finally, in the cloud it runs on Mono, which isn't as efficient as .NET on Windows, so there's maybe 50% overhead there. You might try narrowing your strike filter.
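For example, a narrower universe set in `Initialize()` might look like this (a minimal sketch; the ticker, dates, and strike/expiry bounds are placeholders, not taken from the failing strategy):

```csharp
using System;
using QuantConnect.Algorithm;

public class NarrowOptionFilter : QCAlgorithm
{
    public override void Initialize()
    {
        SetStartDate(2015, 12, 24);
        SetEndDate(2015, 12, 31);
        SetCash(100000);

        var option = AddOption("GOOG");

        // Only request contracts within ±2 strikes of the underlying
        // price and expiring inside 30 days. Fewer contracts per time
        // slice means far less data cached in RAM.
        option.SetFilter(u => u
            .Strikes(-2, +2)
            .Expiration(TimeSpan.Zero, TimeSpan.FromDays(30)));
    }
}
```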
It's a real pain; I wish we could make it more efficient. Suggestions welcome; otherwise I'll close this, as we've already reviewed it and it's just a factor of backtesting options.
This would be a pretty big chunk of work, but what about putting the data in a shared queryable database (like MSSQL or PostgreSQL), so that filters can be run before the data is loaded into memory by LEAN?
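Roughly what I have in mind, as a sketch against a hypothetical `option_quotes` table (the schema, connection string, and values are all illustrative, not an existing LEAN feature):

```csharp
using System;
using Npgsql;

class PreFilterExample
{
    static void Main()
    {
        // Hypothetical connection; the point is that the strike/expiry
        // filter runs in the database, before anything reaches LEAN.
        using (var conn = new NpgsqlConnection("Host=localhost;Database=optiondata"))
        {
            conn.Open();
            var sql = @"SELECT symbol, time, bid, ask
                          FROM option_quotes
                         WHERE underlying = @underlying
                           AND strike BETWEEN @minStrike AND @maxStrike
                           AND expiry <= @maxExpiry";
            using (var cmd = new NpgsqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("underlying", "GOOG");
                cmd.Parameters.AddWithValue("minStrike", 740m);
                cmd.Parameters.AddWithValue("maxStrike", 760m);
                cmd.Parameters.AddWithValue("maxExpiry", new DateTime(2016, 1, 15));

                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Only rows passing the filter are ever
                        // materialized, instead of loading every
                        // contract into memory and filtering after.
                    }
                }
            }
        }
    }
}
```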
Also, have you looked into running on .NET Core rather than Mono?
The filters are run intraday, so when the price moves, if you've selected ±2 strikes from the market price, the data we pull in will also change intraday. The filtering itself is not that intensive, as it only operates on a list of 100-500 strike prices; the intensive part is reading in the data for those 100-500 contracts.
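To illustrate why that step is cheap, a minimal sketch of the kind of ±N-strike selection involved (names are illustrative, not LEAN internals):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class StrikeFilter
{
    // Assumes allStrikes is sorted ascending and non-empty.
    public static List<decimal> Filter(List<decimal> allStrikes, decimal underlyingPrice, int range)
    {
        // Find the index of the strike closest to the market price.
        var atm = allStrikes
            .Select((strike, index) => new { strike, index })
            .OrderBy(x => Math.Abs(x.strike - underlyingPrice))
            .First().index;

        // Selecting ±range around it is a trivial pass over a few
        // hundred values; the expensive part is reading the quote
        // data for the contracts that survive.
        var lo = Math.Max(0, atm - range);
        var hi = Math.Min(allStrikes.Count - 1, atm + range);
        return allStrikes.GetRange(lo, hi - lo + 1);
    }
}
```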
We did look at Core, but it requires dropping back to .NET 2.0 and it's still too beta. Once our library dependencies fully support it, we can revisit.
Running this strategy on QuantConnect.com fails after the process consumes around 2GB of memory. It seems to me that it shouldn't need that much, since it should only be storing information per day and the algorithm itself keeps very little state.