maxholloway / Backtest.jl

Very simple event-based backtesting platform.

Partition Testing #2

Open maxholloway opened 4 years ago

maxholloway commented 4 years ago

Partition Testing

Background

Suppose I make an algorithm and I want to test how well it performs. One approach would be to simply run the algorithm in a single pass over all of my data; this is the default for platforms like Quantopian or Backtrader. Another approach would be to first partition the data set into n sub-data sets, then perform the backtest on each of those n sub-data sets separately.

Benefits of Partitioning

  1. It allows for a tighter confidence interval around performance. In general, we aren't trying to find a strategy that performs incredibly well over one particular 5-year period. While that would be nice, there just isn't enough data to form a solid confidence interval around how the algo would perform when deployed (we probably have only one relevant 5-year sample). There is thus a trade-off between long-term testing and test confidence: the longer each test is, the fewer sub-tests we can partition our data into, and the less tight a confidence interval we can form around our mean performance (or any other analysis of the distribution of sub-test outcomes).
  2. Speed. Since all of the sub-tests run independently of each other (and each would presumably take some time), we could run them in parallel; see the sketch after this list. For example, if you have 8 CPU cores, you may be able to run the tests in just over 1/8 of the time a full test would take. That would be nice!
  3. Easy integration with train-validation-test splitting. In the process of making this feature, we would also create generic code for partitioning a data set by DateTime. Once this is in place, we can seamlessly use the partitioning functionality to make a train-validation-test partitioning of our data set.
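
As a minimal sketch of the parallel fan-out in benefit 2 (runinparallel, runsubtest, and partitions are hypothetical names, not existing API):

```julia
# Sketch only: fan independent sub-tests out across threads.
# `runsubtest` stands in for the eventual single-partition runner,
# and `partitions` for whatever per-partition inputs it takes.
function runinparallel(runsubtest::Function, partitions::AbstractVector)
    results = Vector{Any}(undef, length(partitions))
    Threads.@threads for i in eachindex(partitions)
        # Sub-tests share no state, so each can safely run on its own thread.
        results[i] = runsubtest(partitions[i])
    end
    return results
end
```

Started with, e.g., `julia -t 8`, a loop like this would spread the sub-tests over all 8 cores.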

Interface

Options

Option 1

Since this is an entirely new type of test, it may be best to make this a new run method. The first version of run handles one continuous block of time; however, it might be nice to overload this function with additional run methods. The new function could have one (or many!) of the following signatures:

  1. run(::StrategyOptions, ::Vector{Dates.DateTime}) [cutoffs between partitions]
  2. run(::StrategyOptions, ::Integer) [number of partitions]
  3. run(::StrategyOptions, ::Real, ::Real) [percentages, each in [0, 100], of the data set to be used for train and validation, respectively]
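
A minimal sketch of those methods, assuming only that run(::StrategyOptions) already exists; the bodies are placeholders, not an implementation:

```julia
import Dates

# Signature 1 (sketch): the caller supplies the cutoffs between partitions.
function run(opts::StrategyOptions, cutoffs::Vector{Dates.DateTime})
    # split the data at each cutoff, then run each block via run(::StrategyOptions)
end

# Signature 2 (sketch): split into n partitions of equal duration.
function run(opts::StrategyOptions, npartitions::Integer)
    # derive cutoffs from the data's start and end times, then call the method above
end

# Signature 3 (sketch): train and validation percentages in [0, 100];
# whatever remains is the test set.
function run(opts::StrategyOptions, trainpct::Real, validationpct::Real)
    # derive two cutoffs from the percentages, then call the first method
end
```

Because Julia dispatches on argument types, all three can coexist as methods of the same run function without touching the original run(::StrategyOptions).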

Option 2

Add various arguments to StrategyOptions that achieve the desired sampling. These arguments may include downsample, train, validation, partitionproportions, etc.
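
For contrast, a rough sketch of what Option 2 might look like; every field name below is illustrative, not an existing part of StrategyOptions:

```julia
import Dates

# Illustrative only: StrategyOptions grown to carry partitioning settings,
# where `nothing` means "feature not in use".
struct StrategyOptions
    # ... all of the existing fields ...
    numpartitions::Union{Int, Nothing}
    partitioncutoffs::Union{Vector{Dates.DateTime}, Nothing}
    trainpct::Union{Float64, Nothing}       # percent of data for training, in [0, 100]
    validationpct::Union{Float64, Nothing}  # percent of data for validation, in [0, 100]
end
```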

Preferences

I would prefer to see Option 1, for the following reasons:

  1. Separation of concerns. If we have a working version of run(::StrategyOptions) (the original rendition of the backtest runner), then we can add new run methods without needing to re-test run(::StrategyOptions).
  2. Argument parsing & extensibility. If we add the optional partitioning arguments, then we'd need logic inside our original run(::StrategyOptions) function to determine what type of backtest we're running. This can lead to a lot of spaghetti code if we decide to add more partitioning features.

Implementation (assuming we go with Interface Option 1)

Mid-Level Steps (for any type of partitioning function)

  1. Calculate the start and end time for all of the partitions (or potentially receive this as input).
  2. Re-generate datareaders so that the number of datareader copies is the same as the number of partitions.
  3. Adjust StrategyOptions as necessary.
  4. Invoke run(::StrategyOptions) for each of the StrategyOptions objects.
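
A rough sketch of those steps, assuming we split into n equal-duration partitions; apart from run(::StrategyOptions), every name here (partitionbounds, withwindow, the starttime/endtime fields) is a hypothetical placeholder:

```julia
import Dates

# Step 1 (sketch): equal-duration (start, end) bounds for n partitions.
function partitionbounds(starttime::Dates.DateTime, endtime::Dates.DateTime, n::Integer)
    step = (endtime - starttime) ÷ n
    return [(starttime + (i - 1) * step, starttime + i * step) for i in 1:n]
end

function runpartitioned(opts::StrategyOptions, n::Integer)
    bounds = partitionbounds(opts.starttime, opts.endtime, n)  # assumes these fields exist
    results = []
    for (s, e) in bounds
        # Steps 2 & 3 (sketch): copy opts with the new time window and freshly
        # generated datareaders covering only that window.
        subopts = withwindow(opts, s, e)
        # Step 4: the original single-block runner does the rest. This loop is
        # embarrassingly parallel (see the sketch under "Benefits of Partitioning").
        push!(results, run(subopts))
    end
    return results
end
```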

Questions and Concerns

  1. If we re-generate all of the datareaders, will there be issues with holding every partition's data in memory at once with DataReaders.InMemoryDataReader? If so, how should we adjust our approach as we use more data or create more partitions?