This can surface bottlenecks to tackle first, such as:
Does exposing the data take too much time?
Is the trading logic of the template the biggest bottleneck?
What is the performance achieved under similar logic using Zipline?
We should be at least as fast as Zipline.
What about running the same thing in Zorro? Matching its performance would be ideal, as that would put our speed on par with C.
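To answer questions like these, a first pass with Python's built-in profiler can attribute time to data access versus trading logic. The sketch below is hypothetical: `run_backtest` is a toy stand-in for the real template, used only so the profile has something to measure.

```python
import cProfile
import io
import pstats


def run_backtest(prices):
    # Hypothetical stand-in for the real template: a toy moving-average
    # crossover loop, so the profiler has realistic work to attribute.
    window, cash, position = 5, 10_000.0, 0
    for i in range(window, len(prices)):
        avg = sum(prices[i - window:i]) / window
        if prices[i] > avg and position == 0:
            position, cash = 1, cash - prices[i]
        elif prices[i] < avg and position == 1:
            position, cash = 0, cash + prices[i]
    return cash


def profile_backtest():
    """Profile one backtest run and return the top hotspots as text."""
    prices = [100 + (i % 7) - 3 for i in range(10_000)]
    profiler = cProfile.Profile()
    profiler.enable()
    run_backtest(prices)
    profiler.disable()
    stream = io.StringIO()
    stats = pstats.Stats(profiler, stream=stream)
    stats.sort_stats("cumulative").print_stats(10)  # ten slowest call paths
    return stream.getvalue()


print(profile_backtest())
```

Running the same harness against an equivalent Zipline strategy would give the like-for-like comparison the questions above ask about.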
It would be great if the outcome of this ticket also produced a procedural way of running these timing benchmarks. This could even become one of the version-release steps, used to communicate improvements in computational efficiency (or to warn of performance regressions when new features add too much overhead).
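One possible shape for that procedural step is sketched below, assuming a JSON baseline file and a 10% regression tolerance (both invented names/thresholds, not decisions made in this ticket): time the workload, store the best result, and flag releases that slow down beyond the tolerance.

```python
import json
import statistics
import time
from pathlib import Path


def benchmark(fn, *args, repeats=5):
    """Time fn several times and return the median wall-clock seconds."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)


def check_regression(name, seconds, baseline_file="benchmarks.json",
                     tolerance=0.10):
    """Compare a timing against the stored baseline.

    Records the time on first run, updates the baseline on improvement,
    and reports a regression when the slowdown exceeds the tolerance.
    Returns (status, baseline_seconds).
    """
    path = Path(baseline_file)
    baselines = json.loads(path.read_text()) if path.exists() else {}
    baseline = baselines.get(name)
    if baseline is None or seconds < baseline:
        baselines[name] = seconds
        path.write_text(json.dumps(baselines, indent=2))
        return ("improved" if baseline else "recorded"), baseline
    if seconds > baseline * (1 + tolerance):
        return "regression", baseline
    return "ok", baseline


# Example with a stand-in workload; a release script would call the
# real backtest entry point here instead.
elapsed = benchmark(sum, range(1_000_000))
status, baseline = check_regression("sum_1e6", elapsed)
print(status, round(elapsed, 4))
```

Wired into the release checklist, a "regression" status would be exactly the early warning of declining performance described above.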