[Closed] developeralgo8888 closed this issue 5 years ago
@developeralgo8888, see: #26, #48
Apart from the algorithm side, the greatest speed bottleneck is actually btgym/backtrader itself: environment iteration is pure Python and therefore quite slow. To achieve a significant speedup, one would have to reimplement the backtrader engine and the btgym shell from scratch in a lower-level language like C.
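To illustrate the point about pure-Python iteration (this is a generic micro-benchmark, not actual btgym/backtrader code; the per-bar "work" here is hypothetical), compare a Python loop against the same computation vectorized in C via NumPy:

```python
import time
import numpy as np

def step_python(prices, fee=0.01):
    # Simulate trivial per-bar processing in a pure-Python loop,
    # similar in spirit to how an event-driven backtest iterates bars.
    total = 0.0
    for p in prices:
        total += p * fee
    return total

def step_numpy(prices, fee=0.01):
    # Same computation, but the loop runs in compiled C inside NumPy.
    return float(np.sum(prices * fee))

prices = np.random.default_rng(0).random(1_000_000)

t0 = time.perf_counter()
r_py = step_python(prices)
t_py = time.perf_counter() - t0

t0 = time.perf_counter()
r_np = step_numpy(prices)
t_np = time.perf_counter() - t0

print(f"python loop: {t_py:.4f}s, numpy: {t_np:.4f}s")
```

On typical hardware the interpreted loop is one to two orders of magnitude slower than the vectorized version, even though both compute the same value.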
@Kismuz, are you planning to reimplement it in pure C? I'm not sure how easy or difficult that would be, and it would probably be very time-consuming. Python has a C-flavored variant: Cython is a compiled language that generates CPython extension modules. Cython is a superset of Python, designed to give C-like performance with code that is written mostly in Python.
I believe backtrader uses optimized Cython to generate its compiled modules. Of course it will not be quite as fast as pure C, but close to it.
Since TensorFlow, backtrader, and OpenAI Gym all use some form of C-like language in the backend, it's OK for now. But the heavy lifting (training the model) needs to go on GPUs; the CPUs can't do it even with multiprocessing.
@Kismuz, once the Python code is written, you simply need to cythonize it, and it will be close to pure-C speed.
Cython code, unlike Python, must be compiled. This happens in 3 stages: (1) Cython compiles the `.pyx` file to a `.c` file; (2) a C compiler compiles the `.c` file into an extension module (a shared library); (3) Python imports the compiled module at runtime.
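As a minimal sketch of that workflow (the module name `fast_step` and its contents are hypothetical, not part of btgym or backtrader):

```cython
# fast_step.pyx -- hypothetical Cython module
# The cdef typed variables let the loop run at C speed.
def total_cost(double[:] prices, double fee):
    cdef double total = 0.0
    cdef Py_ssize_t i
    for i in range(prices.shape[0]):
        total += prices[i] * fee
    return total

# setup.py -- build script; cythonize() drives stages 1 and 2:
#
#   from setuptools import setup
#   from Cython.Build import cythonize
#   setup(ext_modules=cythonize("fast_step.pyx"))
#
# Build with:   python setup.py build_ext --inplace
# Stage 3 is then just:   import fast_step
```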
@developeralgo8888 ,
> Are you planning to reimplement it in Pure C
Actually I don't, at least not in the near future. I think of BTgym as a research-driven project, and that kind of optimisation is beyond my scope until some good core solutions are found; the current performance limitations stem from the algorithmic and math side, not from low-level iteration, I believe.
> Heavy Lifting ( training the model ) needs to go on GPU
No one underestimates GPU power; it is nice, and the current BTgym algorithms framework could be adapted to the synchronous version known as A2C, see here: https://blog.openai.com/baselines-acktr-a2c/. Still, I don't plan any GPU optimisation, simply because I don't have access to any decent GPU to run tests, debug, etc. All btgym code has been written and tested on an old i7 iMac, and I like it that way because it forces me to optimise the math instead of the threads :)
@Kismuz, please can you add GPU options so that multiprocessing works with both GPUs and CPUs? Right now it only works with CPUs. Data-parallel multi-GPU training can handle the heavy lifting on the GPUs, and the policy-gradient updates can then be applied on the CPU. That would speed things up a lot.
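For readers unfamiliar with the scheme being requested, here is a minimal NumPy sketch of synchronous data parallelism: each simulated "device" computes gradients on its own shard of a batch, and a single parameter step averages them. All names are illustrative; this is not btgym or TensorFlow code, and a real multi-GPU setup would place each shard's computation on a separate device instead of a Python loop.

```python
import numpy as np

def grad(w, x, y):
    # Gradient of mean squared error for a linear model y ~ x @ w.
    return 2.0 * x.T @ (x @ w - y) / len(y)

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = x @ true_w

w = np.zeros(4)
n_devices = 4  # simulated GPUs
for step in range(200):
    # Shard the batch across the simulated devices ("heavy lifting").
    shards = zip(np.array_split(x, n_devices), np.array_split(y, n_devices))
    grads = [grad(w, xs, ys) for xs, ys in shards]
    # Average the per-device gradients and apply one update
    # (the "update on the CPU" part of the request above).
    w -= 0.1 * np.mean(grads, axis=0)

print(np.round(w, 2))  # should approach true_w
```

With equal-sized shards, the average of per-shard gradients equals the full-batch gradient, so the parallel version follows the same optimisation trajectory as a single-device run.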