Heerozh / spectre

GPU-accelerated Factors analysis library and Backtester
GNU General Public License v3.0

engine.to_cpu() still requires the presence of gpu #3

Closed: jibanes closed this issue 4 years ago

jibanes commented 4 years ago

In the benchmark snippet published in README.md, I've replaced engine.to_cuda() with engine.to_cpu(); below is the stack trace:

THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1579022060824/work/aten/src/THC/THCGeneral.cpp line=50 error=35 : CUDA driver version is insufficient for CUDA runtime version
Traceback (most recent call last):
  File "example1.py", line 59, in <module>
    results = trading.run_backtest(loader, MyAlg, '2013-01-01', '2018-01-01')
  File "/home/jibanes/anaconda3/lib/python3.7/site-packages/spectre/trading/__init__.py", line 152, in run_backtest
    evt_mgr.run(start, end, delay_factor)
  File "/home/jibanes/anaconda3/lib/python3.7/site-packages/spectre/trading/algorithm.py", line 255, in run
    data, _ = run_engine(start, end, delay_factor)
  File "/home/jibanes/anaconda3/lib/python3.7/site-packages/spectre/trading/algorithm.py", line 162, in run_engine
    df = self._engines[name].run(start, end, delay_factor)
  File "/home/jibanes/anaconda3/lib/python3.7/site-packages/spectre/factors/engine.py", line 271, in run
    self._prepare_tensor(start, end, max_backwards)
  File "/home/jibanes/anaconda3/lib/python3.7/site-packages/spectre/factors/engine.py", line 128, in _prepare_tensor
    self._groups['asset'] = ParallelGroupBy(keys)
  File "/home/jibanes/anaconda3/lib/python3.7/site-packages/spectre/parallel/algorithmic.py", line 28, in __init__
    inverse_indices = sorted_indices.new_full((groups, width), n + 1).pin_memory()
RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at /opt/conda/conda-bld/pytorch_1579022060824/work/aten/src/THC/THCGeneral.cpp:50

Shouldn't engine.to_cpu() allow spectre to run on machines without a GPU (and/or CUDA)? Thanks.
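For context, the trace points at `pin_memory()` inside `ParallelGroupBy.__init__`: pinning allocates page-locked host memory through the CUDA driver, so it raises on a machine with no working driver even though the tensors themselves stay on the CPU. A minimal sketch of the guard such a fix would need, written against a stub tensor rather than real torch (the `FakeTensor` class and `maybe_pin` helper are hypothetical names for illustration, not spectre's actual API):

```python
class FakeTensor:
    """Stand-in for a torch tensor; models only the call seen in the trace."""
    def __init__(self, pinned=False):
        self.pinned = pinned

    def pin_memory(self):
        # Real torch pin_memory() goes through the CUDA driver and would
        # raise here on a CPU-only machine; the stub just marks the flag.
        return FakeTensor(pinned=True)


def maybe_pin(tensor, device):
    """Pin host memory only when the engine targets CUDA.

    This is the guard a fix would add around the unconditional
    pin_memory() call in ParallelGroupBy.__init__.
    """
    if device == 'cuda':
        return tensor.pin_memory()
    return tensor


cpu_t = maybe_pin(FakeTensor(), 'cpu')   # no pinning, safe without a GPU
gpu_t = maybe_pin(FakeTensor(), 'cuda')  # pinned for fast host->device copies
print(cpu_t.pinned, gpu_t.pinned)
```

In real code the condition would presumably also check `torch.cuda.is_available()`, since pinned memory only pays off when there is a device to copy to.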

Heerozh commented 4 years ago

Thanks, will be fixed in the next commit.