Closed · paxcema closed this issue 3 years ago
Seems my intuition was right: installing `torch==1.7.1+cu110` fixes the issue and I can predict normally.
Okay, weirdly enough, the error arises only the first time one queries the model. Afterwards, predictions work just fine.
The issue has to do with the way Lightwood detects whether there is a usable GPU. Somehow, doing `torch.ones(1).cuda()` is not always enough to trigger a runtime error, as in this case. Doing `torch.ones(1).cuda().__repr__()` works: it triggers the `RuntimeError` that signals to Lightwood that there is no GPU, and solves the issue.
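The reason forcing `__repr__` matters is that CUDA work is queued asynchronously, so the error from a broken setup may only surface when the tensor's values are actually materialized. A minimal sketch of the detection pattern, using a toy `LazyDeviceTensor` class (hypothetical, not part of PyTorch or Lightwood) to stand in for a CUDA tensor so it runs without a GPU:

```python
class LazyDeviceTensor:
    """Toy stand-in for a CUDA tensor: the device error is recorded at
    creation but only raised when the values are materialized, e.g. by
    __repr__ (mimicking asynchronous CUDA execution)."""

    def __init__(self, device_ok: bool):
        # No error here, even on a "broken" device -- it is deferred.
        self._device_ok = device_ok

    def __repr__(self) -> str:
        # Materializing the values finally surfaces the deferred error.
        if not self._device_ok:
            raise RuntimeError("CUDA error: no kernel image is available")
        return "tensor([1.])"


def gpu_is_usable(make_tensor) -> bool:
    """Mirror of the fix: force materialization with __repr__ so a broken
    CUDA setup raises RuntimeError here instead of failing at predict time."""
    try:
        make_tensor().__repr__()  # creating the tensor alone may not raise
        return True
    except RuntimeError:
        return False
```

With real PyTorch, `make_tensor` would be `lambda: torch.ones(1).cuda()`; the toy class just makes the deferred-error behavior reproducible on CPU.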
Your Environment
`use_gpu = False` in studio

Please describe your issue and how we can replicate it
Predictor successfully trains on a ClickHouse datasource with the `metro_traffic_ts` dataset and standard time series options (order by `date`, `window = 5`, `use_previous_target = True`). However, when trying a single query from the GUI (based on the first row of the testing data split), prediction fails with the following stack trace:
Somehow, the model attribute inside the conformal predictor object stayed as `None`. The ICP `.pickle` file exists, so I suspect it's related to the CUDA error warning.
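One way this state can arise, sketched below with a hypothetical `ConformalWrapper` class (not Lightwood's actual code): if a CUDA `RuntimeError` raised while moving the deserialized model to the GPU is swallowed, the pickle file loads fine but the `model` attribute is never assigned and stays `None`.

```python
import pickle


class ConformalWrapper:
    """Hypothetical sketch of a loader that can leave `model` as None
    when a CUDA error interrupts loading and is silently swallowed."""

    def __init__(self):
        self.model = None

    def load(self, blob: bytes, move_to_device) -> None:
        try:
            obj = pickle.loads(blob)   # succeeds: the .pickle file is fine
            move_to_device(obj)        # raises RuntimeError on a broken GPU
            self.model = obj           # never reached -> stays None
        except RuntimeError:
            pass                       # error swallowed, no trace of failure


def broken_cuda_move(model):
    # Simulates e.g. model.cuda() on a mismatched CUDA install.
    raise RuntimeError("CUDA error: no kernel image is available")
```

This matches the symptom reported above: the serialized ICP exists on disk, yet the in-memory object ends up with `model is None` and the first query fails.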