Thanks for reporting this, @rohitgpt178! At present, the FIL backend supports only FP32 input and output, but it is troubling that attempting to use FP64 caused a crash. We'll look into it and update the docs to make it clear that FP32 is required.
In terms of your specific use case, are you concerned about degraded accuracy when using FP32, are you looking for smoother integration with other models in a pipeline that use FP64, or is it something else entirely? If the concern is degraded accuracy, FIL internally converts everything to single precision regardless, so genuinely taking advantage of FP64 would require a more significant change, and I doubt you would see a practical difference anyway.
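In the meantime, casting to FP32 on the client side should sidestep the crash. Here's a rough sketch using the Python HTTP client; the model name, tensor names, and feature count are placeholders for whatever your config actually uses:

```python
import numpy as np
import tritonclient.http as triton_http

client = triton_http.InferenceServerClient(url="localhost:8000")

# Data may arrive as float64, but the FIL backend currently expects FP32,
# so cast before building the request.
data = np.random.rand(4, 13).astype(np.float64)
infer_input = triton_http.InferInput("input__0", list(data.shape), "FP32")
infer_input.set_data_from_numpy(data.astype(np.float32))

result = client.infer("xgboost_regression", [infer_input])
print(result.as_numpy("output__0"))
```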
Got it @wphicks. We are fine with using FP32 for our use case as far as accuracy is concerned, though smoother integration with FP64 might be helpful. I just found that the server was crashing and thought I'd report it. Also, I was not aware that FIL converts data to single precision, so thanks for letting me know.
Yes, thank you very much for reporting the crash! We definitely want to get that fixed, just figuring out what the best solution looks like for you. We'll get to work on it!
There's now a PR to fix the server crash (#119) and two new issues to explicitly support non-FP32 I/O in the future (#121 and #122). Once we merge #119, we'll close this issue and follow up on any problems related to I/O type conversion in those new issues.
Cool, thanks @wphicks!
I am encountering the following error when I try to use the FP64 data type for input to an XGBoost regression model on the FIL backend (using perf_analyzer to send requests).
Input/output config:
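A minimal config.pbtxt sketch of the combination that crashes (the model name, tensor names, and dims here are illustrative, not the exact values from my setup):

```
name: "xgboost_regression"
backend: "fil"
max_batch_size: 32768
input [
  {
    name: "input__0"
    data_type: TYPE_FP64  # FP64 input is what triggers the crash
    dims: [ 13 ]
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
```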
Keeping both input and output as FP32, or input FP32 with output FP64, works fine.
Also, I am able to use numpy.float64 input data with the same model through FIL in the rapidsai container.
The model is an example xgboost regression model generated from here.
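For comparison, this is roughly what works for me in the rapidsai container (the model path and feature count are illustrative):

```python
import numpy as np
from cuml import ForestInference

# Load the serialized XGBoost regression model (path is illustrative)
fm = ForestInference.load("./xgboost.model", model_type="xgboost")

# float64 input is accepted here without any crash
X = np.random.rand(4, 13).astype(np.float64)
preds = fm.predict(X)
print(preds)
```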