tensorflow / decision-forests

A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models in Keras.
Apache License 2.0

Please support GPU #38

Open Howard-ll opened 3 years ago

Howard-ll commented 3 years ago

Background My TensorFlow code runs on GPU. It includes matrix operations that are fast on GPU. When it is combined with TF-DF, the data must be downloaded from the GPU for classification and uploaded back afterwards. In terms of throughput, this is a significant loss.

Feature Request Please support GPU, especially for inference via the predict function. Training can take time, since a user may try various configurations to find the best one; that is understandable. However, applying the trained model must meet the runtime requirements.

shayansadeghieh commented 2 years ago

Hi there, following up on the comment above, I was just curious whether GPU support will be implemented in the near future?

Thank you!

janpfeifer commented 2 years ago

hi @shayansadeghieh, while we would also very much love to have it, our high-priority bucket list is still very full :( so we will likely not work on this in the near future from our side. Accelerators (GPU, TPU, etc.) are on our TODO list, though.

While inference would be simpler to do, leveraging GPUs/TPUs for training would be much harder. Notice that DF algorithms don't do many floating-point operations (other than computing the scores at each level of the tree). Inference could be accelerated more easily, though -- we did a draft in the past.
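To illustrate why this workload maps awkwardly onto GPU hardware, here is a toy sketch (plain Python, not TF-DF code; the node encoding is a hypothetical one chosen for brevity): tree inference is dominated by data-dependent comparisons and index chasing, with essentially no dense floating-point math to vectorize.

```python
# Toy decision-forest inference. Each internal node is a tuple
# (feature_index, threshold, left_child, right_child); a negative child
# index ~k marks leaf number k. Note the hot loop is one comparison and
# one branch per tree level -- no matrix operations at all.

def predict_one(nodes, leaf_values, x):
    """Walk a single tree down to a leaf and return its value."""
    i = 0
    while i >= 0:  # non-negative indices are internal nodes
        feat, thresh, left, right = nodes[i]
        i = left if x[feat] < thresh else right
    return leaf_values[~i]  # ~i decodes the leaf number

def predict_forest(forest, x):
    """Average the leaf values over all trees in the forest."""
    return sum(predict_one(n, lv, x) for n, lv in forest) / len(forest)

# A tiny two-level tree: split on feature 0, then on feature 1.
nodes = [(0, 0.5, ~0, 1), (1, 0.3, ~1, ~2)]
leaf_values = [0.0, 0.25, 1.0]
forest = [(nodes, leaf_values)]
```

The divergent, per-example control flow in `predict_one` is exactly what SIMD-style accelerators dislike; fast CPU engines instead restructure the trees (e.g. flattened layouts) rather than add floating-point throughput.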

Maybe some interested developer would like to contribute it?

shayansadeghieh commented 2 years ago

Hi @janpfeifer, thank you for the quick response. No worries that it is not on your high-priority list; I was just curious. Do you by any chance have a link to the draft you previously did for inference?

janpfeifer commented 2 years ago

I don't have a link because we never open-sourced that more experimental code. Let me check whether it would be easy to make it visible.

Btw, notice that the CPU implementation can be really fast, depending on the inference engine used.

rstz commented 1 year ago

Hi everyone, just wanted to share some tangentially related info: while there is still no GPU implementation for TF-DF, TF-DF models can now run on FPGAs for really fast inference through the Conifer project. This is still very much experimental, so feel free to contact us if it is relevant for you.