Open v-a-s-a opened 4 years ago
@v-a-s-a I think GPU support should actually not be too difficult, and almost work out of the box. The main issue is, e.g. in Torch, that new tensors created are not automatically moved to the GPU, but this can of course be fixed. I will look into this over the next week!
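To make the issue concrete, here is a minimal sketch of what "not automatically moved to the GPU" means in PyTorch. The device selection and tensor shapes are illustrative assumptions, not code from stheno or lab:

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors are created on the CPU by default and must be moved explicitly.
x = torch.randn(8000, 2)   # created on the CPU
x = x.to(device)           # explicit move to the chosen device

# New tensors created mid-computation do NOT follow automatically:
y = torch.ones(8000, 2)    # created on the CPU again
z = x + y.to(device)       # works only after y is also moved
```

Any backend-internal code that constructs fresh tensors would need this kind of device handling for GPU support to work transparently.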
That sounds fantastic! Let me know if there is any testing/benchmarking to be done. I'm curious if there are any appreciable gains to be had.
@v-a-s-a It turns out to be a busy time right now; I might need a few weeks to sort this out. I'll keep you up to date.
Sounds great @wesselb. I was poking around and wondering how easy this would be to set up. GPU support isn't strictly necessary for my needs here. Feel free to close/reopen as you see fit.
I'm fitting a mid-sized time series model: two observations at each of 8k time points. There are some optimizations already available in GPAR; e.g., this model can be fit with ~6k inducing points under some reasonable assumptions.
In the interest of performance, is it possible/advisable to try to enable the GPU on the wrapped torch or tensorflow backends? The stheno, lab, and plum-dispatch dependencies don't mention GPU support, but the flexibility they provide is tantalizing.