williamstein opened 7 years ago
I would be very interested in such an opportunity! While waiting, I modified the CoCalc Docker image to enable the Nvidia driver and CUDA libraries so I could run it on my GPU-equipped desktop using nvidia-docker. It works as expected, and I can even use GPU-accelerated Python libraries from Jupyter. If anyone is interested, I uploaded my Dockerfile to GitHub.
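A simple way to confirm that the container actually sees the GPU (a hypothetical sanity check, not part of the Dockerfile above) is to call `nvidia-smi` from Python inside Jupyter:

```python
import subprocess

def cuda_visible():
    """Return True if the NVIDIA driver is reachable from this
    environment, i.e. `nvidia-smi` exists and exits cleanly."""
    try:
        result = subprocess.run(
            ["nvidia-smi"], capture_output=True, text=True, timeout=10
        )
        return result.returncode == 0
    except (FileNotFoundError, subprocess.TimeoutExpired):
        # nvidia-smi not installed (plain CoCalc image) or hung
        return False

print("GPU visible:", cuda_visible())
```

On the stock cocalc-docker image this prints `GPU visible: False`; on the nvidia-docker variant it should print `True`.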
@ktaletsk cool, I've added a link at the bottom of https://github.com/sagemathinc/cocalc-docker#links
Maybe this should somehow be merged, e.g. by having two Dockerfiles? Anyway, thanks for sharing — I'm sure some people will be interested in this.
@haraldschilly thanks. I tried merging CoCalc and the standard Nvidia CUDA package into one image, but compiling took so long that I decided to just build on top of the compiled CoCalc container. Also, there was a mismatch in Ubuntu versions at the time, which no longer exists.
Several people asked about this at the JMM booth as well.
@ktaletsk Thanks very much for this. After upgrading to docker-ce I built it with no problems. You wrote: "use GPU-accelerated Python libraries from Jupyter". I would be very interested in seeing some example worksheets. Is there anything that might serve as a useful benchmark?
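As a rough benchmark sketch (not from this thread — `cupy` is an assumption here; any GPU array library with a NumPy-compatible API would work the same way), one could time a matrix multiply on CPU versus GPU from a Jupyter notebook:

```python
import time
import numpy as np

def matmul_benchmark(xp, n=512, repeats=3):
    """Best-of-`repeats` wall time for an n x n float32 matrix
    multiply using the given array module (numpy or cupy)."""
    a = xp.random.rand(n, n).astype(xp.float32)
    b = xp.random.rand(n, n).astype(xp.float32)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        _ = a @ b
        # cupy launches kernels asynchronously; wait for completion
        if hasattr(xp, "cuda"):
            xp.cuda.Stream.null.synchronize()
        best = min(best, time.perf_counter() - start)
    return best

cpu_time = matmul_benchmark(np)
print(f"CPU (numpy): {cpu_time:.4f} s")

try:
    import cupy as cp  # only present on the CUDA-enabled image
    gpu_time = matmul_benchmark(cp)
    print(f"GPU (cupy):  {gpu_time:.4f} s  "
          f"(speedup {cpu_time / gpu_time:.1f}x)")
except ImportError:
    print("cupy not installed; skipping GPU run")
```

At larger `n` the GPU advantage should grow substantially, which makes this a reasonable first smoke test for the nvidia-docker setup.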
Basically, customers should be able to pay extra to run their project with access to a GPU for a certain amount of time. Google Compute Engine fully supports this, so it's doable.
This is along the same lines as other special compute resources, e.g., access to Google TPUs, machine-learning APIs, etc.
REQUESTED BY: Mathieu Dumas