Closed: InonS closed this issue 7 years ago
You need a device with Device Type set to GPU. Yours says CPU?
It says: CL_DEVICE_TYPE_CPU ...
Oh. You're the guy that posted a LinkedIn message that was "like"d by thousands :)
Welcome :)
Ok, so, modern Intel CPUs often contain GPUs: not the 'CPU' bit of the CPU itself, but an extra component, inside the CPU package. It's a bit confusing :-P
So, it turns out that your CPU does in fact have a GPU inside it, an HD4600: https://ark.intel.com/products/78930/Intel-Core-i7-4710HQ-Processor-6M-Cache-up-to-3_50-GHz
Then the next question is: does your HD4600 GPU, inside your 4710HQ CPU, support OpenCL? The page above doesn't say. But it sounds modernish: I used to have an HD4000, and that supported OpenCL, so let's see...
Googling for 'wikipedia hd', we get https://en.wikipedia.org/wiki/Intel_HD_and_Iris_Graphics#Capabilities . This shows that Haswell CPUs have OpenCL 1.2-capable GPUs.
At this point, I conclude: you're missing the driver :) . You probably need to install a driver from the Intel website, e.g. something like ... hmmm ... I can only find one for Windows: https://downloadcenter.intel.com/product/97501/Graphics-for-5th-Generation-Intel-Processors . Oh right: you need Beignet :)
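For reference, a minimal sketch of what installing Beignet and re-checking might look like on Ubuntu. The package names are an assumption (they vary by release), and the `grep` check below runs against simulated `clinfo` output rather than a real device:

```shell
# Install Beignet, Intel's open-source OpenCL driver for its integrated GPUs
# (assumed Ubuntu package names; they vary by release):
#   sudo apt-get install beignet-opencl-icd clinfo

# Once the driver is in place, `clinfo` should list a device whose type is
# CL_DEVICE_TYPE_GPU, not just CL_DEVICE_TYPE_CPU.
# Simulated check against example clinfo output (real command: `clinfo`):
sample_output='Device Name   Intel(R) HD Graphics Haswell
Device Type   CL_DEVICE_TYPE_GPU'
if printf '%s\n' "$sample_output" | grep -q 'CL_DEVICE_TYPE_GPU'; then
  echo "OpenCL GPU device visible"
else
  echo "still CPU-only: driver not picked up"
fi
```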
(Note that you're very late to the party though; I'm working on other things at the moment; meanwhile tf-coriander is missing a cudnn replacement. I started working on one at https://github.com/hughperkins/coriander-dnn , but never quite got round to plugging it in to tf-coriander. I don't think it's tons of work. If I did have a moment, it'd probably take me ~40-80 hours. But of course if someone else does it, they have to learn everything from scratch, so it could easily be 4-8 times that.
If you can find someone who could be interested in helping with that, I'm happy to assist them with knowledge acquisition, meet them in Hangouts etc.
The impact of not having cudnn currently is that convolutions run on CPU (not the GPU part of the CPU, the CPU bit). )
Wow, thanks Hugh!
Yeah, the response to that post surprised me too. I was so upset that TensorFlow doesn't support an open standard for GPGPU!
As for my particular case, I do have an AMD GPU in addition to the Intel one which is built into my motherboard. It's a Radeon R9 M265X. Pretty low-scale by today's standards, but it supports OpenCL, and it's frustrating that I can't use it with TensorFlow out of the box (as opposed to an NVIDIA card, I expect). I'm not even sure how much help it would be, considering I went and paid for a quad-core CPU.
I guess I need to re-check my driver installation. I tried using Ubuntu-based Docker containers, which is where I got the stacktrace and clinfo output in my original post. Maybe I should address AMD support?
> Yeah, the response to that post surprised me too. I was so upset that TensorFlow doesn't support an open standard for GPGPU!
:)
Yes, impressive to write something that went so viral :)
> I guess I need to re-check my driver installation. I tried using Ubuntu-based Docker containers, which is where I got the stacktrace and clinfo output in my original post.
Docker is not very OpenCL/GPU friendly. Docker does work ... with NVIDIA GPUs :-P . I'm not saying Docker can't be tweaked to work with AMD GPUs, but I've never heard of that being done. For NVIDIA GPUs, you need some special additional drivers, e.g. https://github.com/NVIDIA/nvidia-docker , or at least pass the drivers through, using the --device option to Docker, like https://hub.docker.com/r/hughperkins/cltorch-nvidia/ . But I've never heard of this being possible for AMD GPUs.
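For what it's worth, a sketch of what the --device approach might look like for a non-NVIDIA GPU. This is an untested assumption: on Linux, Intel and AMD GPUs are exposed under /dev/dri via DRM, so passing that node through might make the device visible to clinfo inside the container. The snippet just builds and prints the command, so it can be inspected before actually running it:

```shell
# Hypothetical: pass the host's DRM device nodes (/dev/dri, where Linux
# exposes Intel/AMD GPUs) through to the container. Untested with AMD;
# the in-container OpenCL ICD/driver would still need to be installed.
gpu_flags="--device=/dev/dri:/dev/dri"
docker_cmd="docker run -it $gpu_flags ubuntu:16.04 clinfo"
echo "$docker_cmd"
```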
Your easiest options for AMD GPUs will probably be to use the AMD GPU directly from your OS, so one of:
If it was me, well ... so ... I got into OpenCL because I had a laptop with an Intel CPU, with an HD4000 inside, and I thought it was so cool that the CPU had a GPU inside, and wanted to play, and of course it won't work with CUDA, so I wrote https://github.com/hughperkins/DeepCL from scratch, incrementally, over ~6 months, so that I could play with using the HD4000 GPU :)
Later on though, I found that the Intel GPU, whilst fun, is not something I'd ever train an ML model on: AWS works well for that, or at least an NVIDIA GPU. There are no AMD cloud-enabled GPUs around that I can find.
Currently, I think that whilst it'd be good to have competition for NVIDIA GPUs, to keep them on their toes, I'm not sure that AMD will be that competition, at least not in a big way. I think that something like the Nervana TPUs might be a more realistic competition, possibly? https://cloud.google.com/blog/big-data/2017/05/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu
Agreed. On all points ;-)
I'm not sure if this is a "supported architectures" issue, or if there are more details I should give. What do you think?