I am trying to use your code to reproduce the results, but I have hit a bit of a brick wall. I managed to deploy the application on a CPU-only system, where I get around 2 to 3 it/s.
Now I am trying to use a GPU (Quadro P4000), but I get no speed-up at all: I stay at 1 it/s while the GPU runs at full power, and after a while I get an out-of-memory error (the card has 8 GB).
Is this something you also encountered and fixed?
As a caveat: I am using nvidia-docker:
docker run --runtime=nvidia --rm reinoldus/ontoemma:latest bash /ontoemma/run_emma.sh cuda
The docker-repo is here: https://github.com/reinoldus/ontoemma
The config I am using for training is attached below.