Hello, I want to use this model to predict the semantic relatedness of Chinese sentence pairs.
I didn't change the model itself; I only changed the program to accept 200-dimensional Chinese word vectors and a Chinese dependency tree. However, when I run "th relatedness/main.lua", CPU usage reaches almost 1700% on a 24-core server, and the application runs very slowly.
My machine has a GPU, so I tried to make the model run on the GPU by just adding
require('cutorch')
require('cunn')
moving the model to the GPU with :cuda(), and
moving the input and output data to the GPU with :cuda(),
but it always gives errors, and I am new to Torch. A minimal sketch of what I tried is shown below. Could you please give me any suggestions?
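Roughly, this is the pattern I followed (the model, criterion, and tensors here are just placeholders for illustration, not the actual variables in relatedness/main.lua):

```lua
require('torch')
require('nn')
require('cutorch')
require('cunn')

-- placeholder model with a 200-dim input, standing in for the real Tree-LSTM
local model = nn.Sequential()
model:add(nn.Linear(200, 50))
model:add(nn.Tanh())
model:add(nn.Linear(50, 5))

-- move the model and the criterion to the GPU
model = model:cuda()
local criterion = nn.MSECriterion():cuda()

-- move input and target tensors to the GPU as well
local input  = torch.rand(200):cuda()
local target = torch.rand(5):cuda()

local output = model:forward(input)
local loss = criterion:forward(output, target)
print('loss on GPU: ' .. loss)
```

A standalone toy model like this works for me; it is when I apply the same :cuda() calls to the model and data inside relatedness/main.lua that I get the errors.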
Thanks.