Open evercherish opened 8 years ago
Hi, I have a GTX 860M with CUDA 7.5 installed on my Windows system, but when I use the Flickr8k dataset, it takes ~5 seconds per batch. I installed Caffe with cuDNN v4 support. Did you do anything else while building Caffe?
The vocabulary size is different for the two datasets: Flickr30k's vocab_size is larger than Flickr8k's, given the same word_threshold. The vocabulary size also contributes to the total number of parameters.
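To see why a larger vocabulary slows training, here is a rough, hypothetical parameter-count sketch. The vocab sizes, hidden size, and image-feature dimension below are illustrative assumptions, not values measured from the actual datasets or from neuraltalk's code; the point is only that the embedding and softmax terms scale linearly with vocab_size.

```python
# Hypothetical parameter-count comparison for an RNN caption model.
# All sizes here are assumptions chosen for illustration.

def caption_model_params(vocab_size, hidden_size=256, image_feat=4096):
    """Approximate parameter count: word embedding + image encoder
    + recurrent weights + output softmax. The vocab-dependent terms
    (embedding, softmax) grow linearly with vocab_size."""
    embedding = vocab_size * hidden_size             # input word vectors
    image_encoder = image_feat * hidden_size         # CNN feature -> hidden
    recurrent = 4 * hidden_size * hidden_size        # LSTM-style gate weights
    softmax = hidden_size * vocab_size + vocab_size  # output layer + bias
    return embedding + image_encoder + recurrent + softmax

small = caption_model_params(3000)  # e.g. a Flickr8k-sized vocabulary
large = caption_model_params(8000)  # e.g. a Flickr30k-sized vocabulary
print(small, large, large / small)
```

Under these assumed sizes the larger vocabulary roughly doubles the parameter count, so each forward/backward pass costs more, though the observed ~9x slowdown suggests other factors (e.g. the softmax over a bigger vocabulary inside the training loop) matter too.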
I didn't do anything else...
The only difference is probably my Ubuntu 14.04 system...
From: Animesh Pandey notifications@github.com Sent: March 7, 2016, 3:52 To: karpathy/neuraltalk Cc: evercherish Subject: Re: [neuraltalk] when i train a batch(batchsize 100) of flickr8k, it takes around 0.8 seconds, (#43)
Of course, the speed of the CPU core is also a key factor, and this code does not support the GPU. Please check out neuraltalk2 instead, which is a re-implementation of this code in Torch with GPU support: https://github.com/karpathy/neuraltalk2
When I train a batch (batch size 100) of Flickr8k, it takes around 0.8 seconds. However, when I train a batch (still 100) of Flickr30k, it takes around 7 seconds. Why?