I believe you're loading cutorch after loading cltorch? Can you try loading cutorch first, and then loading cltorch? (cutorch monkey-patches the copy functions, and doesn't realize cltorch might have monkey-patched them too.) (edit: that 'userdata' is a ClTensor, which the CudaTensor copy function doesn't recognize.)
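For example, requiring the packages in this order should avoid the clash (a sketch only; clnn is included here because the demo presumably uses the OpenCL nn modules):

```lua
-- Load cutorch before cltorch, so cltorch's patched copy functions wrap
-- cutorch's versions and ClTensor arguments are still recognised.
require 'cutorch'
require 'cltorch'
require 'clnn'     -- OpenCL nn modules used by the network layers
```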
Aah, missed that, it works now. MNIST classification on HD Graphics:
```
registering spatialconvolutionmm
Successfully loaded /opt/caffe/examples/mnist/lenet_iter_10000.caffemodel
module 'mnist' not found
conv1: 20 1 5 5
conv2: 50 20 5 5
ip1: 1 1 800 500
ip2: 1 1 500 10
Using Apple platform: Apple
Using device: HD Graphics 4000
nn.Sequential {
[input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> output]
(1): nn.SpatialConvolutionMM(1 -> 20, 5x5)
(2): nn.SpatialMaxPooling(2,2,2,2)
(3): nn.SpatialConvolutionMM(20 -> 50, 5x5)
(4): nn.SpatialMaxPooling(2,2,2,2)
(5): nn.View
(6): nn.Linear(800 -> 500)
(7): nn.ReLU
(8): nn.Linear(500 -> 10)
}
statefultimer v0.6
ConfusionMatrix:
[[ 973 0 1 0 0 0 1 2 3 0] 99.286%
[ 0 1130 2 1 0 0 1 1 0 0] 99.559%
[ 1 2 1026 0 0 0 0 2 1 0] 99.419%
[ 0 0 1 1003 0 3 0 0 3 0] 99.307%
[ 0 0 0 0 978 0 0 1 0 3] 99.593%
[ 2 0 0 8 0 879 1 0 2 0] 98.543%
[ 3 2 0 1 1 3 946 0 2 0] 98.747%
[ 0 2 6 1 0 0 0 1018 0 1] 99.027%
[ 1 0 2 1 0 1 1 0 967 1] 99.281%
[ 1 2 0 4 6 4 1 4 1 986]] 97.721%
+ average row correct: 99.048245549202%
+ average rowUcol correct (VOC measure): 98.12281191349%
+ global correct: 99.06%
```
Cool :-) By the way, we could modify the cutorch monkey-patching to allow loading in any order, but I guess I don't want to touch cutorch too much for now, in case I break something. You can see the monkey-patcher for cltorch here, though: https://github.com/hughperkins/cltorch/blob/master/init.lua#L3-L34. Basically, it saves the old functions and calls those, unless it sees an incoming ClTensor, in which case it calls the new functions.
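A minimal sketch of that save-and-dispatch pattern, not the actual cltorch code (the ClTensor-aware helper name below is invented for illustration):

```lua
-- Illustrative sketch of the monkey-patching pattern described above,
-- not the real cltorch init.lua.
local mt = torch.getmetatable('torch.FloatTensor')
local originalCopy = mt.copy          -- save whatever copy was there before

mt.copy = function(self, src)
   if torch.type(src) == 'torch.ClTensor' then
      -- hypothetical ClTensor-aware path; name made up for the sketch
      return clCopyToFloat(self, src)
   end
   -- anything else goes through the saved original implementation
   return originalCopy(self, src)
end
```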
Hmmm, hey, that's a pretty nice demo script you have there.
There were a few things I had to patch in the script and in clnn. I can create issues describing what needs to be done to support networks loaded with loadcaffe, if you want.
Yes please, sounds good :-)
I wanted to run a simple MNIST classification example from https://github.com/szagoruyko/loadcaffe/blob/master/examples/mnist_lenet.lua with cltorch. It runs fine after a few hacks, but I cannot copy the output back to a FloatTensor:
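The failing step is the device-to-host copy, roughly along these lines (a sketch of the idea only, with placeholder names `net` and `input`; not the script's exact code or the error output):

```lua
-- Run the converted network on the OpenCL device, then copy the
-- ClTensor output back into a host-side FloatTensor.
local output = net:forward(input)                -- output is a torch.ClTensor
local hostOutput = torch.FloatTensor(output:size())
hostOutput:copy(output)   -- this copy fails when cutorch is loaded after cltorch
```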