facebookarchive / caffe2

Caffe2 is a lightweight, modular, and scalable deep learning framework.
https://caffe2.ai
Apache License 2.0

Squeezenet Tutorial: Predictor outputs initialization information and exits #1352

Open mattm401 opened 7 years ago

mattm401 commented 7 years ago

Similar to #799, I am trying to run through the Loading Pre-Trained Models Tutorial with GPU/CUDA/Windows10: https://caffe2.ai/docs/tutorial-loading-pre-trained-models.html

The following code executes:

```python
device_opts = core.DeviceOption(caffe2_pb2.CUDA, 0)
workspace.FeedBlob('data', img, device_option=device_opts)

init_def = caffe2_pb2.NetDef()
with open(INIT_NET, 'rb') as f:
    init_def.ParseFromString(f.read())
init_def.device_option.CopyFrom(device_opts)
workspace.RunNetOnce(init_def.SerializeToString())

net_def = caffe2_pb2.NetDef()
with open(PREDICT_NET, 'rb') as f:
    net_def.ParseFromString(f.read())
net_def.device_option.CopyFrom(device_opts)
workspace.CreateNet(net_def.SerializeToString())

print 'Running net...'
p = workspace.Predictor(init_def, net_def)
```

However, the system outputs a bunch of data about the model/predictor and then immediately exits without running the rest of the code or providing any additional information as to why the system has exited. Has anyone seen this behavior before?

[screenshot: model/predictor initialization output before the process exits]
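One way that might surface the reason for the silent exit is to raise Caffe2's logging verbosity before doing anything else. This is a sketch, not a confirmed fix; the `--caffe2_log_level` flag and `GlobalInit` call are my assumptions about the right knobs here:

```python
from caffe2.python import workspace

# Assumption: lowering the log level to 0 makes Caffe2 print INFO-and-above
# messages, so a fatal error should appear before the process exits.
workspace.GlobalInit(['caffe2', '--caffe2_log_level=0'])
```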

mattm401 commented 7 years ago

Tried running with `core.DeviceScope(device)` set to CPU and it produced the same output, so this is potentially unrelated to GPU.

mattm401 commented 7 years ago

Okay, I was able to capture the error:

```
ERROR main Incompatible constructor arguments. The following argument types are supported:

  1. caffe2.python.caffe2_pybind11_state_gpu.Predictor(str, str)

Invoked with: name: "squeezenet_init" op { output: "conv1_w" name: "" type: "GivenTensorFill" arg { name: "shape" ints: 64 ints: 3 ints: 3 ints: 3 } arg { name: "values" floats: 0.257234632969
```
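The error names the accepted overload: the pybind11 `Predictor` binding takes two serialized protobuf strings, not `NetDef` objects. A minimal sketch of the likely fix, reusing the variable names from the snippet above:

```python
# Pass serialized strings rather than NetDef messages, matching the
# Predictor(str, str) overload reported in the error.
p = workspace.Predictor(init_def.SerializeToString(),
                        net_def.SerializeToString())
```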
mattm401 commented 7 years ago

Running in CPU mode, the output is still the same, but I cannot capture the error...

mattm401 commented 7 years ago

Worked this out in CPU mode: I scrapped the .pb files I had and re-pulled them from the GitHub models repo (updated init to exec in your code). However, I get an error when switching to GPU/CUDA:

```
blob->template IsType<TensorCPU>(). Blob is not a CPU Tensor: data
```

Does this suggest that Predictor only works on the CPU?
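For what it's worth, the error reads as if the `data` blob was fed with a CUDA device option while the Predictor path expects a CPU tensor. A minimal, untested sketch of a workaround, assuming the `img` and `p` names from the code earlier in this thread:

```python
# Assumption: feeding the blob without a device option leaves it as a
# CPU tensor, which is what the Predictor path appears to require.
workspace.FeedBlob('data', img)

# Then run the predictor as in the tutorial:
results = p.run([img])
```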

CriCL commented 6 years ago

Hi @mattm401, were you able to solve the "Blob is not a CPU tensor: data" issue?

I'm facing the same situation.

mattm401 commented 6 years ago

I haven't had a chance to look into this particular issue. I ended up getting my application running in CPU mode, and speed hasn't been a problem for my use case.