ashwin opened this issue 7 years ago
You can use this.
```python
import numpy as np
import os, time

from caffe2.proto import caffe2_pb2
from caffe2.python import workspace

workspace.ResetWorkspace()

# Run everything on GPU 0
device_opts = caffe2_pb2.DeviceOption()
device_opts.device_type = caffe2_pb2.CUDA
device_opts.cuda_gpu_id = 0

# Load the init net with the GPU device option and run it once to create the parameter blobs
init_def = caffe2_pb2.NetDef()
with open('init_net.pb', 'rb') as f:
    init_def.ParseFromString(f.read())
init_def.device_option.CopyFrom(device_opts)
workspace.RunNetOnce(init_def)

# Load the predict net with the same device option and create it in the workspace
net_def = caffe2_pb2.NetDef()
with open('predict_net.pb', 'rb') as f:
    net_def.ParseFromString(f.read())
net_def.device_option.CopyFrom(device_opts)
workspace.CreateNet(net_def, overwrite=True)

# Feed the input blob on the GPU as well
workspace.FeedBlob('data', np.random.rand(1, 3, 227, 227).astype(np.float32), device_opts)

# Time the forward pass
num_iters = 1000
start = time.time()
for i in range(num_iters):
    workspace.RunNet(net_def.name, 1)
end = time.time()
print('Run time per RunNet: {}'.format((end - start) / num_iters))
```
@ChaoHuangUestc Thanks, this works. However, the `net_def.ParseFromString()` call takes several minutes to parse the parameters file of real-world models (like GoogleNet)! :smile:

I ended up changing the Python `workspace.RunNetOnce` interface to accept a `DeviceOption` with the GPU set in it.
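For what it's worth, here is a minimal sketch of an alternative that avoids patching Caffe2: stamp the device option onto the `NetDef` itself and onto every operator in it before calling the stock `workspace.RunNetOnce` / `workspace.CreateNet`. The `set_device` helper below is just illustrative (not a Caffe2 API), and `init_def` / `net_def` are the parsed `NetDef` protos from the snippet above.

```python
from caffe2.proto import caffe2_pb2

# Illustrative helper (not part of Caffe2): copy a DeviceOption onto a NetDef
# and onto each of its operators. Per-op device options (when present) override
# the net-level one, so setting both keeps everything on the GPU.
def set_device(net_def, device_opts):
    net_def.device_option.CopyFrom(device_opts)
    for op in net_def.op:
        op.device_option.CopyFrom(device_opts)

device_opts = caffe2_pb2.DeviceOption()
device_opts.device_type = caffe2_pb2.CUDA
device_opts.cuda_gpu_id = 0

set_device(init_def, device_opts)   # before workspace.RunNetOnce(init_def)
set_device(net_def, device_opts)    # before workspace.CreateNet(net_def, overwrite=True)
```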
@ashwin Hi, I have the same problem as you! It is easy to switch to GPU mode in Caffe, but I really do not know how to make my translated Caffe2 model run on the GPU. How did you change your `workspace.RunNetOnce` to make it accept a `DeviceOption`? I am a beginner with Caffe2, and thank you for your help!
Another option is to:
```python
from caffe2.proto import caffe2_pb2
from caffe2.python import core, workspace

workspace.RunNetOnce(init_net)
# Re-feed every blob created by the init net, this time with a GPU device option
for name in workspace.C.blobs():
    blob = workspace.FetchBlob(name)
    workspace.FeedBlob(name, blob, device_option=core.DeviceOption(caffe2_pb2.CUDA, 0))
```
This way all the blobs get copied over to the GPU, and you can then run the predict net there without problems.
If I use `workspace.Predictor`, does it use the CPU by default? Because I did something like this:
```python
with open(INIT_NET) as f:
    init_net = f.read()
with open(PREDICT_NET) as f:
    predict_net = f.read()

workspace.RunNetOnce(init_net)
for name in workspace.C.blobs():
    blob = workspace.FetchBlob(name)
    workspace.FeedBlob(name, blob, device_option=core.DeviceOption(caffe2_pb2.CUDA, 0))

workspace.CreateNet(predict_net, overwrite=True)
p = workspace.Predictor(init_net, predict_net)
```

and I ran

```python
out = p.run([img])
```
But it seems it still only uses the CPU to run... @willyd
You can also move a particular net to the GPU after loading it:

```python
train_net.RunAllOnGPU()
```
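For completeness, a minimal sketch of that route, assuming the translated files are named `init_net.pb` / `predict_net.pb` and that the input blob is called `data` (both of these depend on your model, so adjust as needed):

```python
import numpy as np
from caffe2.proto import caffe2_pb2
from caffe2.python import core, workspace

def load_net(path):
    # Local helper for this example: read a serialized NetDef and wrap it in core.Net
    net_def = caffe2_pb2.NetDef()
    with open(path, 'rb') as f:
        net_def.ParseFromString(f.read())
    return core.Net(net_def)

init_net = load_net('init_net.pb')
predict_net = load_net('predict_net.pb')

# Set the CUDA device option on both nets before running/creating them,
# so the parameter blobs end up in GPU context as well
init_net.RunAllOnGPU(gpu_id=0)
predict_net.RunAllOnGPU(gpu_id=0)

workspace.RunNetOnce(init_net)
workspace.CreateNet(predict_net, overwrite=True)

# Feed the input on the GPU too (shape here is just the 227x227 example from above)
device_opts = core.DeviceOption(caffe2_pb2.CUDA, 0)
workspace.FeedBlob('data', np.random.rand(1, 3, 227, 227).astype(np.float32), device_opts)
workspace.RunNet(predict_net.Proto().name)
```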
I wish there was better documentation so I didn't have to chase down the code all the time.
And even when chasing down the code, a lot of Python indirection magic is used, making it hard to know where to actually find the particular name you're looking for.
I was able to convert a Caffe model to Caffe2 using `caffe_translator.py`. The model can be loaded and tested using the Python interface of Caffe2. By default, this is all done on the CPU and it works.

To test on the GPU I set `device_option.device_type = caffe2_pb2.CUDA` and set the `device_option.cuda_gpu_id` too. I pass this device_option to `workspace.RunNetOnce` (for the init_net) and set it before `workspace.CreateNet` (for the predict_net). When I run this model using `workspace.RunNet` I get this error:

It looks like the blobs of the model are still in CPU context even though I set the device_option of the nets. How do I change them to GPU context from Python?