I have tried exporting the ONNX file of FB-OCC, but I face the following error during tracing at the custom op QuickCumsumCuda, specifically when calling torch.onnx.export, while plain feed-forward inference of the model does not have any issue:
File "/FB-BEV/mmdet3d/ops/bev_pool_v2/bev_pool.py", line 102, in forward_dummy
x = QuickCumsumCuda.apply(depth, feat, ranks_depth, ranks_feat, ranks_bev, bev_feat_shape, interval_starts, interval_lengths)
RuntimeError: _Map_base::at
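For context on the error itself: `_Map_base::at` is the raw libstdc++ out-of-range error from `std::unordered_map::at`, leaking out of the C++ side of the exporter, so as far as I can tell the tracer is looking up something that was never registered for this op. A custom `torch.autograd.Function` without an ONNX `symbolic` is the classic trigger. Below is a minimal stand-in (my own toy op, not the real kernel) that shows the same class of failure, though the exact error text varies across PyTorch versions:

```python
import torch


class CustomOpNoSymbolic(torch.autograd.Function):
    # Stand-in for QuickCumsumCuda: no symbolic() is defined, so the
    # tracer only sees an opaque PythonOp with no ONNX mapping.
    @staticmethod
    def forward(ctx, x):
        return torch.cumsum(x, dim=0)  # placeholder for the CUDA kernel call


class Wrapper(torch.nn.Module):
    def forward(self, x):
        return CustomOpNoSymbolic.apply(x)


x = torch.randn(4)
print(Wrapper()(x))  # eager inference works fine
# torch.onnx.export(Wrapper(), (x,), 'no_symbolic.onnx')  # export fails here
```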
This is how the error can be reproduced:
I have isolated the custom op QuickCumsumCuda into a separate class, as shown in the following, for ease of reproducibility:
```python
import torch
from mmdet3d.ops.bev_pool_v2.bev_pool import QuickCumsumCuda


class Bev_Pool_v2(torch.nn.Module):

    def __init__(self):
        super(Bev_Pool_v2, self).__init__()

    def forward(self, depth, feat, ranks_depth, ranks_feat, ranks_bev, bev_feat_shape, interval_starts, interval_lengths):
        x = QuickCumsumCuda.apply(depth, feat, ranks_depth, ranks_feat, ranks_bev, bev_feat_shape, interval_starts, interval_lengths)
        x = x.permute(0, 4, 1, 2, 3).contiguous()
        return x

    def forward_dummy(self, data):
        # Same op, but taking a single iterable so it can be bound as the model's forward for export.
        depth, feat, ranks_depth, ranks_feat, ranks_bev, bev_feat_shape, interval_starts, interval_lengths = data
        x = QuickCumsumCuda.apply(depth, feat, ranks_depth, ranks_feat, ranks_bev, bev_feat_shape, interval_starts, interval_lengths)
        x = x.permute(0, 4, 1, 2, 3).contiguous()
        return x
```
I generate random inputs and feed them forward, which does not yield any issue during model inference.
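For reference, this is roughly how I build them. The sizes below are toy placeholders I picked for the sketch (the real FB-OCC tensors are much larger), and I keep the rank/interval tensors mutually consistent, following the bev_pool_v2 convention of int32 flat indices, so the CUDA kernel does not read out of bounds:

```python
# Toy sizes (hypothetical): B batches, N cameras, D depth bins, fH x fW feature map, C channels.
B, N, D, fH, fW, C = 1, 1, 2, 2, 2, 4
depth = torch.rand(B, N, D, fH, fW).cuda()
feat = torch.rand(B, N, fH, fW, C).cuda()
# Flat indices into depth / feat / the BEV grid; ranks_bev is sorted so
# that interval_starts / interval_lengths describe contiguous runs.
ranks_depth = torch.tensor([0, 1, 2, 3], dtype=torch.int32).cuda()
ranks_feat = torch.tensor([0, 0, 1, 1], dtype=torch.int32).cuda()
ranks_bev = torch.tensor([0, 0, 1, 1], dtype=torch.int32).cuda()
bev_feat_shape = (B, 1, 2, 2, C)  # (B, Z, Y, X, C)
interval_starts = torch.tensor([0, 2], dtype=torch.int32).cuda()
interval_lengths = torch.tensor([2, 2], dtype=torch.int32).cuda()
```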
Define the model and the input:
```python
model = Bev_Pool_v2().eval().cuda()
model.forward = model.forward_dummy
input_ = [depth, feat, ranks_depth, ranks_feat, ranks_bev, bev_feat_shape, interval_starts, interval_lengths]
```
Feed the input to the model and export:
```python
model(input_)
print('feed-forward inference is done without errors.')

with torch.no_grad():
    torch.onnx.export(
        model,
        input_,
        'bev_pool_v2_USE.onnx',
        export_params=True,
    )
```
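In case it helps others hitting this: the usual way to get past an opaque CUDA op during export is to wrap it in a `torch.autograd.Function` that defines a `symbolic`, so the tracer emits a custom ONNX node instead of failing the lookup. Here is a minimal sketch of that pattern; the wrapper name and the `custom::bev_pool_v2` domain/op name are my own placeholders, and whatever consumes the ONNX file (e.g. a TensorRT plugin) still has to implement the node:

```python
import torch
from mmdet3d.ops.bev_pool_v2.bev_pool import QuickCumsumCuda


class QuickCumsumExportable(torch.autograd.Function):
    # Hypothetical wrapper: same computation as QuickCumsumCuda, plus a
    # symbolic() so torch.onnx.export can trace through it.

    @staticmethod
    def forward(ctx, depth, feat, ranks_depth, ranks_feat, ranks_bev, bev_feat_shape, interval_starts, interval_lengths):
        return QuickCumsumCuda.apply(depth, feat, ranks_depth, ranks_feat, ranks_bev, bev_feat_shape, interval_starts, interval_lengths)

    @staticmethod
    def symbolic(g, depth, feat, ranks_depth, ranks_feat, ranks_bev, bev_feat_shape, interval_starts, interval_lengths):
        # Emit an opaque custom node; the non-tensor bev_feat_shape is
        # attached as an integer-list attribute (the "_i" suffix).
        return g.op('custom::bev_pool_v2', depth, feat, ranks_depth, ranks_feat, ranks_bev, interval_starts, interval_lengths, bev_feat_shape_i=list(bev_feat_shape))
```

Swapping QuickCumsumExportable.apply in place of QuickCumsumCuda.apply in forward_dummy should then let tracing proceed past this op, assuming the missing symbolic is indeed what triggers the _Map_base::at lookup failure.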