**Open** · Hipedyne19012 opened this issue 4 years ago
Hi Hipedyne19012,
Thanks for reaching out! You can serialize the TensorRT engine using the TensorRT Python API as follows:
```python
with open('model.engine', 'wb') as f:
    f.write(model.engine.serialize())
```
The raw engine takes a single image as input and produces two outputs: the first is the part confidence map, the second is the part affinity field.
You would need to post-process these outputs to obtain the pose estimates. The code to do so is in this folder. Currently, the post-processing code still relies on the PyTorch C++ extension API for its Python bindings.
Please let me know if you have any questions.
Best, John
I'm also currently working on this. I get the output buffers from the engine and convert them to a torch tensor:

```cpp
float *outcmapfloat = (float *)outputLayersInfo[cmapIndex].buffer;
torch::set_default_dtype(torch::scalarTypeToTypeMeta(torch::kInt16));
// Note: this dereferences only the first float, so it builds a
// single-element tensor rather than wrapping the whole buffer.
torch::Tensor outcmap = torch::tensor(*outcmapfloat);
```
When I call `find_peaks_torch` and print the input sizes, I only get `[1]`. That means the tensor has a single dimension, so the following fails when initializing `C`. All the data presumably comes out of the engine as a flat one-dimensional buffer — how do I parse it into an NxCxHxW shaped tensor?
```cpp
std::cout << input.sizes() << std::endl;
const int N = input.size(0);
const int C = input.size(1);
```
I already got rid of the Python bindings and it compiles, but I feel I'm doing something wrong by taking the buffer and converting it to a torch tensor this way.
Has anyone had any luck with this?
OK, I think I found it: it is possible to get the dimensions from the inference engine (from the `outputLayersInfo`) and construct a tensor with a pointer to the buffer.
```cpp
float *outpaffloat = (float *)outputLayersInfo[pafIndex].buffer;
torch::Tensor outpaf = torch::from_blob(outpaffloat, {N, C, H, W});
```
I want to run the PyTorch model using DeepStream, but DeepStream doesn't support PyTorch. I want to try running the TensorRT .engine or .plan of this pose model, but I'm not sure how to save the TensorRT model from the ipynb in .engine or .plan format.