In the "Using Torch-TensorRT in C++" page, under the "Compiling with Torch-TensorRT in C++" section, the example uses syntax from an older version of the library:
Example

```cpp
#include "torch/script.h"
#include "torch_tensorrt/torch_tensorrt.h"
...
mod.to(at::kCUDA);
mod.eval();
auto in = torch::randn({1, 1, 32, 32}, {torch::kCUDA});
auto trt_mod = torch_tensorrt::CompileGraph(mod, std::vector<torch_tensorrt::CompileSpec::InputRange>{{in.sizes()}});
auto out = trt_mod.forward({in});
```
`torch_tensorrt` doesn't have a `CompileGraph` method, and `torch_tensorrt::CompileSpec` (now moved to `torch_tensorrt::ts::CompileSpec`) doesn't have an `InputRange` class.
The same result should now be achievable with something like:
```cpp
#include "torch/script.h"
#include "torch_tensorrt/torch_tensorrt.h"
...
mod.to(at::kCUDA);
mod.eval();
auto in = torch::randn({1, 1, 32, 32}, {torch::kCUDA});
// Describe the input and build the CompileSpec directly from it
std::vector<torch_tensorrt::Input> inputs;
inputs.push_back(torch_tensorrt::Input(in));
torch_tensorrt::ts::CompileSpec spec(inputs);
auto trt_mod = torch_tensorrt::ts::compile(mod, spec);
auto out = trt_mod.forward({in});
```
Note, however, that the syntax should also be updated in the subsequent code snippets on the same page.