Open omerwer opened 1 year ago
The input tensors do not allocate memory and do not copy data (both would be expensive). They simply read the buffers that you supplied, so after inference is done you can deallocate your own buffers; they are only needed during inference.
The actual memory needed by the internal C OrtValue objects is minuscule.
Ort::Value objects automatically release any resources that they own and that's their purpose.
The output tensors are allocated and ref-counted. You can reduce the refcount by destroying the output Ort::Value objects in either of the following ways:
- The Run() overload that returns std::vector<Ort::Value>: call the vector's clear() method, which runs the destructors of all contained objects. The same applies to your input tensors.
- Assign a null value over an existing one:
Ort::Value ort_value; // existing value
ort_value = Ort::Value(nullptr);
However, internally the system would still hold on to those values allocated by default.
You can pre-allocate output Ort::Values, hold them in a vector, and pass them to one of the Run() overloads. Once they are no longer needed, clear the vector.
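The lifetime pattern described above is ordinary C++ RAII and does not depend on anything ONNX Runtime-specific. A minimal sketch using a hypothetical move-only `Handle` type as a stand-in for `Ort::Value` (the `Handle` type and its `live_handles` counter are illustrative, not part of the ORT API):

```cpp
#include <cstddef>
#include <vector>

// Stand-in for a move-only RAII wrapper such as Ort::Value: it releases its
// resource in the destructor, and assigning a "null" instance over it
// releases the old resource early.
static int live_handles = 0;

struct Handle {
    bool owns;
    Handle() : owns(true) { ++live_handles; }
    explicit Handle(std::nullptr_t) : owns(false) {}
    Handle(const Handle&) = delete;            // copying is deleted, like Ort::Value
    Handle& operator=(const Handle&) = delete;
    Handle(Handle&& o) noexcept : owns(o.owns) { o.owns = false; }
    Handle& operator=(Handle&& o) noexcept {
        if (owns) --live_handles;              // release what we currently hold
        owns = o.owns;
        o.owns = false;
        return *this;
    }
    ~Handle() { if (owns) --live_handles; }
};

int demo() {
    std::vector<Handle> outputs;
    outputs.emplace_back();                    // like values returned from Run()
    outputs.emplace_back();
    outputs.clear();                           // destructors run, resources released

    Handle v;                                  // existing value
    v = Handle(nullptr);                       // early release via null assignment
    return live_handles;                       // 0 when everything was freed
}
```

The same two moves apply to real `Ort::Value` objects: clearing the vector returned by `Run()` destroys the outputs, and assigning `Ort::Value(nullptr)` releases an individual value early.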
Hello @yuslepukhin, I ran into a problem when using Ort::Value with a multi-input model. The code is simple:
for (int i = 0; i < inputNames.size(); ++i) {
  // override inputShapes batch dim with inputs
  inputShapes[i][0] = inputs[i].shape[0];
  auto input_tensor = Ort::Value::CreateTensor<float>(
      memoryInfo, static_cast<float *>(inputs[i].data), inputs[i].get_size(),
      inputShapes[i].data(), inputShapes[i].size());
  assert(input_tensor.IsTensor());
  ort_inputs.emplace_back(input_tensor);
}
I get a compile-time error when building on Windows with MSVC 2019:
C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include\xmemory(681,47): error C2280: "Ort::Value::Value(const Ort::Value &)": attempting to reference a deleted function
any idea?
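For reference, error C2280 here is the classic symptom of copying a move-only type: `Ort::Value` has a deleted copy constructor, and `emplace_back(input_tensor)` passes an lvalue, which selects the deleted copy. A minimal sketch of the pattern with a hypothetical `MoveOnly` type mirroring `Ort::Value`'s copy semantics (`MoveOnly` and `fill` are illustrative; with the real API the fix would be `ort_inputs.emplace_back(std::move(input_tensor))`):

```cpp
#include <utility>
#include <vector>

// Hypothetical move-only type with the same deleted copy ctor as Ort::Value.
struct MoveOnly {
    int id;
    explicit MoveOnly(int i) : id(i) {}
    MoveOnly(const MoveOnly&) = delete;
    MoveOnly& operator=(const MoveOnly&) = delete;
    MoveOnly(MoveOnly&&) noexcept = default;
    MoveOnly& operator=(MoveOnly&&) noexcept = default;
};

int fill(int n) {
    std::vector<MoveOnly> values;
    for (int i = 0; i < n; ++i) {
        MoveOnly v(i);
        // values.emplace_back(v);           // error C2280: copy ctor is deleted
        values.emplace_back(std::move(v));   // OK: moves instead of copying
    }
    return static_cast<int>(values.size());
}
```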
Hi,
I'm running an application on a relatively weak machine, and therefore I need to free allocated data right after I use it.
I'm creating Ort::Value as input tensors with:
input_tensors.push_back(Ort::Value::CreateTensor<float>(memory_info, curr_data.data(), curr_data.size(), shape.data(), shape.size()));
But I can't find any function in the ORT API to free the allocated data.
Is there something in the API that frees the data allocated by Ort::Value::CreateTensor<>? And if not, how do I free that data?
Thanks