johnsGuo closed this issue 2 years ago
Any update? I have the same issue with multiple inputs organized as a dict; the error is:
tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute inference_pruned_11313 as input #0(zero-based) was expected to be a int64 tensor but is a string tensor [Op:inference_pruned_11313]
My inputs are:
inputs = {"10001": np.array(BATCH_SIZE).astype(np.int64), "10002": np.array([1]).astype(np.int64), ...}
"10001" and "10002" are the input names defined in the SavedModel. There isn't any string tensor at all, so it's very confusing.
@DEKHTIARJonathan Looking forward to your help, greatly appreciated.
@MuYu-zhi what is likely happening is that the "string" tensors are your dictionary keys ;) Iterating over a dict yields its keys, not its values:
data = {
    "input_A": 1,
    "input_B": 2,
    "input_C": 3,
}

for x in data:
    print(f"x = `{x}`")

>>> x = `input_A`
>>> x = `input_B`
>>> x = `input_C`
https://www.online-python.com/SgNJL0pjbv
We use the following for our benchmarks:
def engine_build_input_fn(num_batches, model_phase):
    dataset, _ = get_dataset()  # you need to implement a dataloader in `get_dataset`

    for idx, data_batch in enumerate(dataset):
        x, y = data_batch

        if not isinstance(x, (tuple, list, dict)):
            x = [x]

        yield x

        if (idx + 1) >= num_batches:
            break
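For context, a minimal sketch of how such a generator can be passed to the TF-TRT converter; this assumes TF 2.x, and the SavedModel paths, num_batches=8, and model_phase="eval" are placeholders, not values from the benchmark code above:

import functools

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Load and convert the SavedModel (path is a placeholder).
converter = trt.TrtGraphConverterV2(input_saved_model_dir="/path/to/saved_model")
converter.convert()

# build() expects a callable returning an iterable of input batches,
# so bind the generator's arguments with functools.partial.
converter.build(input_fn=functools.partial(engine_build_input_fn,
                                           num_batches=8, model_phase="eval"))

converter.save("/path/to/trt_saved_model")  # placeholder output path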
@GuoGuiRong please use a more recent container than nvcr.io/nvidia/tensorflow:19.12-tf2-py3; it's really outdated.
Closing because the issue is very old
@DEKHTIARJonathan Hi, thanks for your reply. But I still have no idea how to write the input_fn when the inputs are in key-value (dict) format. I have tried:
req = {"dense_input": make_tensor_proto(dense_input, shape=(dense_input.shape)),
"sparse_ids_input": make_tensor_proto(sparse_ids_input, shape=(sparse_ids_input.shape)),
"sparse_wgt_input": make_tensor_proto(sparse_wgt_input, shape=(sparse_wgt_input.shape)), }
raise the same error as my first try.
I also found the following demo in the TF-TRT documentation:
def my_input_fn():
    inp1 = np.random.normal(size=(8, 16, 16, 3)).astype(np.float32)
    inp2 = np.random.normal(size=(8, 16, 16, 3)).astype(np.float32)
    yield (inp1, inp2)
but I have multiple inputs, and in my tests the input order after TRT conversion is not deterministic. Any suggestions?
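One way to avoid depending on the positional input order is to yield a dict keyed by the SavedModel's input names. This is only a minimal sketch: it assumes a recent TF 2.x release in which converter.build() forwards a yielded dict to the signature as keyword arguments (older versions may not, which could explain the string-tensor error above), and the input names, shapes, and dtypes are placeholders taken from the snippets in this thread, not from the actual model:

import numpy as np

def my_input_fn():
    # Yield dicts keyed by the signature's input names so the positional
    # order after conversion does not matter. Names/shapes/dtypes below are
    # placeholders; check yours with:
    #   saved_model_cli show --dir /path/to/saved_model --all
    for _ in range(8):  # number of build batches (arbitrary here)
        yield {
            "dense_input": np.random.normal(size=(1, 64)).astype(np.float32),
            "sparse_ids_input": np.zeros((1, 32), dtype=np.int64),
            "sparse_wgt_input": np.ones((1, 32), dtype=np.float32),
        }

converter.build(input_fn=my_input_fn)  # converter as in the sketch above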
I use nvcr.io/nvidia/tensorflow:19.12-tf2-py3 in Docker. My model is
I want to know how to write my input_fn when I convert it to TRT; my code is below but it does not work: