I run this command:
python app_stage1.py big --resume path/to/LGM/model_fp16.safetensors --condition_type $condition_type
and I provide one depth image with the prompt "a penguin with husky dog costume".
Then I get:
text_inputs = self.tokenizer(
    prompt,
    padding=True,
    max_length=self.tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)
Traceback (most recent call last):
File "/public/omniteam/yejr/conda_env/MVcontrol/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 717, in convert_to_tensors
tensor = as_tensor(value)
RuntimeError: Could not infer dtype of NoneType
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "", line 1, in
File "/public/omniteam/yejr/conda_env/MVcontrol/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2538, in call
encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs)
File "/public/omniteam/yejr/conda_env/MVcontrol/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2644, in _call_one
return self.encode_plus(
File "/public/omniteam/yejr/conda_env/MVcontrol/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2717, in encode_plus
return self._encode_plus(
File "/public/omniteam/yejr/conda_env/MVcontrol/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 652, in _encode_plus
return self.prepare_for_model(
File "/public/omniteam/yejr/conda_env/MVcontrol/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 3207, in prepare_for_model
batch_outputs = BatchEncoding(
File "/public/omniteam/yejr/conda_env/MVcontrol/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 210, in init
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
File "/public/omniteam/yejr/conda_env/MVcontrol/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 733, in convert_to_tensors
raise ValueError(
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (input_ids in this case) have excessive nesting (inputs type list where type int is expected).
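For what it's worth, the final RuntimeError ("Could not infer dtype of NoneType") looks like the generic error torch raises when asked to build a tensor from a list containing None, so my guess is that some token ids come back as None from the tokenizer (perhaps the tokenizer/vocab files did not load correctly). A minimal sketch that reproduces only that last RuntimeError, with made-up id values:

import torch

# Reproduces only the final RuntimeError, not the full tokenizer pipeline:
# convert_to_tensors() converts input_ids to a torch tensor, and if any id is
# None (e.g. a token the tokenizer failed to map), torch fails like this.
ids_with_none = [49406, None, 49407]  # hypothetical input_ids with a missing id
torch.tensor(ids_with_none)  # RuntimeError: Could not infer dtype of NoneType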
Could you please tell me how to solve this?