chainyo closed this issue 1 year ago.
The Colab version says 4.20.1, which was the 22 June release and should have the DeBERTaV2 config!
Are you sure about this?
Using the main GitHub branch installs the 4.21.0.dev0 version, from which the ONNX conversion works. I'm not sure what the issue is.
I'm glad it solved your problem! :fireworks:
@ChainYo would love to take up CLIP if there's no one working on it yet?
@ChainYo I'd like to take up VisualBERT if no one is working on it yet?
Hi @ChainYo, while converting the CLIP model to ONNX, I'm getting this error while it's validating the ONNX model:
```
Validating ONNX model...
Traceback (most recent call last):
  File "/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/dhruv/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py", line 107, in <module>
    main()
  File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/__main__.py", line 100, in main
    validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol)
  File "/Users/dhruv/Documents/code/transformers/src/transformers/onnx/convert.py", line 375, in validate_model_outputs
    session = InferenceSession(onnx_model.as_posix(), options, providers=["CPUExecutionProvider"])
  File "/Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 347, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/Users/dhruv/Documents/code/transformers/.venv/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 395, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for ArgMax(13) node with name 'ArgMax_3468'
```
This is supposedly solved in the original repo by https://github.com/openai/CLIP/pull/219. Does that change need to be included inside transformers as well?
> Does that change need to be included inside transformers as well?
Yes, modeling files are often updated to work with ONNX or torch.fx for instance (as long as the changes are minimal).
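For context, openai/CLIP#219 essentially casts the tensor before the argmax so that ONNX Runtime finds a kernel for the exported ArgMax node. A minimal sketch of that kind of change (the function name here is illustrative, not the exact transformers code):

```python
import torch

def eos_token_index(input_ids: torch.Tensor) -> torch.Tensor:
    # input_ids is int64 by default, and onnxruntime's CPU provider has no
    # ArgMax kernel for that dtype, hence the NOT_IMPLEMENTED error above.
    # Casting to int32 before the argmax keeps the exported node supported.
    return input_ids.to(torch.int).argmax(dim=-1)
```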
Do you want to work on this PR? If so, open it and ping the CLIP maintainer from Hugging Face. If not, just tell me and I can try to open the PR.
Sure, I"ll open the PR, happy to work on it
Added the PR here: https://github.com/huggingface/transformers/pull/18515
Added a PR for OWLViT: https://github.com/huggingface/transformers/pull/18588
Hi! Just wondering when all these new configs are going to be included. Which release? Great work, I'll try to add one or two myself.
Hey @irg1008, it's integrated continuously with each transformers release. If you are looking for a model that is not available in the latest version, you can still install the package from the main branch:

```bash
pip install git+https://github.com/huggingface/transformers.git
```
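To double-check which version actually got installed (the dev version string below is just an example):

```python
import transformers

# Should print a dev version, e.g. "4.21.0.dev0", when installed from main.
print(transformers.__version__)
```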
DonutSwin: #19401

@ChainYo Hi, I would like to work on TrOCR.

TrOCR and Donut are now supported per #19254.
@RaghavPrabhakar66 Maybe there is another model you could implement?
Sure. I can work on ImageGPT.
Can we re-open this? Please @sgugger :hugs:
@ChainYo After gaining some experience with ImageGPT, I would like to work on CANINE and DecisionTransformer (if working on more than one model is allowed).
@ChainYo would love to take up PoolFormer if there's no one working on it yet?
@RaghavPrabhakar66 Yes of course! :+1:
I don't think so, it's open! :hugs: @BakingBrains
@ChainYo I was working on CANINE and was facing some errors while running the following command:

```bash
python -m transformers.onnx onnx --model="google/canine-s"
```
CanineOnnxConfig:
```python
from collections import OrderedDict
from typing import Any, Mapping, Optional

from transformers import PreTrainedTokenizerBase, TensorType
from transformers.onnx import OnnxConfig
from transformers.onnx.utils import compute_effective_axis_dimension


class CanineOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        if self.task == "multiple-choice":
            dynamic_axis = {0: "batch", 1: "choice", 2: "sequence"}
        else:
            dynamic_axis = {0: "batch", 1: "sequence"}
        return OrderedDict(
            [
                ("input_ids", dynamic_axis),
                ("token_type_ids", dynamic_axis),
                ("attention_mask", dynamic_axis),
            ]
        )

    @property
    def default_onnx_opset(self) -> int:
        return 13

    def generate_dummy_inputs(
        self,
        preprocessor: "PreTrainedTokenizerBase",
        batch_size: int = 1,
        seq_length: int = 6,
        num_choices: int = -1,
        is_pair: bool = False,
        framework: Optional[TensorType] = None,
        tokenizer: "PreTrainedTokenizerBase" = None,
    ) -> Mapping[str, Any]:
        batch_size = compute_effective_axis_dimension(
            batch_size, fixed_dimension=OnnxConfig.default_fixed_batch, num_token_to_add=0
        )
        token_to_add = preprocessor.num_special_tokens_to_add(is_pair)
        seq_length = compute_effective_axis_dimension(
            seq_length, fixed_dimension=OnnxConfig.default_fixed_sequence, num_token_to_add=token_to_add
        )
        dummy_inputs = [" ".join(["<unk>"]) * seq_length, " ".join(["<unk>"]) * (seq_length + 3)] * batch_size
        inputs = dict(preprocessor(dummy_inputs, padding="longest", truncation=True, return_tensors=framework))
        return inputs
```
Error:
```
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None, "__main__", mod_spec)
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/luke/dev/huggingface/transformers/src/transformers/onnx/__main__.py", line 180, in <module>
    main()
  File "/home/luke/dev/huggingface/transformers/src/transformers/onnx/__main__.py", line 173, in main
    validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol)
  File "/home/luke/dev/huggingface/transformers/src/transformers/onnx/convert.py", line 417, in validate_model_outputs
    onnx_outputs = session.run(onnx_named_outputs, onnx_inputs)
  File "/home/luke/dev/huggingface/transformers/venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running Concat node. Name:'Concat_1713' Status Message: concat.cc:159 PrepareForCompute Non concat axis dimensions must match: Axis 2 has mismatched dimensions of 5 and 4
```
Hey @RaghavPrabhakar66, it comes from how you preprocess the dummy_inputs. Before returning them, print the shape of the dummy_inputs and check that they match the expected inputs you defined in the config.
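Something along these lines should make the mismatch visible (a sketch against the transformers.onnx API of that era, reusing the CanineOnnxConfig posted above):

```python
from transformers import AutoConfig, AutoTokenizer, TensorType

config = AutoConfig.from_pretrained("google/canine-s")
tokenizer = AutoTokenizer.from_pretrained("google/canine-s")
onnx_config = CanineOnnxConfig(config)  # the class defined above

dummy_inputs = onnx_config.generate_dummy_inputs(tokenizer, framework=TensorType.PYTORCH)
for name, tensor in dummy_inputs.items():
    # Each axis should line up with the dynamic axes declared in `inputs`.
    print(name, tuple(tensor.shape))
```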
Hi @ChainYo, I would like to take LED and CvT if there aren't folks working on them. :smiley:
Go for it. Feel free to open a PR (one per architecture) once you are done with your implementation!
Hi @ChainYo, I added an ONNX config for RemBERT in this PR. Please take a look; any guidance is appreciated.
The ONNX export is now part of the optimum library. For backward compatibility, we will keep what is inside Transformers for now, but we won't add any new configs. We will merge the PRs currently open once all comments have been addressed, but we won't accept new ones in the Transformers code base.

Closing this issue here; if you want to work on ONNX export, I invite you to go to the optimum repo :-)
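For reference, the equivalent export through optimum looks roughly like this (a sketch using the optimum API of that period; the checkpoint name is only an example):

```python
from optimum.onnxruntime import ORTModelForSequenceClassification

# from_transformers=True exports the PyTorch checkpoint to ONNX on the fly.
ort_model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english", from_transformers=True
)
ort_model.save_pretrained("onnx_output/")
```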
Hi, I'm working on Swin Transformer.
Hi,
Swin is already supported as can be seen here. Also, all ONNX exports are now being discussed here: https://github.com/huggingface/optimum/issues/555
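If you want to check support programmatically, something like this should work (assuming the transformers.onnx feature registry that was current at the time):

```python
from transformers.onnx.features import FeaturesManager

# Prints the tasks ("features") the ONNX exporter supports for Swin.
print(FeaturesManager.get_supported_features_for_model_type("swin"))
```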
Thanks @NielsRogge, I'm a newcomer and about to start contributing to this repo :)
@RaghavPrabhakar66 Hi there. Was there any progress on CANINE here? If not, could you summarize what particularity required a custom config? Thanks!
@ozancaglayan Hi, last time I worked on adding CANINE support, I got stuck as mentioned here. I tried to work on it this weekend and got two tasks (sequence classification and token classification) working, but I'm getting the same error on tasks like QA. I think it's better if I open a PR in the optimum repo and move the conversation there.
ISSUE TRANSFER: Optimum repository -> https://github.com/huggingface/optimum/issues/555
This issue is about the working group specially created for this task. If you are interested in helping out, take a look at this organization, or add me on Discord:
ChainYo#3610
We want to contribute to HuggingFace's ONNX implementation for all available models on HF's hub. There are already a lot of architectures implemented for converting PyTorch models to ONNX, but we need more! We need them all!
Feel free to join us in this adventure! Join the org by clicking here
Here is a non-exhaustive list of all available models:
🛠️ next to a model suggests that the PR is in progress. If there is nothing next to a model, it means that ONNX does not yet support the model, and thus we need to add support for it.
If you need help implementing an unsupported model, here is a guide from HuggingFace's documentation.
If you want an example of implementation, I did one for CamemBERT months ago.
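As a rough idea of what such a contribution involves, an ONNX config for a BERT-like encoder mostly boils down to declaring the model inputs and their dynamic axes (a sketch; MyModelOnnxConfig is a placeholder name):

```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class MyModelOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Dynamic axes let the exported graph accept any batch/sequence size.
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
            ]
        )
```

The config then gets wired into the exporter's feature registry so that `python -m transformers.onnx --model=... output/` can pick it up; the guide linked above covers those details.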