mkserge opened 3 years ago
@souptc may know why?
@RyanUnderhill, it seems "./libonnxruntime_providers_shared.so" can't be resolved on this system, although the shared library has already been copied to the Python installation location. Do you have any idea what the correct way to resolve the path would be?
Btw, that message relates to some experimental features; for your usage, it should still work fine. Did you run into any issues when using onnxruntime?
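For context, a relative "./..." path handed to the dynamic loader is resolved against the process's current working directory, not the directory the package was installed to, which would explain why the copied library is still not found. A minimal sketch of the usual fix (the paths and function name here are illustrative, not ONNX Runtime's actual code):

```python
import os

def resolve_provider_lib(package_dir: str, lib_name: str) -> str:
    """Anchor the shared-library path on the package directory so the
    result does not depend on the process's working directory."""
    return os.path.join(package_dir, lib_name)

# With a bare "./libonnxruntime_providers_shared.so" the loader searches
# os.getcwd(); an absolute path built from the install location is stable.
pkg = "/opt/python/site-packages/onnxruntime/capi"  # illustrative path
print(resolve_provider_lib(pkg, "libonnxruntime_providers_shared.so"))
```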
Testing with CPUExecutionProvider does work; however, I am seeing the following warnings when converting the (torch) models to ONNX:
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
[There are many of these, trimming for brevity]
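(For context, my reading rather than anything stated in this thread: operator schemas are keyed by op type *and* domain, and Gelu, like the fused LayerNormalization at opset 11, is an ONNX Runtime contrib op in the com.microsoft domain rather than the standard ai.onnx opset, so a lookup in the default domain finds no schema and the exporter warns, while ONNX Runtime itself can still execute the op. A toy sketch of that keying, not the real ONNX registry:)

```python
# Toy (op_type, domain) registry; "" stands in for the default ai.onnx domain.
SCHEMAS = {
    ("Gelu", "com.microsoft"),
    ("LayerNormalization", "com.microsoft"),
    ("MatMul", ""),
}

def has_schema(op_type: str, domain: str = "") -> bool:
    """Mimic a schema lookup: found only if the (type, domain) pair is registered."""
    return (op_type, domain) in SCHEMAS

print(has_schema("Gelu"))                   # default domain -> triggers the warning
print(has_schema("Gelu", "com.microsoft"))  # contrib domain -> ORT can run it
```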
I am doing the conversion through
transformers.convert_graph_to_onnx.convert(
    framework="pt",
    model=args.model_in,
    output=Path(output),
    opset=11,
    pipeline_name=args.pipeline,
)
and later optimize it through
opt_options = BertOptimizationOptions('bert')
opt_options.enable_embed_layer_norm = False
opt_model = onnxruntime.transformers.optimizer.optimize_model(
    output,
    'bert',
    num_heads=16,
    hidden_size=1024,
    optimization_options=opt_options,
)
opt_model.save_model_to_file(output)
Do you know what could be the reason for the above warnings? (I just noticed that there is also an onnxruntime.transformers.convert_to_onnx.)
Will try on GPU soon.
@mkserge, would #7488 help?
I can confirm that a build from the chenta/fix_runtime_path branch resolves the issue. Thank you for your quick response!
BTW, I am also running into some missing dependencies from the compiled wheel. The coloredlogs and sympy packages are missing. I can open a separate issue if you prefer.
>>> from onnxruntime.transformers import optimizer
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/miniconda/lib/python3.8/site-packages/onnxruntime/transformers/optimizer.py", line 21, in <module>
import coloredlogs
ModuleNotFoundError: No module named 'coloredlogs'
Also, any hints regarding these warnings during the conversion?
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
Thanks again, much appreciated!
You can install the coloredlogs and sympy modules with pip. You can also edit the Dockerfile by changing the line:
RUN pip install --upgrade pip numpy &&\
to:
RUN pip install --upgrade pip numpy coloredlogs sympy &&\
I know, of course 😄
But wouldn't you expect that pip install onnxruntime would install its dependencies? Currently, the only way to discover that they are missing is to crash with an exception.
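(One way to surface this earlier, as a sketch of the general idea rather than anything ONNX Runtime actually does: probe the optional dependencies up front with the standard library and report all missing ones at once, instead of failing on the first deep import.)

```python
import importlib.util

def missing_deps(names):
    """Return the package names that cannot be imported in this environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Stdlib modules are always importable, so nothing is reported here; in the
# wheel discussed above, probing ["coloredlogs", "sympy"] would list both.
print(missing_deps(["os", "json"]))  # -> []
```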
This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
Hello,
I am building onnxruntime with TensorRT execution provider support from scratch, following the Dockerfile below.
After the build, I simply install the wheel with pip, start a Python shell, and import onnxruntime, which results in the following.
Any idea what is happening here?
Here's the missing library in the filesystem
And here's the list of installed packages
Please note that the container is started on macOS with no GPUs (although I intend to run it on GPUs later, of course).
System information
Thank you,
S