notsyncing opened this issue 9 months ago
Looks like an issue with torch.compile itself. @cavusmustafa, could you help here?
One of the torch dynamo partitions seems to be failing while handling the symbolic inputs. This needs more debugging before a proper fix can be made.
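For context, "symbolic inputs" here refers to dynamo tracing with dynamic shapes: when an input dimension (e.g. sequence length) varies between calls, dynamo represents it symbolically and hands the symbolic graph to the backend. A minimal sketch of that path is below; the `"openvino"` backend string is an assumption from OpenVINO's torch.compile integration, and the code falls back to the always-available `"eager"` backend so the sketch still runs without it.

```python
# Hypothetical minimal sketch (not the reporter's model): compile a tiny
# module with dynamic=True so dynamo treats the sequence dimension as a
# symbolic size, which is the code path described as failing.
import torch


class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 16)

    def forward(self, x):
        return self.linear(x).relu()


model = TinyModel()
try:
    # "openvino" is registered by the openvino package's torch.compile
    # integration; this raises if that integration is not installed.
    compiled = torch.compile(model, backend="openvino", dynamic=True)
    compiled(torch.randn(1, 4, 16))  # warm-up trace
except Exception:
    # Fall back to the built-in eager backend so the sketch is runnable.
    compiled = torch.compile(model, backend="eager", dynamic=True)

# Varying the sequence dimension exercises the symbolic-shape handling.
for seq_len in (4, 8, 32):
    out = compiled(torch.randn(1, seq_len, 16))
    assert out.shape == (1, seq_len, 16)
```

With `dynamic=True`, dynamo avoids recompiling for each new sequence length by keeping that dimension symbolic, which is where the backend partition reportedly breaks.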
@cavusmustafa any updates here?
We are planning to enable new LLM features with the next release. As part of the updates, we are working on a fix for this issue as well.
@cavusmustafa are there any updates on this?
I am facing the same issue when trying to compile tinyllama-1.1b-step-50k-105b with the openvino backend.
Ref. 132028
@anzr299 Sorry for the delay. Could you share the full script to reproduce the issue?
OpenVINO Version: 2023.3
Operating System: Fedora Silverblue 39
Device used for inference: GPU
Framework: PyTorch
Model used: llava-hf/llava-1.5-7b-hf
Issue description
Hello, I'm trying to use OpenVINO with torch.compile to run inference of a LLaVA model with the following code, and it prints the following error:
software versions:
hardware versions:
Step-by-step reproduction
No response
Relevant log output
No response
Issue submission checklist