Closed · sdalvi-quic closed this 5 days ago
cc @fhossein-quic, @trahman-quic.
Hi @ramiro050, @ZihengJiang, I see that you have worked on similar issues before (https://github.com/llvm/torch-mlir/issues/2523, https://github.com/llvm/torch-mlir/issues/1151). I am running into a similar issue with the GPT-2 model. Could you please help me with some pointers?
I see that you're using the old TorchScript importer. Have you tried using the FX importer? It should functionalize your model (remove mutation) before the model gets passed to Torch-MLIR. Here is the interface for the FX importer: https://github.com/llvm/torch-mlir/blob/eb7bf78a9c1e250949cf0151628f35fb0ac06903/python/torch_mlir/fx.py#L51
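For reference, a minimal sketch of that path (assuming a recent torch-mlir where `export_and_import` accepts an `output_type` argument; on older revisions the torch-dialect output has to be lowered in a separate step):

```python
# Sketch only: the FX importer functionalizes the model (removing mutation)
# via torch.export before importing it into Torch-MLIR.
import torch
from transformers import GPT2Model, GPT2Tokenizer
from torch_mlir import fx

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("Hello, my dog is cute", return_tensors="pt")["input_ids"]

# "linalg-on-tensors" asks the importer to lower all the way to linalg IR.
module = fx.export_and_import(model, input_ids, output_type="linalg-on-tensors")
print(module)
```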
Thank you @ramiro050, I was able to use the FX importer and lower the model to linalg IR.
I am trying to lower the GPT-2 model to linalg IR, but I am running into errors. I have built torch-mlir from source and installed the latest version of transformers: `pip install git+https://github.com/huggingface/transformers`.
The test case I am running is:
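A minimal reconstruction of that kind of test case (not the exact script; note that in newer torch-mlir builds `torch_mlir.compile` lives under `torch_mlir.torchscript`):

```python
# Reconstructed sketch: trace GPT-2 from HuggingFace transformers and lower
# it through the TorchScript-based torch_mlir.compile() entry point.
import torch
import torch_mlir
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# torchscript=True makes the model return tuples instead of dicts,
# which torch.jit.trace requires.
model = GPT2Model.from_pretrained("gpt2", torchscript=True)
model.eval()

input_ids = tokenizer("Hello, my dog is cute", return_tensors="pt")["input_ids"]

# torch.jit.trace records the ops executed for this example input.
traced = torch.jit.trace(model, input_ids)

# This is the step that produces the errors reported below.
module = torch_mlir.compile(traced, input_ids, output_type="linalg-on-tensors")
```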
The error that I am facing on enabling tracing, i.e. torch.jit.trace(), is:
When I run torch-mlir-opt on the resulting IR, the error points to the line

```python
x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
```
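That line lives in the `forward` of `Conv1D` in transformers/pytorch_utils.py, which looks roughly like this (paraphrased from the transformers source; the exact code may differ between versions):

```python
import torch
import torch.nn as nn

class Conv1D(nn.Module):
    # GPT-2 style "1D convolution": effectively a linear layer with
    # transposed weights.
    def __init__(self, nf, nx):
        super().__init__()
        self.nf = nf
        self.weight = nn.Parameter(torch.empty(nx, nf))
        self.bias = nn.Parameter(torch.zeros(nf))
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, x):
        size_out = x.size()[:-1] + (self.nf,)
        # The line the error points at:
        x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
        x = x.view(size_out)
        return x
```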
The error that I am facing on enabling scripting, i.e. running torch.jit.script, is:
I tried to resolve this error, but ran into another one:
I feel that both the scripting and the tracing errors point to the same underlying issue.
How do we resolve it? Since the error points to the transformers package (Conv1D in transformers/pytorch_utils.py), the same error shows up with other LLMs as well.