Closed: BlackSamorez closed this issue 2 weeks ago.
I added `compile_config=EdgeCompileConfig(_check_ir_validity=False)` to `to_edge` and it appears to be exporting now. Linking `libaqlm.dylib` to `executor_runner` (and replacing `executorch` with `executorch_no_prim_ops` in its libs), I'm able to compile it.
However, running it, I'm encountering an error that goes like this:

```
E 00:00:00.001621 executorch:method.cpp:536] Missing operator: [0] aqlm::code2x8_lut_matmat.out
E 00:00:00.001623 executorch:method.cpp:724] There are 1 instructions don't have corresponding operator registered. See logs for details
```
I'm on executorch v0.3.0.

@larryliu0820 any suggestions?
@digantdesai Hi! Thanks for the reply. I think we shifted the discussion to #4719. In light of that, I'm closing this issue.
Hi!
I'm trying to integrate some quantized MatMul C++ kernels into Executorch and I'm having a bad time: the documentation is very vague about what exactly I need to include/link for ATen to pick up my ops.
I would greatly appreciate any help in trying to make it work.
Overview:

Source code for the dynamic library containing the ops consists of 3 files: `lut_kernel.h`, `lut_kernel.cpp`, `lut_kernel_pytorch.cpp`. The files contain roughly this code, which closely follows the executorch custom sdpa code.
I build it as two standalone dynamic libs: one from `lut_kernel.cpp` with a dependency only on `executorch`, and one from `lut_kernel_pytorch.cpp` with an additional `torch` dependency. I load the latter lib into pytorch as `torch.ops.load_library(f"../libaqlm_bindings.dylib")`.

The problem:
I wrote a small `nn.Module` that basically just calls the op. In pytorch it works well, and `torch.export` produces an `aten_dialect` containing the op. But when calling `to_edge` I get an error saying that `Operator torch._ops.aqlm.code2x8_lut_matmat.default is not Aten Canonical`.

I don't conceptually understand how the `EXECUTORCH_LIBRARY` macro from `lut_kernel.cpp` is supposed to make it Aten Canonical. Should I somehow recompile executorch to include my kernel?

Thank you!