-
## 🐛 Bug
Hello team,
Thanks for creating such an amazing engine. I ran Llama-3-8B-Instruct-q4f16_1-MLC in server mode with different batch sizes (2-128), but I still see my requests being run …
-
### Actual behavior
```
Traceback (most recent call last):
  File "/share_container/optfuzz/res/bugs/simple/res_undefined.py", line 49, in <module>
    compiled_after = compile_mod(relax.transform.LiftT…
```
-
```
$ make cpptest
[ 2%] Built target p…
```
-
### Actual behavior
```
Traceback (most recent call last):
  File "/share_container/optfuzz/res/bugs/simple/bug_add_loop.py", line 51, in <module>
    mod = tvm.tir.transform.DefaultGPUSchedule()(mod)
…
```
-
I want to use the TVM backend to compare the runtime of TASO's models, using code like the following:
```
def evaluate_runtime(onnx_model):
    mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)
    lib = relay.b…
-
## 🐛 Bug
I hope to build an Android app on an SP 8+ phone, and then to use it further on the SP 8155 chip and other chips used in cars. Here are my steps:
1. complete the environment described …
-
When I set the value of use_tvm from False to True on line 270 of Single_step_main.py, line 130 of Hierarchical_mm_tpm.py raises the error 'No module named 'tvm''. When I tried to install the tvm pack…
-
Hi,
I'm trying to trace the vision encoder of Meta's Segment Anything Model (SAM). I'm encountering several errors during the trace process, and now it seems to be stuck.
The script does…
-
Make sure FP16 is enabled in TVM and check its perf gain for Stable Diffusion (https://mlc.ai/web-stable-diffusion/).
-
Ensure int8 support in TVM and check its perf gain on Stable Diffusion (https://github.com/webatintel/tvm-web/issues/2) or other models.