-
### Actual behavior
```
Traceback (most recent call last):
File "/share_container/optfuzz/res/res_ut/res_executions/30_test.py", line 50, in
ex = relax.build(mod, target='llvm')
…
```
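For context, a minimal, self-contained call of this shape is sketched below (assumptions: a recent TVM build with the Relax frontend; the tiny module and the function name `main` are hypothetical and are not taken from the failing script, only the `relax.build(..., target='llvm')` call mirrors the report).

```python
# Hedged sketch: a minimal Relax module compiled the same way as in the
# traceback above. The module is illustrative only.
import tvm
from tvm import relax
from tvm.script import ir as I, relax as R

@I.ir_module
class Module:
    @R.function
    def main(x: R.Tensor((4,), "float32")) -> R.Tensor((4,), "float32"):
        return R.add(x, x)

ex = relax.build(Module, target="llvm")      # same call shape as in the report
vm = relax.VirtualMachine(ex, tvm.cpu())     # run the compiled executable
```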
-
## 🐛 Bug
## To Reproduce
Steps to reproduce the behavior:
1. Install the prebuilt Python package for Windows following the [guideline](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages) …
-
Hello, when running conv2d_depth_profile.py, perlayer_all_conv_case.csv is required. Where is the ./data_results/perlayer_all_conv_case.csv file generated?
-
The earlier Docker implementation downloaded and installed TVM with the CUDA flags turned on:
```
# install tvm
RUN git clone --recursive https://github.com/apache/incubator-tvm tvm && \
cd tvm && \
git r…
```
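A quick way to confirm that the resulting build actually has CUDA enabled is to query the compiled library's build options from Python. This is a sketch; it assumes the container has the freshly built TVM package on its PYTHONPATH:

```python
# Hedged check: inspect the flags TVM was compiled with and whether a CUDA
# device is actually reachable at runtime.
import tvm
import tvm.support

print(tvm.support.libinfo().get("USE_CUDA"))  # expected "ON" for a CUDA-enabled build
print(tvm.cuda(0).exist)                      # True if a GPU is visible to TVM
```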
-
The default interrupt handling priority is described in Section 6.4. Per the Note below it, the RDSM will clear the bit corresponding to the interrupt controller selected by msdcfg.SDICN in msdeie prior t…
-
## 🐛 Bug
I am using the jetson-containers image of MLC with the Meta-Llama-3-8B-Instruct model. After I run
```
python3 -m mlc_llm.build \
--model Meta-Llama-3-8B-Instruct-hf \
--quantization q4f16_…
```
-
### Describe the issue
I am trying to build the TVM Execution Provider following this document:
https://onnxruntime.ai/docs/execution-providers/community-maintained/TVM-ExecutionProvider.html
however, a…
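For reference, once the build succeeds the provider is selected like any other ONNX Runtime execution provider. The sketch below is hedged: the model path is hypothetical, and provider options vary by version as described in the linked document.

```python
# Hedged sketch of selecting the TVM execution provider from Python, assuming
# an onnxruntime wheel built with the TVM EP enabled per the linked guide.
import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",                          # hypothetical model path
    providers=["TvmExecutionProvider"],    # requires an ORT build with the TVM EP
)
print(sess.get_providers())
```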
-
1
-
### Actual behavior
```
Traceback (most recent call last):
File "/share_container/optfuzz/res/bugs/reduced/complete/328_test.py", line 162, in
ex = relax.build(mod, target='llvm')
…
```
-
While loading the model, I get
```
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
Traceback (most recent call las…
```
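The quoted message appears to come from transformers' CLIP configuration handling when a checkpoint's config carries both `text_config` and `text_config_dict`; the truncated traceback that follows is a separate failure. A minimal, hedged sketch of the kind of load that can surface the message is below (the model id is an illustration only, not the model from this report):

```python
# Hedged sketch: loading a CLIP-style checkpoint with transformers. Checkpoints
# whose config.json still contains `text_config_dict` print the warning quoted
# above; the model id here is only an example.
from transformers import CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
```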