-
## error log | 日志或报错信息 | ログ
The output is all zeros or NaN
## context | 编译/运行环境 | バックグラウンド
Same on both Windows and Linux
## how to reproduce | 复现步骤 | 再現方法
Here is my param file:
7767517
49 49
Input input_x:0 0 1 i…
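Not from the original report, but a minimal sketch of how I would dump the raw output through the pyncnn bindings to confirm the zeros/NaN (the input blob name `input_x:0` is taken from the param above; the file names, input shape, and output blob name are hypothetical):
```
import numpy as np
import ncnn

net = ncnn.Net()
net.load_param("model.param")   # hypothetical file names
net.load_model("model.bin")

x = np.random.rand(1, 224, 224).astype(np.float32)   # hypothetical input shape

ex = net.create_extractor()
ex.input("input_x:0", ncnn.Mat(x))    # blob name taken from the param above
_, out = ex.extract("output")         # hypothetical output blob name

out = np.array(out)
print("all zeros:", not out.any(), "| any NaN:", np.isnan(out).any())
```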
-
### Describe the issue
I did QAT quantization on a CNN model; when I export it to an ONNX model, I get slower inference than with the TorchScript QAT model.
The result is:
torchscript: 4.798517942428589 …
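For reference, the timings above came from a plain wall-clock loop; a minimal sketch of that kind of measurement, assuming hypothetical file names, input shape, and repeat counts (not the original benchmark code):
```
import time
import numpy as np
import torch
import onnxruntime as ort

x = np.random.rand(1, 3, 224, 224).astype(np.float32)   # hypothetical input shape

# TorchScript QAT model
ts_model = torch.jit.load("model_qat.pt").eval()          # hypothetical file name
xt = torch.from_numpy(x)
with torch.no_grad():
    for _ in range(10):                                    # warm-up
        ts_model(xt)
    t0 = time.perf_counter()
    for _ in range(100):
        ts_model(xt)
    print("torchscript:", time.perf_counter() - t0)

# Exported ONNX model
sess = ort.InferenceSession("model_qat.onnx", providers=["CPUExecutionProvider"])
name = sess.get_inputs()[0].name
for _ in range(10):                                        # warm-up
    sess.run(None, {name: x})
t0 = time.perf_counter()
for _ in range(100):
    sess.run(None, {name: x})
print("onnx:", time.perf_counter() - t0)
```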
-
When running `pytest test/python_fe` on the latest version, it returns:
```
graph.validate()
graph.build_operation_graph()
graph.create_execution_plans([cudnn.heur_mode.A, cudnn…
-
In the current implementation, ReLU is called as a function after each convolution layer.
The guided back-propagation tutorials I can find online apply the hook function when detecting the ReLU …
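For context, a minimal sketch of the module-hook pattern those tutorials rely on; it assumes the network exposes `nn.ReLU` modules (rather than `F.relu` calls), which is exactly what the current implementation does not do. The model here is illustrative:
```
import torch
import torch.nn as nn

def guided_relu_hook(module, grad_input, grad_output):
    # Guided backprop: only let positive gradients flow back through ReLU
    return tuple(torch.clamp(g, min=0.0) if g is not None else None
                 for g in grad_input)

model = nn.Sequential(                      # illustrative model
    nn.Conv2d(3, 8, 3), nn.ReLU(),
    nn.Conv2d(8, 8, 3), nn.ReLU(),
)

for m in model.modules():
    if isinstance(m, nn.ReLU):              # the hook attaches to ReLU *modules*,
        m.register_full_backward_hook(guided_relu_hook)  # F.relu calls are never seen

x = torch.randn(1, 3, 32, 32, requires_grad=True)
model(x).sum().backward()
```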
-
### System Information
OpenCV version: 4.8.0 vs. 4.5.2, compiled from source
Operating System: both Windows and Linux
Compiler: GCC 11
### Detailed description
During regression testing between ve…
-
In the seq2seq tutorial, in the ["Simple decoder"](https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html#simple-decoder) scheme and code, there is a ReLU block after the word embedding. It …
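For reference, the pattern in question is roughly the following (paraphrased from my reading of the tutorial's `DecoderRNN`, not copied verbatim):
```
import torch.nn as nn
import torch.nn.functional as F

class DecoderRNN(nn.Module):
    def __init__(self, hidden_size, output_size):
        super().__init__()
        self.embedding = nn.Embedding(output_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, output_size)

    def forward(self, input, hidden):
        output = self.embedding(input).view(1, 1, -1)
        output = F.relu(output)              # the ReLU right after the word embedding
        output, hidden = self.gru(output, hidden)
        return F.log_softmax(self.out(output[0]), dim=1), hidden
```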
-
ReLU is quite a simple function, and I would expect that different definitions would all be optimized to the fastest calculation, and thus that all the different definitions would have the same speed. Howev…
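A minimal sketch of the kind of comparison I have in mind, timing a few equivalent ReLU definitions on the same tensor (sizes and repeat counts are arbitrary):
```
import torch
from torch.utils import benchmark

x = torch.randn(1 << 20)

defs = {
    "F.relu":        "torch.nn.functional.relu(x)",
    "clamp":         "x.clamp(min=0)",
    "max with 0":    "torch.max(x, torch.zeros_like(x))",
    "mask multiply": "x * (x > 0)",
}

for name, stmt in defs.items():
    timer = benchmark.Timer(stmt=stmt, globals={"torch": torch, "x": x})
    print(name, timer.timeit(1000))
```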
-
I might be reading it incorrectly, but it looks like you don't apply the activation function to the final output layer? (should that be applied, in this context?)
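For what it's worth, leaving the final layer without an activation is the usual pattern when the outputs are raw logits fed into a loss such as `CrossEntropyLoss`; a small illustrative sketch (not the code under review):
```
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),   # hidden layers get the activation
    nn.Linear(32, 10),              # final layer stays linear: raw logits
)
loss_fn = nn.CrossEntropyLoss()      # applies log-softmax internally
```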
-
## 🐛 Bug
I am observing a large divergence when using DeepLiftShap on a model with ReLU activations (or any type of activation) but not when using `torch.nn.Identity` instead. This is pretty puzzli…
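A minimal sketch of the kind of comparison showing the effect, swapping `nn.ReLU` for `nn.Identity` in an otherwise identical toy model (the model, data, and baselines are my own stand-ins, not the original setup):
```
import torch
import torch.nn as nn
from captum.attr import DeepLiftShap

def make_model(act):
    return nn.Sequential(nn.Linear(8, 8), act, nn.Linear(8, 1))

inputs = torch.randn(4, 8)
baselines = torch.randn(20, 8)   # DeepLiftShap expects a distribution of baselines

for act in (nn.ReLU(), nn.Identity()):
    attr, delta = DeepLiftShap(make_model(act)).attribute(
        inputs, baselines, return_convergence_delta=True)
    print(type(act).__name__, "max |delta|:", delta.abs().max().item())
```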
-
### Expected behavior
Expect quantized leaky_relu to be supported when using `relay.frontend.from_pytorch` to import a QAT model.
### Environment
TVM main branch, for x86
### Steps to reproduce
…
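The repro steps above are truncated, so purely as an illustration of the import path involved, here is a hypothetical minimal example of the kind of QAT model and `relay.frontend.from_pytorch` call this concerns (layer sizes, names, and the dummy forward pass standing in for QAT training are all my assumptions):
```
import torch
import torch.nn as nn
from tvm import relay

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.act = nn.LeakyReLU(0.1)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.act(self.conv(self.quant(x))))

x = torch.randn(1, 3, 32, 32)
model = Net()
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
torch.quantization.prepare_qat(model, inplace=True)
model(x)          # stand-in for the QAT training loop, just to populate observers
model.eval()
qmodel = torch.quantization.convert(model)

scripted = torch.jit.trace(qmodel, x)
# Expected: the import handles the quantized leaky_relu op
mod, params = relay.frontend.from_pytorch(scripted, [("input", [1, 3, 32, 32])])
```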