-
Hello,
I implemented a brutally simple infinite-width model, calling the kernel_fn with a batch of a single vector.
When I run this on CPU, I don't run into any exorbitant memory issues.
How…
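For context, the infinite-width NNGP kernel of a one-hidden-layer ReLU network has a closed form (the order-1 arc-cosine kernel), so a "batch of a single vector" is cheap to check by hand. A minimal numpy sketch (my own illustration, not the library's `kernel_fn`):

```python
import numpy as np

def relu_nngp_kernel(x1, x2):
    """Closed-form NNGP kernel of a one-hidden-layer ReLU network.

    K(x1, x2) = ||x1|| * ||x2|| * (sin t + (pi - t) cos t) / (2 pi),
    where t is the angle between x1 and x2 (arc-cosine kernel, order 1).
    """
    n1, n2 = np.linalg.norm(x1), np.linalg.norm(x2)
    cos_t = np.clip(np.dot(x1, x2) / (n1 * n2), -1.0, 1.0)
    t = np.arccos(cos_t)
    return n1 * n2 * (np.sin(t) + (np.pi - t) * np.cos(t)) / (2 * np.pi)

x = np.ones(3)              # a "batch" of a single vector
k = relu_nngp_kernel(x, x)  # diagonal entry: t = 0, so K = ||x||^2 / 2
assert np.isclose(k, np.dot(x, x) / 2)
```

For a single vector the diagonal entry reduces to `||x||^2 / 2`, which is why this case uses negligible memory.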
-
We want to remove the [ReLU operations](https://github.com/sony/model_optimization/blob/8461a41fa8b6b57ce422fdd5f7301e9a9c8a1b20/tutorials/mct_model_garden/models_pytorch/yolov8/yolov8.py#L269-L272) i…
-
On line 196,
`gate_network = tf.contrib.layers.fully_connected(
inputs=input,
num_outputs=subexpert_nums,
activation_fn=tf.nn.relu,
weights_regularizer=l2_r…
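Since `tf.contrib` was removed in TF2, it may help to note what this gate computes. A plain-numpy sketch of the same layer (fully connected, ReLU activation, L2 weight penalty); the shapes and the `l2_scale` value are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
subexpert_nums = 4   # number of sub-experts, as in the snippet
l2_scale = 1e-4      # hypothetical regularizer scale

x = rng.normal(size=(8, 16))               # batch of 8 inputs, dim 16
W = rng.normal(size=(16, subexpert_nums))  # gate-layer weights
b = np.zeros(subexpert_nums)

gate = np.maximum(x @ W + b, 0.0)          # fully connected + ReLU
l2_penalty = l2_scale * np.sum(W ** 2)     # the weights_regularizer term

assert gate.shape == (8, subexpert_nums)
assert (gate >= 0).all()
```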
-
"ReLU" layers are currently unsupported.
"DepthwiseConv2D" layers are currently unsupported.
-
Hi there!
While reading your paper, I noticed that Lemma 3.3 is for networks with ReLU activations, and it says that the Lipschitz constant used in Lemma 3.1 can be replaced by the maximum norm of direc…
-
**Describe the current behavior**
A clear and concise explanation of what is currently happening.
**Describe the expected behavior**
A clear and concise explanation of what you expected to ha…
-
### Describe the issue
We are trying to quantize our proprietary model based on RetinaNet using TensorRT's model optimization library. The following warning was raised: **"Please consider running pre…
-
Training works fine on CPU.
cuda 2.0.0+cu118 is installed.
The GPU shows as enabled.
[INFO 2023-08-02 14:31:54,016 _log_device_info:1798] GPU available: True, used: True
PS D:\CnOCR> cnocr train -m densenet_lite_136-fc --index-dir data/im…
-
Some strange code appears in the auto-generated file "*_pnnx.py"; it seems to use the libtorch API in a Python file.
My code:
```
import torch
import torch.nn as nn
import pnnx
class TestModel(nn.…
-
**Describe the bug**
It seems as if Relu nodes that immediately follow Conv nodes are getting dropped during quantization (if included in ops_to_quantize). If I understand things correctly, then this…
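As a side note on why dropping the ReLU can be valid: with asymmetric quantization whose zero point represents real 0.0, the output clamp of the quantized Conv already implements the ReLU. A toy numpy sketch (my own illustration, not the library's actual fusion code):

```python
import numpy as np

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    """Affine quantization to uint8 with clamping to [qmin, qmax]."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, qmin, qmax).astype(np.uint8)

x = np.array([-1.5, -0.2, 0.0, 0.7, 2.0], dtype=np.float32)  # Conv output
scale, zp = 2.0 / 255, 0  # output range [0, 2]: zero_point 0 maps real 0.0

# Path A: apply ReLU in float, then quantize.
a = quantize(np.maximum(x, 0.0), scale, zp)
# Path B: quantize directly; clamping at the zero point acts as the ReLU.
b = quantize(x, scale, zp)

assert np.array_equal(a, b)  # the explicit Relu node is redundant
```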