-
Dear all,
I'm trying to export a PyTorch model using `torch.onnx.export`. Since I have a custom layer in the model, I wrote the symbolic function for this module and everything worked. To export t…
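For readers landing here: the usual pattern for giving a custom layer an ONNX mapping is a `torch.autograd.Function` with a static `symbolic` method. A minimal sketch (the hand-rolled ReLU here is a toy stand-in for the custom layer, not the poster's actual module):

```python
import torch

class CustomRelu(torch.autograd.Function):
    """Toy stand-in for a custom layer; clamps negatives to zero."""

    @staticmethod
    def forward(ctx, x):
        return x.clamp(min=0)

    @staticmethod
    def symbolic(g, x):
        # Tell the exporter how to express this op in ONNX:
        # here we simply reuse the built-in Relu node.
        return g.op("Relu", x)

class Wrapper(torch.nn.Module):
    def forward(self, x):
        return CustomRelu.apply(x)

# torch.onnx.export(Wrapper(), torch.randn(1, 4), "model.onnx")
```

When the exporter traces the graph, it calls `symbolic` instead of trying to decompose `forward`, which is why a custom layer exports cleanly once the mapping exists.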
-
# [PaddlePaddle Hackathon 4] Model Suite Open-Source Contribution Task Collection
(This issue collects the tasks for the fourth PaddlePaddle Hackathon; for more details, see the [PaddlePaddle Hackathon 4 task overview](https://github.com/PaddlePaddle/Paddle/issues/50629).)
Note: for development, please refer to the [Contribution…
-
Hello everyone,
I am training a LeNet model built with brevitas. After training, I use `merge_bn` to merge the BatchNorm layers, but I notice a 5-7% drop in test accuracy between the merged and unmerged m…
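For anyone debugging a similar gap: in float arithmetic, folding BatchNorm into the preceding conv is exact, so an accuracy drop usually comes from quantization interacting with the rescaled folded weights rather than from the fold itself. A numpy sketch of the textbook fold (not brevitas' own `merge_bn` implementation):

```python
import numpy as np

def fold_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm(gamma, beta, mean, var) into conv weights w and bias b.

    w: (out_ch, in_ch, kh, kw), b: (out_ch,)
    """
    scale = gamma / np.sqrt(var + eps)          # per-output-channel scale
    w_folded = w * scale[:, None, None, None]   # rescale each output filter
    b_folded = (b - mean) * scale + beta        # fold mean/shift into the bias
    return w_folded, b_folded
```

Since the folded weights can land in a different range than the originals, re-quantizing them with the pre-fold scale/zero-point is one plausible source of a several-percent drop.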
-
Hello everyone, for academic and research purposes I am trying to understand the arithmetic behind a quantized convolution layer in TensorFlow Lite. For this purpose, I chose the EfficientNet-lite0 mod…
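As a reference point while reading the TFLite kernels: each real value is represented as `real = scale * (q - zero_point)`, products are accumulated in int32, and the accumulator is rescaled into the output's quantization parameters. A minimal dot-product sketch (per-tensor scales and symmetric int8 weights, which matches the common TFLite convention but is an assumption for EfficientNet-lite0 specifically):

```python
import numpy as np

def quantized_dot(q_in, z_in, s_in, q_w, s_w, z_out, s_out):
    """One output element of a quantized conv/FC layer.

    q_in: uint8 activations with zero point z_in and scale s_in
    q_w:  int8 weights, symmetric (zero point 0), scale s_w
    """
    # Integer accumulation, as the kernel does in int32.
    acc = np.sum((q_in.astype(np.int32) - z_in) * q_w.astype(np.int32))
    # Rescale into the output quantization. Real kernels implement this
    # multiplier as a fixed-point int32 multiply plus shift, not a float.
    multiplier = (s_in * s_w) / s_out
    return int(round(acc * multiplier)) + z_out
```

Dequantizing the returned value with `(q_y - z_out) * s_out` should land close to the float dot product, which is a handy sanity check when stepping through a real layer's tensors.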
-
Currently, in [Q4_0](https://github.com/ggerganov/ggml/pull/27) quantization we choose the scaling factor for each group of 32 weights as `max(abs(x_i))/7`. It is easy to see that this is suboptimal.
…
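To make the suboptimality concrete, here is a small numpy illustration (my own sketch, not the ggml code): the max-based scale is just one point on the round-trip-error curve, and a brute-force scan over nearby scales can find a lower-error choice for the same 4-bit grid:

```python
import numpy as np

def dequant_error(block, d):
    """Squared round-trip error of quantizing `block` to [-7, 7] with scale d."""
    q = np.clip(np.round(block / d), -7, 7)
    return float(np.sum((q * d - block) ** 2))

def q4_scales(block, n=200):
    """Compare the Q4_0-style scale with a brute-force scan around it."""
    d0 = np.max(np.abs(block)) / 7.0                      # Q4_0's choice
    candidates = np.append(np.linspace(0.7 * d0, 1.1 * d0, n), d0)
    d_best = min(candidates, key=lambda d: dequant_error(block, d))
    return d0, d_best
```

On random Gaussian blocks the scanned scale typically achieves a lower RMSE than `d0`; whether that per-block gain translates into better end-to-end model quality is a separate question.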
-
Please make sure that this is a feature request. As per our [GitHub Policy](https://github.com/tensorflow/tensorflow/blob/master/ISSUES.md), we only address code/doc bugs, performance issues, feature …
-
## 🚀 Feature
To reduce training memory, we want to quantize the saved tensors. Instead of storing the original tensor for backward propagation, we want to store a quantized tensor. We proposed an app…
-
~enhancement
Alongside a description of your problem/question/feature suggestion, please also include a support log ID.
To make it easier for users to manage documents at any time, it is sometim…
-
Hello,
Thank you very much for sharing the project. I am interested in using torchPQ inside a deep net (implemented in PyTorch), where I will call torchPQ in each forward pass. I was wondering, is …
-
Consider the two definitions of DGM4NLP:
(1) The model can generate texts (Transformer, GPT3, ...)
(2) In addition to (1), given the same input, the model can generate a set of different texts (VA…