-
How do I use the mnist_nn_testsuite.json file when performing model training (PyTorch) on the MNIST dataset? Thanks!
--- standalone, fate_version:1.7.1.1
-
A confusing/incorrect error is returned while trying to use the model file `mnist-1.onnx` from https://github.com/onnx/models/tree/main/validated/vision/classification/mnist/model
(The file `mnist-…
-
How do I save the model in the C++ API (mnist.cpp)?
Both `model.save(...)` and `torch::save(model, "mnisttrain.pkl")` raise errors.
-
**Debugging advice**
Converting a TF model to ONNX on s390x succeeds, but the resulting ONNX file includes the suspiciously large number 268632064 in a Reshape operator.
python3 -m tf2onnx.convert --opset 15 --fold_…
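Not part of the original report, but one way to sanity-check a constant like this: s390x is big-endian, and reinterpreting 268632064 with the opposite byte order gives 784 (28 × 28, the flattened MNIST image size), which points to a byte-order bug in how the Reshape shape tensor was written. A quick stdlib check:

```python
import struct

# The suspicious value seen in the Reshape operator on s390x.
suspect = 268632064

# Reinterpret the same 4 bytes with the opposite byte order:
# pack as big-endian, unpack as little-endian.
swapped = struct.unpack("<i", struct.pack(">i", suspect))[0]

print(swapped)  # 784 == 28 * 28, the flattened MNIST image size
```

If the swapped value is a plausible shape dimension, the exporter is likely emitting the raw tensor bytes in native (big-endian) order where ONNX expects little-endian.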
-
### 1. System information
- Occurs in Google Colab with TF 2.14
- Also verified with TF 2.7 (Anaconda) on Windows 10
### 2. Code
[Colab to reproduce issue](https://colab.research.google.com…
-
### Description
Spaced would benefit from an image-to-text recognition feature: select any part of the screen and collect the text present.
The model can be loaded on the backend and called …
-
Hi,
I'm wondering if readers could be made aware of the approximate training time for a few of the models. Since it will mostly depend on the hardware and software environment, maybe the hardware and software syste…
-
#### Issue Description
Basic 2 conv, 2 pooling, dense model for MNIST fails, but only in the snapshot build. It classifies everything as 0s (it works just fine in beta3).
When comparing the `Gr…
-
## 🐛 Bug
When functionalization is on (XLA_DISABLE_FUNCTIONALIZATION=0), I see that there are fewer aliased tensors. Jack has a patch to increase the number of aliased tensors https://github.com/py…
-
Error while running `onnc -mquadruple cortexm /tutorial/models/quantized_mnist/quantized_mnist.onnx --load-calibration-file=/tutorial/models/quantized_mnist/mnist_calibration.txt`
_Fatal: Un…