-
I exported EfficientNet-B0/B1 with `--quantize=True` under TensorFlow 2.0:
```
python ./tpu/models/official/efficientnet/export_model.py `
--model_name=efficientnet-b0 `
--ckpt_dir=${some_…
-
Hi,
Could someone kindly tell me what the "128" in "Groupwise 4-bit (128)" indicates in https://github.com/pytorch/executorch/tree/main/examples/models/llama2?
Thank you.
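For context, in groupwise weight quantization the number in parentheses is conventionally the group size: how many consecutive weights share one quantization scale. Below is a minimal, hedged sketch of symmetric 4-bit groupwise quantization under that assumption (the function name and the `row` data are illustrative, not from the linked repo):

```python
def groupwise_4bit_quantize(row, group_size=128):
    """Quantize one weight row with a separate scale per `group_size` block.

    Under the group-size reading, "Groupwise 4-bit (128)" means every
    128 consecutive weights share one scale.
    """
    assert len(row) % group_size == 0
    quantized, scales = [], []
    for i in range(0, len(row), group_size):
        group = row[i:i + group_size]
        # Symmetric int4 range is [-8, 7]; map the group's max |w| to 7.
        scale = max(abs(x) for x in group) / 7.0 or 1.0
        scales.append(scale)
        quantized.extend(max(-8, min(7, round(x / scale))) for x in group)
    return quantized, scales

row = [0.05 * i for i in range(256)]   # 256 weights -> 2 groups of 128
q, scales = groupwise_4bit_quantize(row)
print(len(scales))  # 2 scales: one per 128-element group
```

A smaller group size gives finer-grained scales (better accuracy, more metadata to store); 128 is a common middle ground.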
-
Hi,
I want to convert a PyTorch model to ONNX and quantize it. I am referring to this example: https://github.com/intel/neural-compressor/blob/master/examples/pytorch/image_recognition/torchvision_models/…
-
### 1. System information
- OS Platform and Distribution: Windows 10
- TensorFlow installation: pip package
- TensorFlow library (version): 2.13.0
### 2. Code
```
import tensorflow as tf
…
-
2024-06-29 16:41:14,003 INFO [export-onnx.py:440] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_inte…
-
### Describe the bug
I tried to run weight-only quantization on OPT models by using scripts in examples/cpu/inference/python/llm
`OMP_NUM_THREADS=48 numactl -m 0 -C 0-47 python run.py --benchmark -…
-
### System Info
I am running on an A100 with 40 GB of GPU memory.
### Who can help?
@SunMarc and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scri…
-
### Search before asking
- [X] I have searched the HUB [issues](https://github.com/ultralytics/hub/issues) and found no similar bug report.
### HUB Component
Export
### Bug
I trained an Object D…
-
### System Info
Linux
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examp…
-
Status: Draft
Updated: 06/17/2024
# Objective
In this doc we describe the tensor-subclass-based quantization API for modeling users and developers.
# Modeling User API
Modeling users refer t…