-
I just added a topic on the Hugging Face forum about limitations I found while trying out Hugging Face Optimum on text classification and text summarization tasks.
https://discuss.huggingfac…
-
Nim's exception handling is currently tied to Nim's garbage collector, and
every raised exception triggers an allocation. For embedded systems this is
not optimal; see http://www.open-std.org/jtc1/sc…
-
Hi, thanks for the well-organized repository.
I've been following the classification tutorial that prunes and finetunes ResNet50 for Imagenette. The pruning seems to have worked, and both the PTH and O…
-
Hi,
Could you please clarify the difference between `end_pruning_step` and `policy_end_step` in the pruning config file (for example: https://github.com/IntelLabs/Model-Compression-Research-Package/b…
-
Hi, I found this project on [XNNPack](https://github.com/google/xnnpack)'s main page, since it is declared to support sparse inference (for example, models obtained by unstructured pruning) and mentioned…
-
Hi! I'm working on model compression, and I have run into some issues when running simplify on a pretrained efficientnet_lite0 model. I'm trying to run it all on Conv2D layers after pruning with pyto…
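For context, a minimal sketch of the pruning step described above, using plain `torch.nn.utils.prune` on a tiny stand-in model (the actual efficientnet_lite0 and the simplify tool from the issue are not reproduced here; the layer sizes and the 50% amount are illustrative assumptions):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Tiny stand-in for the pruned network; the real issue uses efficientnet_lite0.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1),
)

# Prune 50% of the weights (by L1 magnitude) in every Conv2d layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)

# Make pruning permanent: folds weight_orig * weight_mask back into a plain
# `weight` tensor. Simplification tools generally expect this folded form,
# since the reparametrized weight_orig/weight_mask pair confuses tracing.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.remove(module, "weight")

sparsity = (model[0].weight == 0).float().mean().item()
print(f"first conv sparsity: {sparsity:.2f}")
```

If `prune.remove` is skipped, the pruning masks remain as forward hooks on the modules, which is one common reason downstream graph tools fail on pruned models.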
-
# TorchServe Model Analyzer
## Tasks
Milestone 1
* [x] #1442
* [x] #1573
* [x] #1484
* [x] #1259
* [x] #1540
Milestone 2
* [x] #1506
* [x] #1504
* [ ] AutoML for inference
## Pr…
-
**Describe the bug**
When using a recipe, the parameter `mask_type: unstructured` works as expected:
```yaml
- !GMPruningModifier
mask_type: unstructured
```
But according to…
-
Hi, I wonder whether sparseml provides tools to calculate the memory consumption per sample, i.e., the feature map size in every layer. For the standard feature map size calculation I know there are some …
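Absent a dedicated utility, per-sample feature map sizes can be measured in plain PyTorch with forward hooks (a generic sketch, not a sparseml API; the model and input shape below are placeholders):

```python
import torch
import torch.nn as nn

def activation_sizes(model, sample):
    """Run one forward pass and record each layer's output size in bytes."""
    sizes = {}

    def make_hook(name):
        def hook(module, inputs, output):
            if torch.is_tensor(output):
                # bytes occupied by this layer's feature map for the sample
                sizes[name] = output.numel() * output.element_size()
        return hook

    handles = [m.register_forward_hook(make_hook(n))
               for n, m in model.named_modules() if n]  # skip the root module
    with torch.no_grad():
        model(sample)
    for h in handles:
        h.remove()
    return sizes

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
print(activation_sizes(model, torch.randn(1, 3, 32, 32)))
```

Each 8x32x32 float32 map here occupies 8 * 32 * 32 * 4 = 32768 bytes; summing the dictionary values gives an estimate of peak activation memory per sample (ignoring allocator overhead and in-place ops).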
-
Hi, I tried the yolov5 tutorials with `--recipe=yolov5s.pruned_quantized.md`.
QAT works, as I can see fake-quantized modules in Netron, but the number of parameters remains unchanged.
Are there any…
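One likely explanation (stated here as an assumption, not confirmed by the tutorial): unstructured pruning and QAT modify values in place, so tensor shapes, and hence the reported parameter count, never change; the effect shows up as zeroed entries instead. A quick check in plain PyTorch on a stand-in layer:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Linear(100, 10)  # placeholder for the YOLOv5 model
total_before = sum(p.numel() for p in model.parameters())

# Zero out 90% of the weights, then fold the mask into the weight tensor.
prune.l1_unstructured(model, name="weight", amount=0.9)
prune.remove(model, "weight")

total_after = sum(p.numel() for p in model.parameters())
nonzero = sum((p != 0).sum().item() for p in model.parameters())

print(total_before, total_after)  # identical: pruning only zeroes entries
print(nonzero)                    # much smaller: the remaining dense work
```

So an unchanged parameter count after pruning/QAT is expected; comparing nonzero counts (or inspecting sparsity per layer) is the more meaningful check.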
MrOCW updated 2 years ago