-
I had a chance to reflect after PTC / CUDA-MODE and wanted to share some thoughts on future plans for sparsity in torchao.
## **Current State**
There are two components of sparsity, accuracy and…
-
### The model to consider.
https://huggingface.co/SparseLLM/prosparse-llama-2-7b
### The closest model vllm already supports.
Llama
### What's your difficulty of supporting the model you want?
So…
-
I'm looking for guidance on how to test the sparsity of MLP and Attention layers. Could you provide some advice?
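One common way to check sparsity is to measure the fraction of zero-valued weights per layer. A minimal sketch, assuming the MLP/attention layers are `nn.Linear` modules whose names contain the usual substrings (`layer_sparsity` and the keyword list are hypothetical, not part of any library API):

```python
import torch
import torch.nn as nn

def layer_sparsity(model, keywords=("mlp", "attn", "attention")):
    """Return {layer_name: fraction of zero weights} for matching Linear layers.
    Hypothetical helper; the keyword list is an assumption about layer naming."""
    report = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear) and any(k in name.lower() for k in keywords):
            w = module.weight
            report[name] = (w == 0).float().mean().item()
    return report
```

Iterating over `named_modules()` rather than `named_parameters()` makes it easy to filter by module type as well as by name.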
-
Congratulations on your work being accepted to EMNLP. I'd like to ask how the dataset in Section 4.4 was split — is there specific code or a copy of the dataset available?
-
Hello, I'm trying to train YOLOv8-large in int4 format. I took the training recipe available at [sparsezoo](https://sparsezoo.neuralmagic.com/models/yolov8-l-coco-pruned85_quantized?hardware=deepspars…
-
Hi team,
First of all, thanks to the team for building such a good package for us to use.
I followed the example _Counterfactual with Reinforcement Learning (CFRL) on Adult Census_ to …
-
Hello, I created a test script, which I was testing on an Aarch64 platform, for DistilBERT inference using the wanda sparsifier:
```python
import torch
from transformers import BertForSequenceClassificatio…
```
-
Hi,
I am trying to prune Mistral 7B (https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), and while I was able to run the magnitude-pruning commands successfully, I was facing issues with…
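For reference, magnitude pruning of a single linear layer can be sketched with PyTorch's built-in pruning utilities (this is a generic illustration, not the specific commands from the repo in question):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Minimal sketch: L1 (magnitude) pruning of one linear layer.
layer = nn.Linear(16, 16)
prune.l1_unstructured(layer, name="weight", amount=0.5)  # zero the 50% smallest-magnitude weights
prune.remove(layer, "weight")  # fold the mask into the weight, making pruning permanent

sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.2f}")  # → sparsity: 0.50
```

`prune.remove` is what makes the zeros permanent; without it, the mask is applied on the fly via a forward pre-hook.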
-
- Currently, basis functions for GLMM mode are selected via the following algorithm:
1. identify a maximum of $M = 5$ basis functions per individual subject
2. refit all subject-level basis …
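Step 1 above amounts to a per-subject top-$M$ selection. A minimal sketch, assuming each candidate basis function has some scalar fit criterion (`select_bases` and the score vector are hypothetical stand-ins for whatever criterion the package actually uses):

```python
import numpy as np

M = 5  # maximum basis functions retained per individual subject

def select_bases(scores, m=M):
    """Keep the indices of the m highest-scoring candidate basis functions
    for one subject. `scores` is any per-basis fit criterion (assumption)."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    return np.sort(order[:m])         # keep top-m, in original basis order
```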
-
This will serve as the main hub for issues across the tidymodels ecosystem regarding the implementation of sparse data in tibbles.
Right now we are still in the exploratory phase, with work happening…