-
### Describe the workflow you want to enable
I suggest adding two new transformers to scikit-learn, `LogTransformer` and `LogWithShiftTransformer`, which would add the functionality of applying a lo…
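For illustration, here is a minimal sketch of what the shifted variant might look like, built on scikit-learn's existing `BaseEstimator`/`TransformerMixin` machinery. The class name and the `shift` parameter follow the proposal above; nothing here is an existing scikit-learn API.

```python
# Minimal sketch of the proposed transformer; not part of scikit-learn today.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.utils.validation import check_array, check_is_fitted


class LogWithShiftTransformer(BaseEstimator, TransformerMixin):
    """Apply log(X + shift) feature-wise; the shift guards against non-positive values."""

    def __init__(self, shift=1.0):
        self.shift = shift

    def fit(self, X, y=None):
        X = check_array(X)
        self.n_features_in_ = X.shape[1]
        return self

    def transform(self, X):
        check_is_fitted(self, "n_features_in_")
        X = check_array(X)
        return np.log(X + self.shift)

    def inverse_transform(self, X):
        return np.exp(X) - self.shift
```

Because it follows the standard estimator interface, it would drop straight into a `Pipeline` alongside scalers and models.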
-
### Feature request
Optimize Transformers' image processors to decrease image processing time and reduce inference latency for vision models and VLMs.
### Motivation
The Transformers library relie…
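For context, a rough way to see where preprocessing time goes today is to compare the slow (PIL/NumPy-based) and fast (torchvision-backed) image processors on the same batch. The model id, image sizes, and batch size below are arbitrary; this is only a benchmarking sketch, not a proposed API change.

```python
# Rough benchmark sketch: time the slow vs. fast image processor on one batch.
# Model id and image/batch sizes are arbitrary choices for illustration.
import time
import numpy as np
from transformers import AutoImageProcessor

images = [np.random.randint(0, 256, (640, 480, 3), dtype=np.uint8) for _ in range(32)]

for use_fast in (False, True):
    processor = AutoImageProcessor.from_pretrained(
        "openai/clip-vit-base-patch32", use_fast=use_fast
    )
    start = time.perf_counter()
    for _ in range(10):
        processor(images=images, return_tensors="pt")
    elapsed = time.perf_counter() - start
    print(f"use_fast={use_fast}: {elapsed / 10:.3f} s per batch of {len(images)} images")
```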
-
1. Public code and paper link:
I have installed the following code: https://github.com/AILab-CVC/GroupMixFormer
Paper link: https://arxiv.org/abs/2311.15157
2. What does this work d…
-
I want to run the [sft](https://github.com/huggingface/peft/tree/main/examples/sft) example and I get some errors. Can you help me find the problem?
I ran [run_peft_fsdp.sh](https://github.com/huggin…
-
Hi,
we're trying to summarize a SmoothQuant LLaMA model, but this was reported:
```
Loading checkpoint shards: 100%|████████████████████████████████████████████████████| 3/3 [00:13=4.36
and torch>=2.1.1 t…
-
When building like this:
```
jetson-containers build llama-vision
```
```
-- L4T_VERSION=36.4.0
-- JETPACK_VERSION=6.1
-- CUDA_VERSION=12.6
-- PYTHON_VERSION=3.10
-- LSB_RELEASE=22.04 (ja…
-
# Supervised Transformer Network for Efficient Face Detection #
- Author: Dong Chen, Gang Hua, Fang Wen, Jian Sun
- Origin: https://arxiv.org/abs/1607.05477
- Related:
-
### Feature request
It would be nice to combine the benefits of flex attention and 4D masking.
Perhaps the Llama model could be a first case, allowing arbitrary 4D masks to be handled via an effic…
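For illustration, here is a minimal sketch of how an arbitrary dense 4D boolean mask could be routed into PyTorch's flex attention today through a `mask_mod` closure. The shapes and the random mask are made up, and this assumes PyTorch ≥ 2.5 with a CUDA device; it is not the Transformers integration being requested, just the underlying mechanism.

```python
# Sketch: feed an arbitrary 4D boolean mask (True = attend) into flex attention.
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

B, H, S, D = 2, 4, 128, 64
q = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
k = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
v = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)

# Hypothetical arbitrary 4D mask; in practice this would come from the user.
mask_4d = torch.rand(B, H, S, S, device="cuda") > 0.1

def mask_mod(b, h, q_idx, kv_idx):
    # Look up the dense mask; flex attention skips fully-masked blocks entirely.
    return mask_4d[b, h, q_idx, kv_idx]

block_mask = create_block_mask(mask_mod, B, H, S, S, device="cuda")
out = flex_attention(q, k, v, block_mask=block_mask)  # wrap in torch.compile for speed
```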
-
### Background
Currently, the project supports various hardware accelerators such as GPUs, but there is no support for NPUs. Adding NPU support could significantly benefit users who have access to …
-
### Link to the paper
[[arXiv:2005.14187] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing](https://arxiv.org/abs/2005.14187)
### Authors / Affiliations
Hanrui Wang, Zhanghao Wu, Zhijian Liu…