-
We have an output dimension of 768 with the biggest model, but we currently cut at a string length of 3000 -> we should instead cut by token count (perhaps approximated with the NLTK tokenizer) at about the dimens…
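As a rough illustration of the suggested change, here is a minimal sketch of cutting input by token count rather than character length, using NLTK's `word_tokenize` as the approximation mentioned above; the `max_tokens` value is a placeholder, not a number from this issue.
```python
# Sketch: truncate text by approximate token count instead of a fixed character length.
# max_tokens is a placeholder; the real limit depends on the embedding model used.
import nltk
from nltk.tokenize import word_tokenize

# word_tokenize needs the punkt resources (named "punkt" or "punkt_tab" depending on NLTK version).
nltk.download("punkt", quiet=True)
nltk.download("punkt_tab", quiet=True)

def truncate_by_tokens(text: str, max_tokens: int = 256) -> str:
    """Return a prefix of `text` containing at most `max_tokens` NLTK tokens."""
    tokens = word_tokenize(text)
    if len(tokens) <= max_tokens:
        return text
    # Re-joining tokens only approximates the original span, which is fine for a length cut.
    return " ".join(tokens[:max_tokens])

# Usage: replaces the previous character-based cut, e.g. text[:3000].
snippet = truncate_by_tokens("some very long document ...", max_tokens=256)
```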
-
I did everything according to the instructions
```
git clone https://github.com/black-forest-labs/flux
python -m venv .venv
source .venv/Scripts/activate
pip install -e ".[all]"
```
then run …
-
I'm not sure what's going on after setting up the proper environment and testing the first inference with a single image input using LLaVA OneVision.
```
----------------------------------------…
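For context, a minimal single-image inference sketch is shown below, assuming the Hugging Face transformers port of LLaVA OneVision; the checkpoint id, image path, and prompt are placeholders, not taken from this report.
```python
# Sketch: single-image inference with LLaVA OneVision via transformers.
# Checkpoint id, image path, and prompt are placeholders.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

model_id = "llava-hf/llava-onevision-qwen2-0.5b-ov-hf"  # placeholder checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("example.jpg")  # placeholder image
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    }
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```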
-
Models
- [ ] https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2
- [ ] https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-cos-v1
- [ ] https://huggingface.co/sentence-tra…
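To compare these candidates, a small smoke-test sketch like the following could be used to check each model's embedding dimension and sequence-length limit (assuming the sentence-transformers library; the sample sentence is a placeholder):
```python
# Sketch: load each candidate model and report embedding shape and max sequence length.
from sentence_transformers import SentenceTransformer

candidates = [
    "sentence-transformers/all-MiniLM-L12-v2",
    "sentence-transformers/multi-qa-MiniLM-L6-cos-v1",
]

for name in candidates:
    model = SentenceTransformer(name)
    embedding = model.encode(["A placeholder sentence for a quick smoke test."])
    print(name, embedding.shape, model.max_seq_length)
```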
-
### Environment
```text
(aws_neuronx_venv_transformers_neuronx) ubuntu@ip-172:~/vllm$ python --version
Python 3.10.12
(aws_neuronx_venv_transformers_neuronx) ubuntu@ip-172:~/vllm$ pip list | grep …
-
### System Info
```Shell
accelerate==1.1.0
```
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Acce…
-
### System Info
@huggingface/transformers 3.0.0-alpha.19, Mobile Safari, iOS 17.6.1
### Environment/Platform
- [X] Website/web-app
- [ ] Browser extension
- [ ] Server-side (e.g., Node.js, Deno, Bu…
-
### System Info
```shell
I'm compiling a fine-tuned Llama 3.1 70B model on an inf2.48xlarge machine with the system info below. I'm using neuronX TGI 0.0.25 with AWS SageMaker. I get the below err…
-
First of all: Thanks a lot for open-sourcing your work!
## What this issue is about
While trying to reproduce some of the results mentioned in your paper, I noticed that the pinned version of the…
-
Hi,
thank you very much for this clear code. I wonder whether you plan to integrate it into the Transformers Trainer. That way, we could run this code during evaluation directly with the Trans…
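One possible shape for such an integration, purely as a sketch, would be to wrap the evaluation logic in a `TrainerCallback` so it runs on each evaluation pass; `run_custom_evaluation` below is a hypothetical stand-in for the code discussed in this issue.
```python
# Sketch: hooking custom evaluation code into the Transformers Trainer via a callback.
# run_custom_evaluation is a hypothetical placeholder for the code discussed above.
from transformers import TrainerCallback


def run_custom_evaluation(model, metrics):
    """Placeholder for the evaluation logic this issue asks to integrate."""
    pass


class CustomEvalCallback(TrainerCallback):
    def on_evaluate(self, args, state, control, model=None, metrics=None, **kwargs):
        # Called by the Trainer at the end of each evaluation pass.
        run_custom_evaluation(model, metrics)


# Usage: trainer = Trainer(..., callbacks=[CustomEvalCallback()])
```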