-
### Describe the bug
In frameworks/transformers.py, when initializing a runnable, the code uses torch to detect the framework of the transformers pretrained model.
But when we prepare the bento models, the def…
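
For context, below is a minimal sketch of the kind of torch-based detection described; the function name `infer_framework` and its return values are assumptions for illustration, not BentoML's actual code. It shows why a torch-only check is fragile for non-PyTorch models:

```python
# Hypothetical sketch of torch-based framework detection;
# NOT BentoML's actual implementation.
import torch

def infer_framework(model) -> str:
    """Guess the framework of a transformers pretrained model."""
    # PyTorch models subclass torch.nn.Module, so this check finds them.
    if isinstance(model, torch.nn.Module):
        return "pt"
    # TensorFlow/Flax models never subclass torch.nn.Module, so a
    # torch-only check cannot positively identify them, and the import
    # above fails outright in an environment without torch installed.
    return "tf"
```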
-
### System Info
On the transformers master branch.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supp…
-
### Bug Description
Recently, @raghavdixit99 from the LanceDB team contributed a multi-modal example ([PR](https://github.com/run-llama/llama_index/pull/10530)), which was also put out on social media b…
-
This issue contains the test results for the upstream sync, develop PR, and release testing branches. Comment 'proceed with rebase' to approve. Close when maintenance is complete or there will be prob…
-
Can you add a hook so that the supported games read the files without repacking? The games can read the "data" folder, but only when I delete all the other datax.dat files (and if it's a decrypted version in some ga…
-
This happens whenever I try to download any file:
```
(base) root@autodl-container-a42d11ae3c-3abcf0ec:~/autodl-tmp# aliyunpan
Tip: the up/down arrow keys cycle through command history.
Tip: Ctrl + A / E jump to the beginning / end of the command.
Tip: type help for help.
aliyunpan:/ tty0013$ d /MongoDB数据库讲课笔记.pdf …
```
-
### System Info
This affects many versions of transformers, up to and including the current release.
### Who can help?
@ArthurZucker @amyeroberts
### Information
- [ ] The official example scripts
- [X…
-
Are the vision tokens directly concatenated with the text tokens and then fed into Steve? A projection layer is typically needed for vision-text alignment, as in LLaVA, so this confuses me.
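
For reference, here is a minimal, hypothetical LLaVA-style projection sketch in PyTorch; the module name, dimensions, and token counts are assumptions for illustration, not Steve's actual architecture:

```python
# Illustrative sketch: project vision features into the text embedding
# space, then concatenate with text token embeddings.
import torch
import torch.nn as nn

class VisionProjector(nn.Module):
    """Map vision-encoder features into the text embedding space so
    they can be concatenated with text token embeddings."""
    def __init__(self, vision_dim: int = 1024, text_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, text_dim)

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        return self.proj(vision_feats)

vision_feats = torch.randn(1, 256, 1024)  # [batch, vision tokens, vision_dim]
text_embeds = torch.randn(1, 32, 4096)    # [batch, text tokens, text_dim]
projector = VisionProjector()
inputs = torch.cat([projector(vision_feats), text_embeds], dim=1)
print(inputs.shape)  # torch.Size([1, 288, 4096])
```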
-
After following the installation steps and downloading Coqui's model, I'm getting the following error when loading the model. What am I missing?
![image](https://github.com/kanttouchthis/text_gener…
-
Hi. I am using exactly the same code as yours in run_sft.sh:
```
#!/bin/bash
# Resolve the project root and put it on PYTHONPATH.
CUR_DIR=$(pwd)
ROOT=${CUR_DIR}
export PYTHONPATH=${ROOT}:${PYTHONPATH}
VISION_MODEL=openai/clip-vit-large-pa…