-
I followed the tutorial and didn't notice anything different, but running the training code throws an error:
```
(langgpt) root@intern-studio-50152060:~/InternLM/XTuner# xtuner train ./internlm2_chat_1_8b_qlora_alpaca_e3_copy.py
/root/.conda/envs/langgpt/lib/python3.10/site-packages/…
```
-
Hi there,
I was struggling with how to implement quantization with AutoAWQ, as you mentioned on the home page. I was trying to quantize the 7B Qwen2-VL, but even with 2 A100s (80 GB VRAM each) I still get CUDA OOM…
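For reference, a minimal AutoAWQ quantization sketch for a plain causal LM, following the library's README, is shown below; the model path, output path, and settings are placeholder assumptions rather than an exact Qwen2-VL recipe (vision-language models may need model-specific handling).
```python
# Minimal AutoAWQ sketch (assumptions: text-only causal LM, paths are placeholders).
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "Qwen/Qwen2-7B-Instruct"   # placeholder; swap in your checkpoint
quant_path = "qwen2-7b-awq"             # placeholder output directory

# Common 4-bit AWQ settings from the AutoAWQ examples.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the FP16 model and tokenizer, run calibration-based quantization, then save.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```
The `quant_config` values here are the ones suggested in the AutoAWQ examples (4-bit weights, group size 128, GEMM kernels); they are not tuned for this specific model.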
-
Hi there!
First of all, thank you so much for all of your work and the time put into answering everyone's questions in the Issues section!
I've been trying to finetune Donut for French visual q…
-
## Paper link
- [arXiv](https://arxiv.org/abs/2103.11681)
## Publication date (yyyy/mm/dd)
2021/03/24
CVPR 2021
## Summary
## TeX
```
% yyyy/mm/dd
@inproceedings{
2021transformer,
title={Transforme…
```
-
### Link to the paper
[[arXiv:2006.03677] Visual Transformers: Token-based Image Representation and Processing for Computer Vision](https://arxiv.org/abs/2006.03677)
### Authors and affiliations
Bichen Wu, Chenfeng Xu,…
-
Hi,
I have fine-tuned Qwen2-VL using Llama-Factory.
I successfully quantized the fine-tuned model as shown below:
```
from transformers import Qwen2VLProcessor
from auto_gptq import BaseQuantizeC…
```
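The snippet above is cut off; for context, a generic AutoGPTQ quantization flow for a text-only causal LM looks roughly like the sketch below. This is not the author's Qwen2-VL code: the model path, calibration text, and quantization settings are placeholder assumptions.
```python
# Generic AutoGPTQ sketch (assumptions: text-only causal LM, placeholder paths/settings).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base = "path/to/finetuned-model"        # placeholder
out_dir = "finetuned-model-gptq-4bit"   # placeholder

tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)

# Calibration examples: tokenized text samples the quantizer runs through the model.
examples = [tokenizer("Example calibration sentence for GPTQ quantization.")]

quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

# Load the FP16 model, quantize with the calibration examples, then save.
model = AutoGPTQForCausalLM.from_pretrained(base, quantize_config)
model.quantize(examples)
model.save_quantized(out_dir)
tokenizer.save_pretrained(out_dir)
```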
-
Can anyone figure out how I can fix this error?
# ComfyUI Error Report
## Error Details
- **Node Type:** SamplerCustomAdvanced
- **Exception Type:** RuntimeError
- **Exception Message:** Boolean…
-
### Mod Loader (Required)
NeoForge
### Minecraft Version(s) (Required)
1.21.1
### Mod Version(s) (Required)
visualworkbench-v21.0.1-1.21-Neoforge
### Other Mods Involved (Required)
Yes
### No…
-
### Describe the bug
Hi all,
I am running a custom transformer (LLaVA-style) based on Llama2 using PEFT, QLoRA, and FSDP.
It is runnable, but I get a strange error coming from WandB, where it seems t…
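The report is truncated above; for orientation, a minimal QLoRA-style setup with PEFT (4-bit bitsandbytes quantization plus a LoRA adapter) typically looks like the sketch below. The model id, target modules, and hyperparameters are placeholder assumptions, not the author's configuration, and the FSDP and WandB wiring is omitted.
```python
# Minimal QLoRA sketch (assumptions: placeholder model id, generic LoRA targets).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization config (bitsandbytes).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Load the base model in 4-bit, then attach a LoRA adapter via PEFT.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",           # placeholder base model
    quantization_config=bnb_config,
)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # placeholder target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```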
-
```
Traceback (most recent call last):
  File "test.py", line 77, in
    scores = predict_captions(model, dict_dataloader_test, text_field)
  File "test.py", line 26, in predict_captions
    out, _ =…
```