-
Hi! I've been experimenting with this model for a few things, and so far I like where it's going.
I want to attempt some fine-tuning, so I followed the same notebook found here: https://github.com/…
-
### Software Environment
```Markdown
- paddlepaddle:
- paddlepaddle-gpu: 3.0.0b1
- paddlenlp: 3.0.0b0.post20240814
```
### Duplicate Issue
- [X] I have searched the existing issues
### Error Description
```Markdown
Running inference in dynamic graph mode raises an attribute error…
-
1. PARE: Part Attention Regressor for 3D Human Body Estimation (2021)
image --> volumetric features (before the global average pooling) --> part branch: estimates attention weights + feature branch: performs S…
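The core of that pipeline is soft-pooling the volumetric features with one spatial attention map per body part. Here is a minimal NumPy sketch of that pooling step, under the assumption that the part branch outputs one logit map per part and attention is a softmax over spatial locations; the function name and shapes are illustrative, not PARE's actual API.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def part_attention_pool(features, part_logits):
    """Soft-pool one feature vector per body part.

    features:    (C, H, W) volumetric features before global average pooling
    part_logits: (P, H, W) one spatial attention logit map per part
    returns:     (P, C) attention-weighted feature vector per part
    """
    C, H, W = features.shape
    P = part_logits.shape[0]
    # softmax over the H*W spatial locations, independently per part
    attn = softmax(part_logits.reshape(P, H * W), axis=-1)
    flat = features.reshape(C, H * W)
    # weighted sum over spatial locations: (P, H*W) @ (H*W, C) -> (P, C)
    return attn @ flat.T

# toy shapes: 8 channels, a 4x4 feature grid, 3 body parts
pooled = part_attention_pool(np.random.randn(8, 4, 4), np.random.randn(3, 4, 4))
print(pooled.shape)  # (3, 8)
```

With uniform (all-zero) logits, this reduces exactly to global average pooling per part, which is why the part branch can be seen as a learned, part-specific replacement for the global average pool.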
-
I am trying to run the int4 quantization examples from `examples/3.x_api/pytorch/nlp/huggingface_models/language-modeling/quantization/weight_only`, but a package is missing from the requirement.tx…
-
Hello, I tried to train Llama3.2 3B. It's a full finetune, not a LoRA, but Unsloth always crashes under varying conditions when the model should be saved. Hardware was RunPod in all cases, different c…
-
I am running onediff/benchmarks/image_to_video.py
### Your current environment information
Collecting environment information...
PyTorch version: 2.1.0a0+29c30b1
Is debug build: False
CUDA …
-
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
@torch.compile(backend="turbine_cpu")
def test_gpt2_demo():
    tokenizer = AutoTokenizer.from_pretrained("gp…
-
Hi, I am getting the following error during inference after the training is completed.
File "v2_main.py", line 156, in
generated_ids = model.generate(**inputs, max_new_tokens=40)
File "/home/u…
-
Running this command
```
CUDA_VISIBLE_DEVICES=0 python3 model/llama.py ShiftAddLLM/Llama-2-70b-wbits2-acc
```
I'm seeing this error
```
CUDA extension not installed.
Loading checkpoin…
-
After running the plugin, I got the error "This modeling file requires the following packages that were not found in your environment: flash_attn. Run `pip install flash_attn`". I tried to install it with `pip install flash_attn`, but got the following error:
C:\User…