koa-fin / sep

Code release for "Learning to Generate Explainable Stock Predictions using Self-Reflective Large Language Models" https://arxiv.org/abs/2402.03659

merge_peft_adapter #1

Closed BUILDERlym closed 5 months ago

BUILDERlym commented 6 months ago

Hi, when I tried to run the examples, an issue appeared at `model = PeftModel.from_pretrained(model, peft_model_id)` in `merge_peft_adapter.py` (screenshot of the traceback attached).
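Not sure this is the exact cause here, but besides a peft version mismatch, a common reason `PeftModel.from_pretrained` fails on a local checkpoint is a missing or renamed adapter file. A minimal sanity check (the helper name and paths are my own, not from this repo):

```python
from pathlib import Path

def check_adapter_dir(peft_model_id: str) -> list:
    """Report files PeftModel.from_pretrained expects in a local adapter dir.

    Returns a list of missing items; an empty list means the layout looks OK.
    """
    adapter_dir = Path(peft_model_id)
    missing = []
    if not (adapter_dir / "adapter_config.json").exists():
        missing.append("adapter_config.json")
    # Older peft versions save adapter_model.bin, newer ones
    # adapter_model.safetensors -- either is acceptable.
    if not any(adapter_dir.glob("adapter_model.*")):
        missing.append("adapter_model.bin / adapter_model.safetensors")
    return missing

# Usage with a hypothetical adapter path:
# print(check_adapter_dir("./lora_adapter"))  # [] if the layout is complete
```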

sia-watsonlee commented 5 months ago

Hello, make sure to install the right version of peft from requirements.txt. I think that would solve the problem.

BUILDERlym commented 5 months ago

> Hello, make sure to install the right version of peft from requirements.txt. I think that would solve the problem.

I think that worked. But when I ran on an RTX 8000, it always hit OOM at `ppo_trainer.step`. When I tried to run on two GPUs, `trainer.train` in `supervised_finetune` raised the following error:

```
File "/miniconda3/envs/sep/lib/python3.11/site-packages/torch/nn/functional.py", line 3059, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
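For what it's worth, this particular assert from `cross_entropy` is often an out-of-range label (e.g. a token id ≥ vocab size). The CUDA error is reported asynchronously, so rerunning with `CUDA_LAUNCH_BLOCKING=1`, or on CPU, usually surfaces the real index error. A torch-free sketch of the check the kernel is effectively asserting (the function name and vocab size are illustrative; -100 is `cross_entropy`'s default `ignore_index`):

```python
def find_bad_labels(labels, vocab_size, ignore_index=-100):
    """Return positions whose label would trip cross_entropy's class-index
    assert: valid labels are ignore_index or 0 <= label < vocab_size."""
    return [i for i, t in enumerate(labels)
            if t != ignore_index and not (0 <= t < vocab_size)]

# Example: vocab of 32000, one corrupt label
labels = [5, 31999, -100, 32000]
print(find_bad_labels(labels, vocab_size=32000))  # → [3]
```

Scanning a batch of labels this way before the forward pass can pinpoint which sample (and which position) is corrupt.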

Did you run the experiments on multiple GPUs?

koa-fin commented 5 months ago

Yes, we ran the experiments on multiple GPUs.

Did you check if your packages are the same versions as those in requirements.txt? If I remember correctly, only specific versions of transformers and peft can work with the current codebase.
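A quick way to audit installed versions against the pins in requirements.txt (the helper name and the "NOT INSTALLED" handling are my own; this only handles simple `name==version` lines, not extras or markers):

```python
from importlib.metadata import version, PackageNotFoundError

def check_pins(requirements_path="requirements.txt"):
    """Compare installed package versions against '==' pins in a
    requirements file; return a list of (name, wanted, installed)."""
    mismatches = []
    with open(requirements_path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("#") or "==" not in line:
                continue
            name, wanted = line.split("==", 1)
            try:
                installed = version(name)
            except PackageNotFoundError:
                installed = "NOT INSTALLED"
            if installed != wanted:
                mismatches.append((name, wanted, installed))
    return mismatches

# Usage (run from the repo root):
# for name, wanted, got in check_pins():
#     print(f"{name}: requirements.txt wants {wanted}, installed {got}")
```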

BUILDERlym commented 5 months ago

> Yes, we ran the experiments on multiple GPUs.
>
> Did you check if your packages are the same versions as those in requirements.txt? If I remember correctly, only specific versions of transformers and peft can work with the current codebase.

Thanks, I think it was the PyTorch version; solved by pinning it to the one in requirements.txt.