-
### 🐛 Describe the bug
''' checkpoint_path = './llama_relevance_results'
training_args = transformers.TrainingArguments(
#remove_unused_columns=False, # Whether or not to automatically r…
-
╭──────────────────────── Traceback (most recent call last) ─────────────────────────╮
│ /root/llm_riskraider/Baichuan-13B/cli_demo.py:169 in │
│ …
-
### System Info
python version: 3.11.9
transformers version: 4.44.2
accelerate version: 0.33.0
torch version: 2.4.0+cu121
### Who can help?
@gante
### Information
- [X] The official example sc…
-
[Local collaborative autoencoders](https://sci-hub.ru/https://dl.acm.org/doi/abs/10.1145/3437963.3441808)
[Local latent space models for top-n recommendation](https://sci-hub.ru/https://dl.acm.org/do…
-
Aloha BIOMOD2 community! First of all, I want to thank the developers for the work put into BIOMOD2; it has been an incredibly useful tool for my work.
Recently I have been experiment…
-
Hi @tomaarsen,
Thanks a lot for your amazing work!
While running the `trainer.train()` cell in [the "getting_started.ipynb"](https://github.com/tomaarsen/SpanMarkerNER/blob/main/notebooks/getti…
-
https://github.com/ArrowLuo/CLIP4Clip/blob/508ffa3de39ba0563a03199c440ab602a72e9b6f/modules/modeling.py#L400
```
if self.training:
visual_output = allgather(visual_output, sel…
-
Running into this issue when trying to generate with more than about 130 tokens of context on my M40. Generation works fine for small contexts, but errors out once the context grows beyond roughly 130 tokens. max…
-
If you (a) run the entire model, then (b) define a new stage, and (c) try to run that stage, you get this very unhelpful error:
```r
> run("imsurvey", "analyze/involved_local_EA x donate")
Lo…
-
### System Info
CPU Architecture: x86_64
CPU/Host memory size: 1024Gi (1.0Ti)
GPU properties:
GPU name: NVIDIA GeForce RTX 4090
GPU mem size: 24Gb…