-
Thanks for your code.
However, when I run "CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --master_port 10025 --nproc_per_node=1 tools/relation_train_net.py --config-file "configs/SHA_GCL_…
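For anyone reproducing this, here is a minimal sketch (not code from the SHA-GCL repository) of the environment that `torch.distributed.launch --nproc_per_node=1` sets up for the training script; running something like this first can help separate launcher/rendezvous problems from problems inside `relation_train_net.py`:

```
import os

import torch
import torch.distributed as dist

# The launcher exports these rendezvous variables for every worker; with
# --nproc_per_node=1 there is exactly one worker, rank 0 of world size 1.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "10025")   # same port as --master_port above
os.environ.setdefault("RANK", "0")
os.environ.setdefault("WORLD_SIZE", "1")

# NCCL assumes a visible CUDA device (CUDA_VISIBLE_DEVICES=0); fall back to gloo on CPU.
backend = "nccl" if torch.cuda.is_available() else "gloo"
dist.init_process_group(backend=backend, init_method="env://")
print(f"rank {dist.get_rank()} of {dist.get_world_size()} initialised via {backend}")
dist.destroy_process_group()
```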
-
I've been testing various fine-tuned versions of supported models on GKE. However, it gets stuck on ` Using the Hugging Face API to retrieve tokenizer config`.
These are the full logs:
```…
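In case it helps with debugging: a hedged sketch of one way to rule out a network/egress issue on GKE is to pre-download the tokenizer and force offline loading. The model ID and local path below are placeholders, not anything this project prescribes:

```
import os

from transformers import AutoTokenizer

MODEL_ID = "bert-base-uncased"      # placeholder; substitute the model actually being served
LOCAL_DIR = "/models/tokenizer"     # placeholder path baked into the image or a mounted volume

# Step 1 (run somewhere with working outbound network): download and save the tokenizer.
AutoTokenizer.from_pretrained(MODEL_ID).save_pretrained(LOCAL_DIR)

# Step 2 (inside the GKE pod): force offline loading so a blocked or slow Hub call cannot hang.
os.environ["HF_HUB_OFFLINE"] = "1"
tokenizer = AutoTokenizer.from_pretrained(LOCAL_DIR, local_files_only=True)
print(type(tokenizer).__name__)
```

If the offline load works but the normal startup still hangs, the problem is likely the pod's outbound access to the Hugging Face Hub rather than the model itself.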
-
Hi, thanks for your amazing work!
I have some doubts about the concept of the seed mentioned in your paper.
When using an existing image as the input, how should I decide the appearance seed?
Also, is t…
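For anyone else reading, a seed in this context is generally just the RNG state that determines a random draw, so the same seed reproduces the same sample. A tiny generic sketch (not code from the paper) of that behaviour:

```
import torch

def sample_appearance_noise(seed: int, shape=(1, 4, 64, 64)) -> torch.Tensor:
    # The seed fully determines the generator state, so the same seed
    # always yields the same "appearance" noise tensor.
    gen = torch.Generator().manual_seed(seed)
    return torch.randn(shape, generator=gen)

a = sample_appearance_noise(seed=42)
b = sample_appearance_noise(seed=42)
print(torch.equal(a, b))  # True: identical seed, identical sample
```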
-
## 🐛 Bug
The error seems to be related to `pixel_values` being padded.
```
WARNING:root:libtpu.so and TPU device found. Setting PJRT_DEVICE=TPU.
config.json: 100%|████████████████████████████…
-
### System Info
transformers-cli env
- `transformers` version: 4.24.0
- Platform: Linux-5.4.0-99-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.10.1
- Py…
-
Hi there. Thanks for the great library!
I have one issue regarding the usage of BERT-based models. I trained different models by fine-tuning them on my custom dataset (roberta, luke, deberta, xlm-rober…
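Although the full setup is cut off above, here is a hedged sketch of the usual way such fine-tuned checkpoints are loaded uniformly through the `Auto*` classes; the checkpoint directories are placeholders, not the reporter's actual paths:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint directories standing in for the fine-tuned models.
for ckpt in ["./roberta-finetuned", "./deberta-finetuned"]:
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForSequenceClassification.from_pretrained(ckpt)
    inputs = tokenizer("example sentence", return_tensors="pt")
    logits = model(**inputs).logits   # same call regardless of the underlying architecture
    print(ckpt, logits.shape)
```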
-
Hi,
I'm having issues running the RoBERTa script (for the US dataset).
I ran this line:
`!python run_language_modeling.py --output_dir=output_roberta_US --model_type=roberta --model_name_or…
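Since the command above is cut off, here is a hedged, self-contained sketch of the masked-LM fine-tuning that `run_language_modeling.py` performs for RoBERTa; the data file name and hyperparameters are placeholders, not the exact setup from this report:

```
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Placeholder text file standing in for the US dataset mentioned above.
dataset = load_dataset("text", data_files={"train": "train_US.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# Masked-LM collator randomly masks 15% of tokens per batch.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="output_roberta_US",
                         per_device_train_batch_size=8,
                         num_train_epochs=1)
Trainer(model=model, args=args, train_dataset=dataset, data_collator=collator).train()
```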
-
Submitting Author Name: Mike Mahoney
Submitting Author Github Handle: @mikemahoney218
Repository: https://github.com/Permian-Global-Research/rsi/
Version submitted:
Submission type: Standard
Edi…
-
**Getting the following error when training with runpod.io using the JoePenna Dreambooth Jupyter notebook:**
Global seed set to 23
Running on GPUs 0,
Loading model from model.ckpt
LatentDiffusion: Ru…
-
### 🐛 Describe the bug
''' checkpoint_path = './llama_relevance_results'
training_args = transformers.TrainingArguments(
#remove_unused_columns=False, # Whether or not to automatically r…
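For completeness, a hedged sketch of how that `TrainingArguments` block might be finished; every value below is a placeholder rather than the reporter's actual configuration:

```
import transformers

checkpoint_path = "./llama_relevance_results"
training_args = transformers.TrainingArguments(
    output_dir=checkpoint_path,
    remove_unused_columns=False,     # keep extra dataset columns the model's forward() may need
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    logging_steps=10,
    save_strategy="epoch",
)
print(training_args.output_dir)
```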