-
Hi,
we are training this on Google Colab on a GPU, but with the LJSpeech dataset it is taking a lot of time, so we are thinking of utilizing the TPU provided in Colab. We are trying to make changes to the co…
-
Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`…
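In case it helps: a stale dataset cache is one possible cause of such an error. A minimal sketch for clearing it before retrying (the helper name and the default `~/.cache/huggingface/datasets` location are my assumptions, not from the issue):

```python
import shutil
from pathlib import Path

def clear_dataset_cache(cache_dir):
    """Delete a dataset cache directory if it exists.

    Returns True when something was removed, False otherwise.
    (Hypothetical helper -- the real fix may differ; this just forces
    the `text` loader to rebuild its cached files on the next run.)
    """
    path = Path(cache_dir)
    if path.exists():
        shutil.rmtree(path)
        return True
    return False

# Default location used by `nlp` (an assumption; check your environment):
# clear_dataset_cache(Path.home() / ".cache" / "huggingface" / "datasets")
```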
-
**Description**
I see strange memory behaviour when using layer-wise learning rate decay on TPU. If the layer-wise decay rate is set (`--layer-decay`), memory usage just keeps going up. Fo…
-
Do you have a recommendation for which runtime configuration will produce a batch of images the fastest when running your notebook?
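One way to compare runtimes empirically is a small stdlib-only timing helper (the names here are mine, not from the notebook) that measures seconds per generated batch:

```python
import time

def seconds_per_batch(make_batch, warmup=1, iters=5):
    """Average wall-clock seconds per call to `make_batch`.

    `make_batch` is any zero-argument callable that produces one batch
    (a hypothetical stand-in for the notebook's generation step). A short
    warmup is run first so one-time setup cost is not counted.
    """
    for _ in range(warmup):
        make_batch()
    start = time.perf_counter()
    for _ in range(iters):
        make_batch()
    return (time.perf_counter() - start) / iters
```

Running this once per runtime configuration gives directly comparable numbers.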
-
I use TF 1.14 and a GeForce RTX 2080 Ti to train RetinaNet, but training is too slow. Is there a problem with my data? I am not sure.
- [I0710 09:58:11.515571 140230248699648 basic_session_run_hooks.py…
-
I've been testing running various finetuned versions of supported models on GKE. However, it gets stuck on ` Using the Hugging Face API to retrieve tokenizer config`
These are the full logs:
```…
-
- GPU: 30 h per week
- TPU: 20 h per week
To make use of both of these quotas together, the code will be made TPU-compatible.
-
Hi, I love your model and have used it to interpolate many videos on Colab! The best rate I've gotten on a T4 by interpolating 2x was about 14 new frames a second, which isn't bad, but I was wondering…
-
Hi,
I've seen some strange behavior when training on TPU (v3-8 from TFRC). After 600k steps (using the default parameters for a base model) training got stuck. I could see two different types of er…
-
I've added a Jupyter notebook to the master branch based on the findings in Running Code on TPU instances. There weren't any errors, but the instance keeps crashing.
I tried batch_size=128, but e…
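One way to probe how large a batch the instance can actually handle is a halving backoff loop. This is a sketch, not the repo's code; `run_step` and the `MemoryError` exception type are assumptions (on a TPU the failure may surface differently):

```python
def find_max_batch_size(run_step, start=128, min_size=1):
    """Halve the batch size until one training step succeeds.

    `run_step(batch_size)` should raise MemoryError (or whatever the
    runtime raises on out-of-memory) when the batch does not fit.
    Hypothetical helper for illustration only.
    """
    batch_size = start
    while batch_size >= min_size:
        try:
            run_step(batch_size)
            return batch_size
        except MemoryError:
            batch_size //= 2
    raise RuntimeError("no batch size fit in memory")
```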