-
hey @steveash ! The tutorial looks really interesting, but you've placed a huge dependency on using AWS. Many (if not most) academic researchers do not have accounts there provided by an institution. C…
-
Namaste!
I am Sebastian, an indologist/buddhologist from Germany who mainly works with Sanskrit and Tibetan bilingual texts (translations of Buddhist material). I am currently thinking about ways…
-
### Description
Trying to train a Transformer model. After following the instructions, I ran the training command, and training got stuck at the very beginning, in the "saving checkpoint" phase:
INFO:tensorf…
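For anyone hitting the same hang: a quick probe I'd run from a second shell, assuming standard TF checkpoints; the path is a placeholder for the run's `--output_dir`:

```python
import tensorflow as tf

# Returns None until a checkpoint save has actually completed, so polling
# this while the trainer sits at "saving checkpoint" shows whether the
# write is progressing or truly stuck (e.g. on a slow/remote filesystem).
train_dir = "/path/to/output_dir"  # placeholder for --output_dir
print(tf.train.latest_checkpoint(train_dir))
```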
-
If I set `batch_size` too high, training fails with an Out Of Memory error. This is expected. The annoying part is that the failure can happen e.g. after 3 hours of training (15k steps) or even **af…
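A workaround I'd consider is forcing the worst case up front: push one batch at the maximum size and longest allowed sequence through a model before the real run, so the OOM surfaces in seconds instead of hours. A minimal TF2/Keras sketch of the idea (the model and the size constants below are placeholders, not the t2t Transformer):

```python
import tensorflow as tf

# Placeholder sizes: set these to your run's worst case (t2t sizes batches
# in tokens, so the worst case is roughly batch_size tokens at max_length).
MAX_BATCH, MAX_LEN, VOCAB = 64, 256, 32000

# Stand-in model; the real test would use your actual model.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 512),
    tf.keras.layers.Dense(VOCAB),
])

dummy = tf.zeros([MAX_BATCH, MAX_LEN], dtype=tf.int32)
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(model(dummy))                   # forward at peak shape
grads = tape.gradient(loss, model.trainable_variables)    # and backward
print("worst-case batch fits")
```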
-
### Description
I want to reproduce the EN-DE translation task with the Universal Transformer.
My training setting is:
python t2t-trainer \
--data_dir=$DATA_DIR \
--worker_gpu=8 \
--pr…
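For context on the scale of this setting: t2t counts `batch_size` in tokens per GPU (an assumption worth double-checking for your hparams set), so the effective tokens per step multiply with `worker_gpu`. A back-of-envelope sketch with assumed numbers:

```python
# Back-of-envelope only; batch_size_tokens is an assumed per-GPU setting,
# worker_gpu matches the command above.
batch_size_tokens = 4096
worker_gpu = 8
print(batch_size_tokens * worker_gpu)  # 32768 tokens consumed per step
```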
-
### Description
Hello everyone,
I'm a newbie with t2t and TensorFlow. I tried to use t2t to run the transformer_moe model on 2 machines, but failed. Each one has only one GPU. Hope you guys could help…
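In case it helps frame the setup: t2t's multi-machine mode rides on TensorFlow's legacy distributed runtime, which reads a `TF_CONFIG` environment variable in each process. A minimal sketch, assuming one chief/master on machine A and one worker on machine B; hostnames, ports, and roles are placeholders to adapt:

```python
import json
import os

# Hypothetical two-machine cluster spec; each process exports its own
# role (type/index) before launching t2t-trainer.
cluster = {
    "master": ["machine-a:2222"],  # chief worker, GPU 0 of machine A
    "ps":     ["machine-a:2223"],  # parameter server (can share a host)
    "worker": ["machine-b:2222"],  # second worker, GPU 0 of machine B
}
os.environ["TF_CONFIG"] = json.dumps(
    {"cluster": cluster, "task": {"type": "master", "index": 0}}
)
```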
-
Ubuntu 18.04
Python 3.7.9
TensorFlow 2.3.1
I am following https://github.com/magenta/magenta/blob/master/magenta/models/score2perf/README.md.
The problem happens when I follow **Training** and …
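One thing worth ruling out first (an assumption on my part): score2perf builds on tensor2tensor, which was written against the TF1/Estimator API, so a mismatch with TF 2.3.1 is a plausible culprit. A minimal version probe:

```python
import pkg_resources
import tensorflow as tf

# Print the installed versions score2perf depends on; tensor2tensor
# predates TF 2.x, so this combination is the first thing to check.
print("tensorflow   ", tf.__version__)
print("tensor2tensor", pkg_resources.get_distribution("tensor2tensor").version)
print("magenta      ", pkg_resources.get_distribution("magenta").version)
```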
-
### Description
Hi, I'm new to t2t and I'm trying to follow this [poetry generation example](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive…
-
How do I fix this?
(magenta) usuario@Strix:~/Escritorio/train$ t2t_decoder --decode_hparams="${DECODE_HPARAMS}" --decode_interactive --hparams="sampling_method=random" --hparams_set=${HPARAMS_SET} …
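For what it's worth, `sampling_method=random` in the command above makes the decoder draw each next token from the softmax distribution instead of taking the argmax. A toy illustration with made-up logits:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])                     # made-up next-token scores
probs = np.exp(logits) / np.exp(logits).sum()          # softmax
greedy = int(np.argmax(probs))                         # default deterministic pick
sampled = int(np.random.choice(len(probs), p=probs))   # sampling_method=random
print(greedy, sampled)
```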
-
### Description
I ran this command: `t2t-trainer --problem=librispeech --model=transformer --data_dir=~/dataset/t2t/librispeech/ --output_dir=. --hparams_set=transformer_librispeech --worker_gpu=1` an…
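Before anything else, I'd confirm that `data_dir` actually holds the TFRecord shards that t2t-datagen writes (assuming t2t's usual `<problem>-train-*` shard naming); a sketch:

```python
import glob
import os

# An empty result here usually means datagen never ran (or wrote elsewhere),
# which makes the trainer fail at startup. Path mirrors the command above.
data_dir = os.path.expanduser("~/dataset/t2t/librispeech/")
shards = glob.glob(os.path.join(data_dir, "librispeech*train*"))
print(len(shards), "training shards found")
```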