-
decoder/attention_decoder/LinearProjection/LinearProjection/biases:0 [250] 250
decoder/attention_decoder/OrthoGRUCell/gates/weights:0 [506, 51…
-
Found while investigating some issues with Beaker's log handling. It appears that when I run experiments with `no_tqdm: true` in my `trainer`'s config JSON, I still see tqdm output for the steps …
-
I'm hitting a kind of weird error: a CUDA out-of-memory exception. Even though `batch_size = 16`, the exception still appears.
Is there any solution or workaround for this error?
And here is the termin…
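A common workaround when a fixed batch size no longer fits in memory is gradient accumulation: compute gradients over smaller micro-batches and sum them before taking an optimizer step. A minimal pure-Python sketch of why this is equivalent, using a toy squared-error model (not tied to any particular framework; all names here are illustrative):

```python
def grad(w, x, y):
    # Gradient of the squared error (w*x - y)^2 with respect to w.
    return 2 * (w * x - y) * x

data = [(1.0, 2.0), (2.0, 3.0), (3.0, 5.0), (4.0, 7.0)]
w = 0.5

# Gradient of the mean loss over the full batch.
full = sum(grad(w, x, y) for x, y in data) / len(data)

# Same gradient, accumulated over two micro-batches of size 2 —
# this is what lets a large effective batch fit in less memory.
accum = 0.0
for chunk in (data[:2], data[2:]):
    accum += sum(grad(w, x, y) for x, y in chunk)
accum /= len(data)

assert abs(full - accum) < 1e-9
```

Because the mean-loss gradient is linear in the per-example gradients, the accumulated result matches full-batch training exactly (up to floating-point rounding).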
-
When I try to decode a file that was not part of the training/testing set, the following error occurs:
```
INFO:tensorflow:Performing Decoding from a file.
INFO:tensorflow:Getting sorted inputs
INFO:…
-
I have a corpus built from Wikimedia dump files, stored in sentences.txt. I have a sentence, say 'नीरजः हाँ माता जी! स्कूल ख़त्म होते सीधा घर आऊँगा' ('Neeraj: Yes, mother! I'll come straight home once school is over'). Now a sample code for obtaining word vectors for the…
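For reference, the input shape gensim's Word2Vec expects — an iterable of token lists — can be sketched as follows. The filename and plain whitespace tokenization are assumptions from the question; the gensim call is shown commented, using the gensim 4.x keyword `vector_size`:

```python
import io

# Stand-in for sentences.txt (hypothetical one-sentence corpus from the question).
raw = "नीरजः हाँ माता जी! स्कूल ख़त्म होते सीधा घर आऊँगा\n"

# Word2Vec expects an iterable of token lists; whitespace splitting assumed here.
sentences = [line.split() for line in io.StringIO(raw)]

# from gensim.models import Word2Vec                      # gensim 4.x
# model = Word2Vec(sentences, vector_size=100, min_count=1)
# vector = model.wv["माता"]                                # per-token vector lookup
```

With a real corpus, a streaming iterator over the file (rather than reading it all into memory) is the usual choice.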
-
Is it possible to run the Walkthrough example from the website with other data than WMT?
I've tried changing the data paths in `wmt.py`:
```
_ENDE_TRAIN_DATASETS = [
[
"http://dat…
-
During inference, I'm not able to create a file containing the inference output.
I've tried `--decode_to_file`, but no output file is being created...
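Until the flag works, one fallback is to capture the decoded strings yourself and write them out — a hypothetical helper, not part of tensor2tensor's own API:

```python
def write_decodes(decodes, path):
    """Write one decoded output per line (hypothetical fallback helper)."""
    with open(path, "w", encoding="utf-8") as f:
        for line in decodes:
            f.write(line + "\n")

# Illustrative decoded outputs; in practice these would come from the decoder.
write_decodes(["hallo welt", "guten tag"], "decodes.txt")
```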
-
**Context**
Special tokens are frequently used for masking, padding, or interpreting the model. It's important in an encoder/decoder context that the decoder and encoder share the same indexes for EO…
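One way to guarantee shared indexes is to reserve them at the front of a single vocabulary used by both the encoder and the decoder — a minimal sketch with hypothetical token names:

```python
# Special tokens get fixed, reserved indexes (hypothetical choice of tokens).
SPECIAL_TOKENS = ["<pad>", "<eos>", "<unk>", "<mask>"]

def build_vocab(tokens):
    # Reserved ids come first, so both sides agree on them regardless of corpus.
    vocab = {tok: i for i, tok in enumerate(SPECIAL_TOKENS)}
    for tok in tokens:
        if tok not in vocab:
            vocab[tok] = len(vocab)
    return vocab

vocab = build_vocab("the cat sat".split())
```

Since the encoder and decoder index into the same table, `<pad>` is always 0 and `<eos>` always 1 on both sides.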
-
I'm trying to train a bytes-to-subwords model:
```
def problem(model_hparams):
# This vocab file must be present within the data directory.
vocab_filename = os.path.join(model_hparams.da…
-
I added a new translation problem (a new language pair) using my own dataset.
I also built a vocab set from the dataset.
Training seems to be working, since the loss has dropped below 2.0.
config set…