-
## Environment info
- `transformers` version: 3.0.2 (from pip)
- Platform: Linux-4.15.0-91-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.6
- PyTorch version (GPU?): not in…
-
To save the model, I added this line after the training loop:
`tf.saved_model.save(model, os.path.join(FLAGS.output_dir, "1"))`
which produces the expected outputs: `assets`, `saved_model.pb`, and `variables`.
…
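As a quick sanity check after exporting, the output directory can be verified to have the standard SavedModel layout. A minimal sketch (the helper name is mine, not from the issue):

```python
import os

# Entries that tf.saved_model.save() is expected to write into the export directory.
EXPECTED_ENTRIES = {"assets", "saved_model.pb", "variables"}

def looks_like_saved_model(export_dir):
    """Return True if export_dir contains the standard SavedModel layout."""
    if not os.path.isdir(export_dir):
        return False
    return EXPECTED_ENTRIES <= set(os.listdir(export_dir))
```

This only checks the directory structure, not that the graph inside is loadable.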
-
```
Looking in indexes: https://pypi.org/simple, https://pip:****@pip.ml.moodysanalytics.com/simple
Collecting bert-for-tf2
  Using cached https://files.pythonhosted.org/packages/93/31/1f9d1d5ccafb5b8b…
```
-
This was a bit confusing, as I thought this issue had been fixed by the update in the README:
```
***************New January 7, 2019 ***************
V2 TF-Hub models should be working now. See up…
```
-
For example:
```python
ALBERT_PATH = "xxx"  # a pretrained TF-Hub ALBERT model
albert_layer = hub.KerasLayer(ALBERT_PATH, trainable=True)
```
-
Is multi-GPU support expected to be implemented anytime soon?
What conceptual changes would need to be made to train/fine-tune ALBERT on multiple GPUs?
Should `TPUEstimator` be …
-
I have a 1660 Ti with 6 GB of memory, but when I check GPU usage it is only at 2 to 4%. Can you tell me why this is happening, or is there a way I can make it use my GPU?
I am …
-
I converted the TF weights to PyTorch weights, and on the QQP dataset I only get 87% accuracy.
- model: albert-base
- epochs: 3
- learning_rate: 2e-5
- batch size: 24
- max sequence length: 128
- warmup_proportion: …
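With settings like these, the warmup schedule is typically derived from `warmup_proportion` and the total number of optimizer steps. A minimal sketch (the example dataset size is approximate and the 0.1 proportion is a commonly used default, assumed here because the issue's actual value is truncated):

```python
import math

def warmup_steps(num_examples, batch_size, epochs, warmup_proportion):
    """Compute total and warmup optimizer steps from the fine-tuning config."""
    steps_per_epoch = math.ceil(num_examples / batch_size)
    total_steps = steps_per_epoch * epochs
    return total_steps, int(total_steps * warmup_proportion)

# ~364k QQP training pairs and warmup_proportion=0.1 are assumptions for illustration.
total, warmup = warmup_steps(num_examples=363_846, batch_size=24,
                             epochs=3, warmup_proportion=0.1)
```

If the converted checkpoint underperforms, a mismatch in this schedule (or in the truncated `warmup_proportion`) between the TF and PyTorch runs is one thing worth ruling out.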
-
Hello!
Thank you for releasing the code for Albert!
Could you upload the pre-trained checkpoints for the 4 ALBERT models? I would like to run `run_squad_sp.py` directly for fine-tuning on SQuAD.…
-
I have a question regarding your experiment fine-tuning on SQuAD 2.0 with 4x Titan RTX 24 GB. How long was the total training time? I'm running the same experiment with 8x Tesla V100 16 GB, which accor…