Environment info
transformers version: nightly
Using distributed or parallel set-up in script?: yes
Who can help
@stas00 @sgugger
Information
Model I am using (Bert, XLNet ...): XLNet
The problem arises when using:
[x] the official example scripts: (give details below)
[ ] my own modified scripts: (give details below)
The task I am working on is:
[x] an official GLUE/SQuAD task: MNLI (GLUE)
[ ] my own task or dataset: (give details below)
This was caught by the Cloud TPU tests (XLNet/MNLI/GLUE), but I believe the behavior is model- and dataset-agnostic. Essentially, it seems that:

The TrainingArguments __post_init__ method should convert log_level to -1 when it is set to 'passive' (which it is by default). However, in the end-to-end run_glue.py example, parse_args_into_dataclasses() seems not to call __post_init__, as our tests fail with the traceback below (a minimal standalone check is sketched under "Expected behavior" at the end):
```
Traceback (most recent call last):
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
    _start_fn(index, pf_cfg, fn, args)
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
    fn(gindex, *args)
  File "/transformers/examples/pytorch/text-classification/run_glue.py", line 554, in _mp_fn
    main()
  File "/transformers/examples/pytorch/text-classification/run_glue.py", line 468, in main
    data_collator=data_collator,
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 295, in __init__
    logging.set_verbosity(log_level)
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/utils/logging.py", line 161, in set_verbosity
    _get_library_root_logger().setLevel(verbosity)
  File "/root/anaconda3/envs/pytorch/lib/python3.6/logging/__init__.py", line 1284, in setLevel
    self.level = _checkLevel(level)
  File "/root/anaconda3/envs/pytorch/lib/python3.6/logging/__init__.py", line 195, in _checkLevel
    raise ValueError("Unknown level: %r" % level)
ValueError: Unknown level: 'passive'
```
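For context, the bottom of the traceback is plain Python stdlib behavior, nothing transformers-specific: Logger.setLevel() accepts any int (so the -1 that 'passive' should be mapped to would be fine) or a registered level name, and rejects everything else. A minimal demonstration:

```python
import logging

logger = logging.getLogger("transformers")

# Any int is a valid level, so the expected -1 would be accepted.
logger.setLevel(-1)

# An unregistered level-name string is rejected, exactly as in the traceback.
try:
    logger.setLevel("passive")
except ValueError as e:
    print(e)  # -> Unknown level: 'passive'
```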
To reproduce
Steps to reproduce the behavior:
Run the official run_glue.py example on a Cloud TPU (e.g. XLNet on MNLI) without passing --log_level, so the 'passive' default is used. Trainer.__init__ then fails with the traceback above.

Expected behavior
The default log_level of 'passive' is converted to -1 before Trainer passes it to logging.set_verbosity(), so training starts without raising ValueError.
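As a quick sanity check (a sketch, not part of the failing test itself; /tmp/test_out is just a placeholder output dir), parsing TrainingArguments in isolation should show whether the 'passive' -> -1 conversion ran:

```python
from transformers import HfArgumentParser, TrainingArguments

parser = HfArgumentParser(TrainingArguments)
(training_args,) = parser.parse_args_into_dataclasses(
    args=["--output_dir", "/tmp/test_out"]  # placeholder, only satisfies the required arg
)

# Expected: -1 (an int). If this prints 'passive', __post_init__ did not
# convert the default and Trainer.__init__ will crash as shown above.
print(repr(training_args.log_level))
```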