Closed: Bean95zx closed this issue 2 years ago.
I met the same error. As I understand it, the cause is a conflict between library versions:
datasets==2.3.2, huggingface_hub==0.7.0, protoc >= 3.19.0, torch, torchvision
I fixed it by creating a new environment; the requirements.txt file is attached below.
requirements.txt
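Before rebuilding the environment, it can help to print the versions that are actually installed. A minimal Python sketch (the package list simply mirrors the libraries mentioned above, with protobuf standing in for the protoc requirement; it needs Python 3.8+, e.g. the new conda env below, and you can adjust the list to your setup):
# Print installed versions of the packages involved in the suspected conflict.
from importlib.metadata import version, PackageNotFoundError

for pkg in ["datasets", "huggingface_hub", "protobuf", "torch", "torchvision", "transformers"]:
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg} is not installed")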
Full command:
conda create --name layoutlmv3 python=3.8.13
conda activate layoutlmv3
git clone https://github.com/microsoft/unilm.git
cd unilm/layoutlmv3
pip install -r requirements.txt
# install pytorch, torchvision refer to https://pytorch.org/get-started/locally/
pip install torch==1.10.0+cu111 torchvision==0.11.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
# install detectron2 refer to https://detectron2.readthedocs.io/en/latest/tutorials/install.html
python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu111/torch1.10/index.html
pip install -e .
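After the install, a quick import check confirms that torch, torchvision, transformers, and detectron2 are all usable in the new environment (a minimal sketch; the versions printed on your machine may differ from the ones pinned above):
# Sanity-check the freshly created environment.
import torch
import torchvision
import transformers
import detectron2

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchvision:", torchvision.__version__)
print("transformers:", transformers.__version__)
print("detectron2:", detectron2.__version__)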
Hope it helps you.
Thanks @Bean95zx for reporting this and @LeNguyenGiaBao for the answer! I am closing this issue for now, since it is inactive.
Finally found the solution to this: just comment out the lines that raise the error.
The reason is that the original registration code only works for layoutlm and layoutlmv2, NOT for layoutlmv3. Just let the transformers library take care of this; do NOT rely on the AutoModel classes, because they are not adapted for layoutlmv3 (see the sketch below).
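For illustration, a minimal sketch of that workaround, assuming a transformers release that already ships LayoutLMv3 (roughly 4.20 or newer); the checkpoint name and num_labels are placeholders for your own setup:
# Load LayoutLMv3 classes directly from transformers instead of relying on the
# repo's AutoConfig/AutoModel registration (the lines that raise the
# AttributeError can simply be commented out).
from transformers import LayoutLMv3Config, LayoutLMv3ForTokenClassification

config = LayoutLMv3Config.from_pretrained("microsoft/layoutlmv3-base", num_labels=7)
model = LayoutLMv3ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base", config=config
)
print(type(model).__name__)  # LayoutLMv3ForTokenClassification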
Describe Model I am using (UniLM, MiniLM, LayoutLM ...): LayoutLMv3
/home/speed/.conda/envs/layoutlm/lib/python3.7/site-packages/torch/distributed/launch.py:186: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use_env is set by default in torchrun. If your script expects `--local_rank` argument to be set, please change it to read from os.environ['LOCAL_RANK'] instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions
  FutureWarning,
Traceback (most recent call last):
  File "examples/run_xfund.py", line 13, in <module>
    from layoutlmft.data import DataCollatorForKeyValueExtraction
  File "/usr/data1/unilm/unilm-master/layoutlmv3/layoutlmft/__init__.py", line 1, in <module>
    from .models import (
  File "/usr/data1/unilm/unilm-master/layoutlmv3/layoutlmft/models/__init__.py", line 1, in <module>
    from .layoutlmv3 import (
  File "/usr/data1/unilm/unilm-master/layoutlmv3/layoutlmft/models/layoutlmv3/__init__.py", line 16, in <module>
    AutoConfig.register("layoutlmv3", LayoutLMv3Config)
AttributeError: type object 'AutoConfig' has no attribute 'register'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 387800) of binary: /home/speed/.conda/envs/layoutlm/bin/python
Traceback (most recent call last):
  File "/home/speed/.conda/envs/layoutlm/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/speed/.conda/envs/layoutlm/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/speed/.conda/envs/layoutlm/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/home/speed/.conda/envs/layoutlm/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/home/speed/.conda/envs/layoutlm/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/home/speed/.conda/envs/layoutlm/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
    )(*cmd_args)
  File "/home/speed/.conda/envs/layoutlm/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/speed/.conda/envs/layoutlm/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
    failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
examples/run_xfund.py FAILED
Failures: