NVIDIA / apex

A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch
BSD 3-Clause "New" or "Revised" License

torch 2.0.1: No module named 'torch._six' #1724

Open darrenwang00 opened 10 months ago

darrenwang00 commented 10 months ago

File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 164, in from apex import amp File "/opt/conda/lib/python3.8/site-packages/apex/init.py", line 11, in from . import amp File "/opt/conda/lib/python3.8/site-packages/apex/amp/init.py", line 1, in from .amp import init, half_function, float_function, promote_function,\ File "/opt/conda/lib/python3.8/site-packages/apex/amp/amp.py", line 5, in from .frontend import * File "/opt/conda/lib/python3.8/site-packages/apex/amp/frontend.py", line 2, in from ._initialize import _initialize File "/opt/conda/lib/python3.8/site-packages/apex/amp/_initialize.py", line 2, in from torch._six import string_classes ModuleNotFoundError: No module named 'torch._six'

Environment

PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.27.4
Libc version: glibc-2.31

Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.16.20-1.el7.bzl.x86_64-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090

Nvidia driver version: 530.41.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.5.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.5.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz
Stepping: 7
CPU MHz: 3900.000
CPU max MHz: 3900.0000
CPU min MHz: 1200.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1 MiB
L1i cache: 1 MiB
L2 cache: 32 MiB
L3 cache: 44 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.22.4
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.0.1+cu118
[pip3] torch-tensorrt==1.2.0a0
[pip3] torchaudio==2.0.2+cu118
[pip3] torchtext==0.11.0a0
[pip3] torchvision==0.15.2+cu118
[conda] mkl 2020.4 h726a3e6_304 conda-forge
[conda] mkl-include 2020.4 h726a3e6_304 conda-forge
[conda] numpy 1.22.4 py38h99721a1_0 conda-forge
[conda] pytorch-quantization 2.1.2 pypi_0 pypi
[conda] torch 2.0.1+cu118 pypi_0 pypi
[conda] torch-tensorrt 1.2.0a0 pypi_0 pypi
[conda] torchaudio 2.0.2+cu118 pypi_0 pypi
[conda] torchtext 0.11.0a0 pypi_0 pypi
[conda] torchvision 0.15.2+cu118 pypi_0 pypi

fengxuefx commented 9 months ago

watching👁

adarshxs commented 9 months ago

Any fixes? I ran into this error

EDIT: `pip uninstall -y apex` fixed the issue for me, since I wasn't using apex anyway.

DogNick commented 9 months ago

help, save my ass !!!!!!

adarshxs commented 9 months ago

> help, save my ass !!!!!!

pip uninstall apex 🤡

fengxuefx commented 9 months ago

I fixed this problem by downloading torch 1.10.x somewhere else, copying its _six.py into the torch 2.x package, and modifying torch 2.x's __init__.py to add `from ._six import *`. But I don't know whether it affects other functions...
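
A rough sketch of that workaround, assuming you have unpacked an older torch wheel somewhere (the OLD_SIX path below is hypothetical); note that hand-patching the installed torch package is fragile:

```python
# Sketch of the "copy _six.py into torch 2.x" workaround described above.
import os
import shutil

import torch

OLD_SIX = "/path/to/torch-1.10.x/torch/_six.py"  # hypothetical: taken from an older torch wheel
torch_dir = os.path.dirname(torch.__file__)      # installed torch 2.x package directory

shutil.copy(OLD_SIX, os.path.join(torch_dir, "_six.py"))
# The comment above also adds "from ._six import *" to torch/__init__.py, but a plain
# "from torch._six import string_classes" should already work once the file is in place.
```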

whu-lyh commented 7 months ago

Just replace the original `from torch._six import string_classes` with `string_classes = str`. That solved the problem for me.
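
Concretely, that edit sits near the top of apex/amp/_initialize.py (the file shown in the traceback above); a sketch, with the exact line depending on your apex version:

```python
# apex/amp/_initialize.py (near the top)
# from torch._six import string_classes   # old import; torch 2.0 no longer ships _six
string_classes = str                       # drop-in replacement suggested above
```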

RevelationH commented 6 months ago

It seems that PyTorch (2.0 and later) has dropped the `_six` module, hence this issue. Perhaps you can install an earlier version of PyTorch, or check whether there is a newer version of apex (not sure about that).

RevelationH commented 6 months ago

> It seems that PyTorch (2.0 and later) has dropped the `_six` module, hence this issue. Perhaps you can install an earlier version of PyTorch, or check whether there is a newer version of apex (not sure about that).

"_six" seems like serving to resolve the conflict of python 2 and python 3. Hope someone can provide a solution without editing code.

cxl973 commented 6 months ago

You could change the source code like this:

```python
# if isinstance(root, torch._six.string_classes):
#     ...

if isinstance(root, str):
    ...
```

MlLearnerAkash commented 5 months ago

torch has stopped supporting `_six` according to pytorch/pytorch#94709. The workaround is to comment out that line and use `str` in place of `string_classes`.

Casuallkk commented 4 months ago

torch has stopped supporting `_six` according to https://github.com/pytorch/pytorch/pull/94709. In my case, I just replaced `from torch._six import container_abcs` in _amp_state.py with `import collections.abc as container_abcs`, following https://blog.csdn.net/qq_45281807/article/details/121843592. I also replaced the original `from torch._six import string_classes` in _initialize.py with `string_classes = str`, per the answers above. That solved my problem.
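
For reference, a minimal sketch of what those two edits end up looking like (paths are inside the installed apex package; the container_abcs import only appears in some apex versions):

```python
# apex/amp/_amp_state.py
# from torch._six import container_abcs    # old import, gone in torch 2.0
import collections.abc as container_abcs   # drop-in replacement

# apex/amp/_initialize.py
# from torch._six import string_classes    # old import, gone in torch 2.0
string_classes = str
```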