When I run the llama2 example, I get the following errors. How should I solve this? The previous issue was not helpful to me, and I still cannot solve this problem.
from colossalai.kernel.op_builder.layernorm import LayerNormBuilder
ModuleNotFoundError: No module named 'colossalai.kernel.op_builder'
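To narrow down where the import chain breaks, a small diagnostic sketch like the one below can help. It only uses the standard library; the module names are the ones from the traceback above, and whether each resolves depends on the installed Colossal-AI version.

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if `name` can be resolved on the current sys.path."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # A parent package in the dotted path is itself missing.
        return False

# Check each level of the dotted path from the failing import.
for name in ("colossalai", "colossalai.kernel", "colossalai.kernel.op_builder"):
    print(name, "->", "found" if module_available(name) else "NOT found")
```

If `colossalai` and `colossalai.kernel` resolve but `colossalai.kernel.op_builder` does not, the installed Colossal-AI version simply no longer ships that subpackage, which points at a version mismatch between the example code and the installed library rather than a broken environment.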
Environment
Installation Report
------------ Environment ------------
Colossal-AI version: 0.3.3
PyTorch version: 1.13.1
System CUDA version: 11.3
CUDA version required by PyTorch: 11.7
Note:
The table above checks the versions of the libraries/tools in the current environment
If the System CUDA version is N/A, you can set the CUDA_HOME environment variable to locate it
If the CUDA version required by PyTorch is N/A, you probably did not install a CUDA-compatible PyTorch. This value is given by torch.version.cuda, and you can go to https://pytorch.org/get-started/locally/ to download the correct version.
------------ CUDA Extensions AOT Compilation ------------
Found AOT CUDA Extension: ✓
PyTorch version used for AOT compilation: N/A
CUDA version used for AOT compilation: N/A
Note:
AOT (ahead-of-time) compilation of the CUDA kernels occurs during installation when the environment variable CUDA_EXT=1 is set
If AOT compilation is not enabled, stay calm as the CUDA kernels can still be built during runtime
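Since the report shows no PyTorch/CUDA versions recorded for AOT compilation, one thing worth trying (a sketch, not a confirmed fix) is reinstalling Colossal-AI with the CUDA_EXT=1 flag mentioned above, pinned to the version already in use so nothing else changes:

```shell
# Reinstall Colossal-AI with AOT compilation of the CUDA kernels enabled.
# CUDA_EXT=1 is the environment variable named in the report above; pinning
# to 0.3.3 avoids an unintended upgrade changing other behavior.
pip uninstall -y colossalai
CUDA_EXT=1 pip install colossalai==0.3.3 --no-cache-dir
```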
------------ Compatibility ------------
PyTorch version match: N/A
System and PyTorch CUDA version match: ✗
System and Colossal-AI CUDA version match: N/A
Note:
The table above checks the version compatibility of the libraries/tools in the current environment
PyTorch version mismatch: whether the PyTorch version in the current environment is compatible with the PyTorch version used for AOT compilation
System and PyTorch CUDA version match: whether the CUDA version in the current environment is compatible with the CUDA version required by PyTorch
System and Colossal-AI CUDA version match: whether the CUDA version in the current environment is compatible with the CUDA version used for AOT compilation