triton-inference-server / tensorrtllm_backend

The Triton TensorRT-LLM Backend
Apache License 2.0

Can't build docker image with Ryzen 5950x #385

Open mallorbc opened 3 months ago

mallorbc commented 3 months ago

System Info

- CPU: AMD Ryzen 5950X (x86_64)
- GPU: RTX 3090
- OS: Ubuntu 22.04
- tensorrtllm_backend: v0.8.0

Who can help?

@byshiue @schetlur-nv

Reproduction

Run the following command after checking out v0.8.0

#!/bin/sh
# Update the submodules
cd tensorrtllm_backend
git lfs install
git submodule update --init --recursive

# Use the Dockerfile to build the backend in a container
# For x86_64
DOCKER_BUILDKIT=1 docker build --no-cache -t triton_trt_llm_main -f dockerfile/Dockerfile.trt_llm_backend .
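Before kicking off the long Docker build, it can help to confirm the submodules and LFS objects were actually fetched cleanly. This is my own sanity check, not part of the official instructions; it assumes you are inside `tensorrtllm_backend` after the submodule update above:

```shell
#!/bin/sh
# Hypothetical pre-build sanity check: `git submodule status` prefixes a
# line with '-' (not initialized) or '+' (checked out at a different
# commit) when a submodule is not in a clean state.
if git submodule status --recursive | grep -qE '^[-+]'; then
    echo "some submodules are not checked out cleanly" >&2
    exit 1
fi
# Warn (but do not fail) if git-lfs is missing, since the repo uses LFS.
git lfs env >/dev/null 2>&1 || echo "warning: git-lfs not available" >&2
echo "submodules look clean"
```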

Expected behavior

I expect the image to build successfully.

Actual behavior

=> ERROR [trt_llm_builder 4/4] RUN cd tensorrt_llm && python3 scripts/build_wheel.py --trt_root="/usr/local/tensorrt" -i -c && cd ..  981.7s

(many lines skipped...)

121.4 nvcc error : 'cudafe++' died due to signal 11 (Invalid memory reference)
121.4 nvcc error : 'cudafe++' core dumped
121.4 gmake[3]: *** [tensorrt_llm/kernels/CMakeFiles/kernels_src.dir/build.make:10842: tensorrt_llm/kernels/CMakeFiles/kernels_src.dir/decoderMaskedMultiheadAttention/decoderMaskedMultiheadAttention128_bf16.cu.o] Error 139
121.4 gmake[3]: Waiting for unfinished jobs....
141.3 nvcc error : 'ptxas' died due to signal 11 (Invalid memory reference)
141.3 nvcc error : 'ptxas' core dumped
141.3 gmake[3]: *** [tensorrt_llm/kernels/CMakeFiles/kernels_src.dir/build.make:10436: tensorrt_llm/kernels/CMakeFiles/kernels_src.dir/beamSearchTopkKernels.cu.o] Error 139
194.2 /tmp/tmpxft_0000390b_00000000-6_bf16_int8_gemm_per_col.compute_90.cudafe1.stub.c:57: internal compiler error: Segmentation fault
      [several thousand characters of mangled CUTLASS GemmFpAIntB kernel symbol and GCC garbage-collector backtrace omitted]
194.3 Please submit a full bug report, with preprocessed source if appropriate.
194.3 See file:///usr/share/doc/gcc-11/README.Bugs for instructions.
194.4 gmake[3]: *** [tensorrt_llm/kernels/CMakeFiles/kernels_src.dir/build.make:10534: tensorrt_llm/kernels/CMakeFiles/kernels_src.dir/cutlass_kernels/fpA_intB_gemm/bf16_int8_gemm_per_col.cu.o] Error 1
427.2 Non-atomic load cannot have SynchronizationScope specified
427.2   %tmp945 = load i32, i32* %warp, align 4, !dbg !130827
436.8 nvcc error : 'cicc' died due to signal 11 (Invalid memory reference)
436.8 nvcc error : 'cicc' core dumped
436.9 gmake[3]: *** [tensorrt_llm/kernels/CMakeFiles/kernels_src.dir/build.make:10814: tensorrt_llm/kernels/CMakeFiles/kernels_src.dir/decoderMaskedMultiheadAttention/decoderMaskedMultiheadAttention112_float.cu.o] Error 139
793.1 free(): invalid pointer
793.3 nvcc error : 'cicc' died due to signal 6
793.3 nvcc error : 'cicc' core dumped
793.4 gmake[3]: *** [tensorrt_llm/kernels/CMakeFiles/kernels_src.dir/build.make:10856: tensorrt_llm/kernels/CMakeFiles/kernels_src.dir/decoderMaskedMultiheadAttention/decoderMaskedMultiheadAttention128_bf16_implicit_relative_attn.cu.o] Error 134
981.6 gmake[2]: *** [CMakeFiles/Makefile2:816: tensorrt_llm/kernels/CMakeFiles/kernels_src.dir/all] Error 2
981.6 gmake[1]: *** [CMakeFiles/Makefile2:771: tensorrt_llm/CMakeFiles/tensorrt_llm.dir/rule] Error 2
981.6 gmake: *** [Makefile:192: tensorrt_llm] Error 2
981.6 Traceback (most recent call last):
981.6   File "/app/tensorrt_llm/scripts/build_wheel.py", line 324, in <module>
981.6     main(vars(args))
981.6   File "/app/tensorrt_llm/scripts/build_wheel.py", line 166, in main
981.6     build_run(
981.6   File "/usr/lib/python3.10/subprocess.py", line 526, in run
981.6     raise CalledProcessError(retcode, process.args,
981.6 subprocess.CalledProcessError: Command 'cmake --build . --config Release --parallel 32 --target tensorrt_llm nvinfer_plugin_tensorrt_llm th_common bindings' returned non-zero exit status 2.

Dockerfile.trt_llm_backend:48

46 |     COPY scripts scripts
47 |     COPY tensorrt_llm tensorrt_llm
48 | >>> RUN cd tensorrt_llm && python3 scripts/build_wheel.py --trt_root="${TRT_ROOT}" -i -c && cd ..
49 |
50 |     FROM trt_llm_builder as trt_llm_backend_builder

ERROR: failed to solve: process "/bin/sh -c cd tensorrt_llm && python3 scripts/build_wheel.py --trt_root=\"${TRT_ROOT}\" -i -c && cd .." did not complete successfully: exit code: 1

Additional notes

I can build the Docker image on an A100 VM. Because of that, I assume this is either a Docker issue, a CPU issue, or something else hardware-related. If a CPU from 2020 is simply too old, guidance would be appreciated, and documentation of that fact would be great.
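For what it's worth, segfaults scattered across different compiler binaries (cudafe++, ptxas, cicc, and GCC itself) on different source files tend to indicate memory pressure or unstable hardware rather than a bug in any one file; `cmake --build . --parallel 32` launches many heavyweight CUDA compile jobs at once. A sketch of what could be checked on the host (standard Linux tools only; the exact log strings vary by kernel, so the grep pattern is a guess):

```shell
#!/bin/sh
# Look for out-of-memory kills or machine-check errors around the time of
# the build; dmesg may require root on some systems.
dmesg 2>/dev/null | grep -iE 'out of memory|oom-killer|mce|segfault' | tail -n 20

# Compare available RAM against the 32 parallel nvcc jobs the build spawns;
# each heavy CUDA translation unit can need several GB.
free -h
```

If OOM kills show up, lowering the build parallelism (or adding swap) would be the first thing to try; if machine-check errors show up, a memtest run on the host is worth doing.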

I am able to build the TensorRT-LLM Docker image (not the backend) from the other repo using the following Dockerfile:


FROM nvidia/cuda:12.1.0-devel-ubuntu22.04

RUN apt update \
&& apt upgrade -y

RUN apt-get update && apt-get -y install python3.10 python3-pip openmpi-bin libopenmpi-dev

RUN apt install git -y

RUN pip3 install tensorrt_llm==0.8.0 -U --extra-index-url https://pypi.nvidia.com

RUN apt install -y git-lfs zsh wget curl

WORKDIR /workspace
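Assuming the Dockerfile above is saved as `Dockerfile.pip` (filename and image tag are my own choices), it can be built and smoke-tested like this:

```shell
#!/bin/sh
# Skip gracefully when Docker is not installed on this machine.
command -v docker >/dev/null 2>&1 || { echo "docker not installed"; exit 0; }

# Build the pip-based TensorRT-LLM image (not the Triton backend).
docker build -t trtllm_pip -f Dockerfile.pip .

# Verify the wheel imports; --gpus all needs the NVIDIA Container Toolkit.
docker run --rm --gpus all trtllm_pip \
    python3 -c "import tensorrt_llm; print(tensorrt_llm.__version__)"
```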
htdung167 commented 1 month ago

@mallorbc I ran into the same error. How did you solve it?

mallorbc commented 1 month ago

@htdung167 Just use a prebuilt image instead.