facebookresearch / xformers

Hackable and optimized Transformers building blocks, supporting a composable construction.
https://facebookresearch.github.io/xformers/

compile for rocm w/ gfx1032 card #1110

Open brcisna opened 6 days ago

brcisna commented 6 days ago

❓ Questions and Help

Hi All,

Environment: Debian 13, Python 3.10.12 venv, PyTorch 2.4.1 (ROCm).

When I try to compile xformers against PyTorch 2.4.1 (ROCm), I end up with the common "thrust/complex.h file not found" error. This may have something to do with issue https://github.com/facebookresearch/xformers/issues/1026.

If I install the precompiled ROCm xformers wheel instead, this is what the xformers info looks like:

```
python -m xformers.info
WARNING[XFORMERS]: Need to compile C++ extensions to use all xFormers features.
    Please install xformers properly (see https://github.com/facebookresearch/xformers#installing-xformers)
    Memory-efficient attention, SwiGLU, sparse and more won't be available.
    Set XFORMERS_MORE_DETAILS=1 for more details
xFormers 0.0.0
memory_efficient_attention.ckF:                 unavailable
memory_efficient_attention.ckB:                 unavailable
memory_efficient_attention.ck_decoderF:         unavailable
memory_efficient_attention.ck_splitKF:          unavailable
memory_efficient_attention.cutlassF:            unavailable
memory_efficient_attention.cutlassB:            unavailable
memory_efficient_attention.fa2F@0.0.0:          unavailable
memory_efficient_attention.fa2B@0.0.0:          unavailable
memory_efficient_attention.fa3F@0.0.0:          unavailable
memory_efficient_attention.fa3B@0.0.0:          unavailable
memory_efficient_attention.triton_splitKF:      available
indexing.scaled_index_addF:                     available
indexing.scaled_index_addB:                     available
indexing.index_select:                          available
sequence_parallel_fused.write_values:           unavailable
sequence_parallel_fused.wait_values:            unavailable
sequence_parallel_fused.cuda_memset_32b_async:  unavailable
sp24.sparse24_sparsify_both_ways:               unavailable
sp24.sparse24_apply:                            unavailable
sp24.sparse24_apply_dense_output:               unavailable
sp24._sparse24_gemm:                            unavailable
sp24._cslt_sparse_mm@0.0.0:                     available
swiglu.dual_gemm_silu:                          unavailable
swiglu.gemm_fused_operand_sum:                  unavailable
swiglu.fused.p.cpp:                             not built
is_triton_available:                            True
pytorch.version:                                2.4.1+rocm6.1
pytorch.cuda:                                   available
gpu.compute_capability:                         10.3
gpu.name:                                       AMD Radeon Pro W6600
dcgm_profiler:                                  unavailable
build.info:                                     none
source.privacy:                                 open source
```
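For what it's worth, a quick runtime probe along these lines (just a sketch, not part of the original report; it assumes the ROCm build exposes the GPU through the usual `cuda` device) shows whether attention actually dispatches with that wheel:

```bash
# Hypothetical smoke test of the prebuilt wheel; not from the original report.
python - <<'EOF'
import torch
import xformers.ops as xops

# Query/key/value laid out as [batch, seq_len, heads, head_dim], fp16 on the GPU.
q = torch.randn(1, 128, 8, 64, device="cuda", dtype=torch.float16)
try:
    out = xops.memory_efficient_attention(q, q, q)
    print("memory_efficient_attention ran, output shape:", tuple(out.shape))
except Exception as exc:
    print("memory_efficient_attention failed:", exc)
EOF
```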

I am trying to compile for an AMD Radeon Pro W6600 (gfx1032), which, as of May 2024, is still not officially supported.

I realize this is very experimental as it is.

TIA

lw commented 5 days ago

Please post the entire error log, ideally as text. Otherwise we cannot help.

Also, please make sure you cloned all the submodules. Run git submodule update --init --recursive if you're not sure.
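For reference, the full sequence from a clean clone looks roughly like this (a sketch; the `tee` is only there so the complete log can be attached here):

```bash
# Clean clone with all submodules, then a build that keeps the whole compiler log.
git clone https://github.com/facebookresearch/xformers.git
cd xformers
git submodule update --init --recursive
python setup.py build 2>&1 | tee build.log
```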

brcisna commented 5 days ago

This is a build in a Python 3.10.12 venv, run with `python setup.py build`.

Yes, I did run `git submodule update --init --recursive`.

It appears everything fails once the build process cannot find the thrust/complex.h header, as shown in the error log below. I installed PyTorch with `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1`.
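In other words, the setup looks roughly like this (a sketch of the steps described above; the `PYTORCH_ROCM_ARCH` line is an illustrative assumption to limit the build to the gfx1032 target, not something taken from this report):

```bash
# Reported environment: Debian 13, Python 3.10 venv, PyTorch 2.4.1 ROCm 6.1 wheels.
python3 -m venv venv && source venv/bin/activate
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1

# Illustrative assumption (not from the report): restrict compilation to the installed GPU.
export PYTORCH_ROCM_ARCH=gfx1032
python setup.py build
```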

Error log:

```
/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/torch/include/c10/util/complex.h:8:10: fatal error: 'thrust/complex.h' file not found
    8 | #include <thrust/complex.h>
      |          ^~~~~~
1 error generated when compiling for gfx1032.
```

(continued)

```

/home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha/pipeline/block_fmha_bwd_pipeline_default_policy_hip.hpp:1118:16: note: in instantiation of function template specialization 'ck_tile::BlockFmhaBwdPipelineDefaultPolicy::MakeXTLdsBlockDescriptor<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>, 32, 32, 4, 2>' requested here 1118 | return MakeXTLdsBlockDescriptor<Problem, kNPerBlock, kKPerBlock, kKPack, kKPackT>(); | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha/pipeline/block_fmha_bwd_pipeline_default_policy_hip.hpp:1606:13: note: in instantiation of function template specialization 'ck_tile::BlockFmhaBwdPipelineDefaultPolicy::MakeShuffledQLdsWriteBlockDescriptor<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>' requested here 1606 | MakeShuffledQLdsWriteBlockDescriptor().get_element_space_size(); | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha/pipeline/block_fmha_bwd_pipeline_default_policy_hip.hpp:1700:44: note: in instantiation of function template specialization 'ck_tile::BlockFmhaBwdPipelineDefaultPolicy::GetSmemSizeQT<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>' requested here 1700 | constexpr index_t smem_size_qt = GetSmemSizeQT(); | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha/pipeline/block_fmha_bwd_dq_dk_dv_pipeline_kr_ktr_vr_iglp_hip.hpp:83:33: note: in instantiation of function template specialization 'ck_tile::BlockFmhaBwdPipelineDefaultPolicy::GetSmemSize<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>' requested here 83 | return Policy::template GetSmemSize(); | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha/kernel/fmha_bwd_kernel_hip.hpp:580:43: note: in instantiation of member function 
'ck_tile::BlockFmhaBwdDQDKDVPipelineKRKTRVRIGLP<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>::GetSmemSize' requested here 580 | return ck_tile::max(FmhaPipeline::GetSmemSize(), | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha/kernel/fmha_bwd_kernel_hip.hpp:588:34: note: (skipping 1 context in backtrace; use -ftemplate-backtrace-limit=0 to see all) 588 | shared char smem_ptr[GetSmemSize()]; | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/host/kernel_launch_hip.hpp:21:5: note: in instantiation of member function 'ck_tile::FmhaBwdDQDKDVKernel<ck_tile::BlockFmhaBwdDQDKDVPipelineKRKTRVRIGLP<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, _Float16, true, false>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, _Float16, true, false>>>::operator()' requested here 21 | Kernel{}(args...); | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/host/kernel_launch_hip.hpp:38:25: note: in instantiation of function template specialization 'ck_tile::kentry<128, 1, ck_tile::FmhaBwdDQDKDVKernel<ck_tile::BlockFmhaBwdDQDKDVPipelineKRKTRVRIGLP<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, _Float16, true, false>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, _Float16, true, false>>>, ck_tile::FmhaBwdDQDKDVKernel<ck_tile::BlockFmhaBwdDQDKDVPipelineKRKTRVRIGLP<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, _Float16, true, false>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, _Float16, true, false>>>::FmhaBwdGroupModeKargs>' requested here 38 | const auto kernel = kentry<MaxThreadPerBlock, MinBlockPerCu, KernelImpl, Args...>; | ^ 
/home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/xformers/csrc/attention/hip_fmha/ck_tiled_fmha_grouped_backward_hip.h:161:11: note: in instantiation of function template specialization 'grouped_backward_causalmask_bias_dropout_dispatch<_Float16, false, false, true, false, 32>::RunWithBwdDQDKDVKernel<ck_tile::FmhaBwdDQDKDVKernel<ck_tile::BlockFmhaBwdDQDKDVPipelineKRKTRVRIGLP<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, _Float16, true, false>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, Float16, true, false>>>>' requested here 161 | RunWithBwdDQDKDVKernel<FmhaBwdDQDKDVKernel>(param, stream); | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/xformers/csrc/attention/hip_fmha/ck_tiled_fmha_grouped_backward_hip.h:357:14: note: in instantiation of member function 'grouped_backward_causalmask_bias_dropout_dispatch<_Float16, false, false, true, false, 32>::Run' requested here 357 | MaxK>::Run(param, stream); | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/xformers/csrc/attention/hip_fmha/ck_tiled_fmha_grouped_backward_fp16.hip:30:15: note: in instantiation of function template specialization 'run_grouped_backward_causalmask_bias_dropout_dispatch<_Float16, false, false, true, false, 32>' requested here 30 | run_grouped_backward_causalmask_bias_dropout_dispatch< | ^ In file included from /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/xformers/csrc/attention/hip_fmha/ck_tiled_fmha_grouped_backward_fp16.hip:12: In file included from /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/xformers/csrc/attention/hip_fmha/ck_tiled_fmha_grouped_backward_hip.h:13: In file included from /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha_hip.hpp:22: In file included from /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha/pipeline/block_fmha_bwd_convert_dq_hip.hpp:8: /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha/pipeline/block_fmha_bwd_pipeline_default_policy_hip.hpp:734:31: error: division by zero is undefined [-Werror,-Wdivision-by-zero] 734 | ? 
KThreadRead / (kfold * K0PerThreadWrite / K0PerThreadRead) | ^ ~~~~~~~~~~~~ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha/pipeline/block_fmha_bwd_pipeline_default_policy_hip.hpp:992:16: note: in instantiation of function template specialization 'ck_tile::BlockFmhaBwdPipelineDefaultPolicy::MakeXTLdsBlockDescriptor<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>, 32, 128, 8, 4>' requested here 992 | return MakeXTLdsBlockDescriptor<Problem, kNPerBlock, kKPerBlock, kKPack, kKPackT>(); | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha/pipeline/block_fmha_bwd_pipeline_default_policy_hip.hpp:1001:42: note: in instantiation of function template specialization 'ck_tile::BlockFmhaBwdPipelineDefaultPolicy::MakeShuffledKLdsWriteBlockDescriptor<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>' requested here 1001 | auto shuffled_k_lds_block_desc = MakeShuffledKLdsWriteBlockDescriptor(); | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha/pipeline/block_fmha_bwd_pipeline_default_policy_hip.hpp:1625:13: note: in instantiation of function template specialization 'ck_tile::BlockFmhaBwdPipelineDefaultPolicy::MakeKTLdsReadBlockDescriptor<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>' requested here 1625 | MakeKTLdsReadBlockDescriptor().get_element_space_size(); | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha/pipeline/block_fmha_bwd_pipeline_default_policy_hip.hpp:1703:44: note: in instantiation of function template specialization 'ck_tile::BlockFmhaBwdPipelineDefaultPolicy::GetSmemSizeKT<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>' requested here 1703 | constexpr index_t smem_size_kt = GetSmemSizeKT(); | ^ 
/home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha/pipeline/block_fmha_bwd_dq_dk_dv_pipeline_kr_ktr_vr_iglp_hip.hpp:83:33: note: in instantiation of function template specialization 'ck_tile::BlockFmhaBwdPipelineDefaultPolicy::GetSmemSize<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>' requested here 83 | return Policy::template GetSmemSize(); | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/ops/fmha/kernel/fmha_bwd_kernel_hip.hpp:580:43: note: (skipping 2 contexts in backtrace; use -ftemplate-backtrace-limit=0 to see all) 580 | return ck_tile::max(FmhaPipeline::GetSmemSize(), | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/host/kernel_launch_hip.hpp:21:5: note: in instantiation of member function 'ck_tile::FmhaBwdDQDKDVKernel<ck_tile::BlockFmhaBwdDQDKDVPipelineKRKTRVRIGLP<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, _Float16, true, false>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, _Float16, true, false>>>::operator()' requested here 21 | Kernel{}(args...); | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/third_party/composable_kernel_tiled/include/ck_tile/host/kernel_launch_hip.hpp:38:25: note: in instantiation of function template specialization 'ck_tile::kentry<128, 1, ck_tile::FmhaBwdDQDKDVKernel<ck_tile::BlockFmhaBwdDQDKDVPipelineKRKTRVRIGLP<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, _Float16, true, false>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, _Float16, true, false>>>, ck_tile::FmhaBwdDQDKDVKernel<ck_tile::BlockFmhaBwdDQDKDVPipelineKRKTRVRIGLP<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, _Float16, true, false>>, 
ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, _Float16, true, false>>>::FmhaBwdGroupModeKargs>' requested here 38 | const auto kernel = kentry<MaxThreadPerBlock, MinBlockPerCu, KernelImpl, Args...>; | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/xformers/csrc/attention/hip_fmha/ck_tiled_fmha_grouped_backward_hip.h:161:11: note: in instantiation of function template specialization 'grouped_backward_causalmask_bias_dropout_dispatch<_Float16, false, false, true, false, 32>::RunWithBwdDQDKDVKernel<ck_tile::FmhaBwdDQDKDVKernel<ck_tile::BlockFmhaBwdDQDKDVPipelineKRKTRVRIGLP<ck_tile::BlockFmhaBwdPipelineProblem<_Float16, _Float16, _Float16, _Float16, float, float, float, _Float16, unsigned short, _Float16, _Float16, _Float16, _Float16, _Float16, _Float16, FmhaBwdShape<32>, true, false, ck_tile::SimplifiedGenericAttentionMask<>, ck_tile::BlockDropoutBwd<false, true, false>, ck_tile::TileFmhaTraits<true, true, false, false, ck_tile::BlockAttentionBiasEnum::NO_BIAS, true, false, false, false, 1>>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, _Float16, true, false>>, ck_tile::Default2DEpilogue<ck_tile::Default2DEpilogueProblem<float, Float16, true, false>>>>' requested here 161 | RunWithBwdDQDKDVKernel<FmhaBwdDQDKDVKernel>(param, stream); | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/xformers/csrc/attention/hip_fmha/ck_tiled_fmha_grouped_backward_hip.h:357:14: note: in instantiation of member function 'grouped_backward_causalmask_bias_dropout_dispatch<_Float16, false, false, true, false, 32>::Run' requested here 357 | MaxK>::Run(param, stream); | ^ /home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/xformers/csrc/attention/hip_fmha/ck_tiled_fmha_grouped_backward_fp16.hip:30:15: note: in instantiation of function template specialization 'run_grouped_backward_causalmask_bias_dropout_dispatch<_Float16, false, false, true, false, 32>' requested here 30 | run_grouped_backward_causalmask_bias_dropout_dispatch< | ^ fatal error: too many errors emitted, stopping now [-ferror-limit=] 20 errors generated when compiling for gfx1032. ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2105, in _run_ninja_build subprocess.run( File "/home/superuser/.pyenv/versions/3.10.12/lib/python3.10/subprocess.py", line 526, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last): File "/home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/setup.py", line 584, in setuptools.setup( File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/init.py", line 87, in setup return distutils.core.setup(**attrs) File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 185, in setup return run_commands(dist) File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 201, in run_commands dist.run_commands() File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands self.run_command(cmd) File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command super().run_command(command) File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command cmd_obj.run() File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/_distutils/command/build.py", line 132, in run self.run_command(cmd_name) File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/_distutils/cmd.py", line 319, in run_command self.distribution.run_command(command) File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command super().run_command(command) File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command cmd_obj.run() File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 84, in run _build_ext.run(self) File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 346, in run self.build_extensions() File "/home/superuser/MyPrograms/wunjo/wunjo/portable/xformers/setup.py", line 541, in build_extensions super().build_extensions() File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 866, in build_extensions build_ext.build_extensions(self) File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 466, in build_extensions self._build_extensions_serial() File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 492, in _build_extensions_serial self.build_extension(ext) File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/command/build_ext.py", line 246, in build_extension _build_ext.build_extension(self, ext) File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/Cython/Distutils/build_ext.py", line 135, in build_extension super(build_ext, self).build_extension(ext) File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/setuptools/_distutils/command/build_ext.py", line 547, in build_extension objects = self.compiler.compile( File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 679, in unix_wrap_ninja_compile _write_ninja_file_and_compile_objects( File "/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1785, in _write_ninja_file_and_compile_objects _run_ninja_build( File 
"/home/superuser/.pyenv/versions/wunjo/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 2121, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error compiling objects for extension'

TIA

lw commented 5 days ago

That seems to be an issue in upstream PyTorch: https://github.com/pytorch/pytorch/issues/72918

They claim that, for NVIDIA, it is due to a botched system setup, and installing the right packages should fix it. Could you verify your installation?

I don't know if this is also the case for AMD. Someone commented on that issue but no one answered. Could you try commenting there too, or opening a new issue on PyTorch?
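In case the ROCm side has the same kind of packaging gap, a quick check like this (a sketch, assuming a default /opt/rocm layout; on ROCm the Thrust headers normally come from rocThrust rather than CUDA) would at least confirm whether the header exists on your system:

```bash
# Confirm the HIP toolchain is visible and the Thrust headers are installed.
hipconfig --version
ls /opt/rocm/include/thrust/complex.h

# On Debian-based systems the header usually ships in a rocthrust package.
dpkg -l | grep -i rocthrust
```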

brcisna commented 5 days ago

@lw

Thank you very much! I will post the same error log on the PyTorch issue tracker and report back here. I am very green at this stuff, it goes without saying.

I think if I get xformers to compile correctly, it may eliminate the float32 error I get when launching wunjo. By the way, you should check out 'Wunjo AI V2'. It doesn't get much ink, but it is a very cool AI app.

Thanks again.