wangzhaode / mnn-llm

LLM deploy project based on MNN.
Apache License 2.0

Windows: error when compiling MNN with "cmake --build . -- /m:8" #64

Closed · Zaaachary closed this issue 1 year ago

Zaaachary commented 1 year ago

Could one of the developers please help me figure out what is going wrong here?

CMake and CUDA are installed.

I ran cmake -DCMAKE_BUILD_TYPE=Release -DMNN_CUDA=ON ..

Then running cmake --build . -- /m:8 fails with an error.

Two warnings appear during compilation (screenshot omitted).

The final error output is:

  ShapeUnpack.cpp
  ShapeUnravelIndex.cpp
  ShapeWhere.cpp
  SizeComputer.cpp
  Generating code...
  MNNTransform.vcxproj -> D:\Project\LLM\ChatGLM\MNN\build\MNNTransform.dir\Debug\MNNTransform.lib
  Building NVCC (Device) object source/backend/cuda/CMakeFiles/MNN_CUDA.dir/execution/Debug/MNN_CUDA_generated_ArgMaxExecution.cu.obj
  ArgMaxExecution.cu
  cl: command line warning D9025: overriding '/O2' with '/Od'
  cl: command line warning D9025: overriding '/Od' with '/O2'
  cl: command line error D8016: '/RTC1' and '/O2' command-line options are incompatible
  CMake Error at MNN_CUDA_generated_ArgMaxExecution.cu.obj.Debug.cmake:220 (message):
    Error generating
    D:/Project/LLM/ChatGLM/MNN/build/source/backend/cuda/CMakeFiles/MNN_CUDA.dir/execution/Debug/MNN_CUDA_generated_ArgMaxExecution.cu.obj

D:\ProgramFiles\Develop\VisualStudio\Program\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(247,5): error MSB8066:
the custom build for the following sources (paths relative to D:\Project\LLM\ChatGLM\MNN\source\backend\cuda\) exited with
code 1. [D:\Project\LLM\ChatGLM\MNN\build\source\backend\cuda\MNN_CUDA.vcxproj]

  execution\ArgMaxExecution.cu; execution\ArgMinExecution.cu; execution\BinaryExecution.cu;
  execution\ConvBaseKernel.cu; execution\ConvCutlassExecution.cu; execution\ConvDepthWiseExecution.cu;
  execution\ConvSingleInputExecution.cu; execution\ConvWinogradExecution.cu; execution\DeconvBaseKernel.cu;
  execution\DeconvSingleInputExecution.cu; execution\GatherV2Execution.cu; execution\GridSampleExecution.cu;
  execution\InterpExecution.cu; execution\LayerNormExecution.cu; execution\LoopExecution.cu;
  execution\MatMulExecution.cu; execution\MultiInputConvExecution.cu; execution\MultiInputDeconvExecution.cu;
  execution\PReLUExecution.cu; execution\PoolExecution.cu; execution\RangeExecution.cu; execution\Raster.cu;
  execution\ReductionExecution.cu; execution\ScaleExecution.cu; execution\SelectExecution.cu;
  execution\SoftmaxExecution.cu; execution\Transpose.cu; execution\UnaryExecution.cu;
  execution\cutlass\CutlassConvCommonExecution.cu; execution\cutlass\CutlassDeconvCommonExecution.cu;
  execution\cutlass\CutlassGemmCUDACoreFloat16.cu; execution\cutlass\CutlassGemmCUDACoreFloat16Deconv.cu;
  execution\cutlass\CutlassGemmCUDACoreFloat32.cu; execution\cutlass\CutlassGemmCUDACoreFloat32Decov.cu;
  execution\cutlass\CutlassGemmTensorCore.cu; execution\cutlass\CutlassGemmTensorCore884.cu;
  execution\cutlass\CutlassGemmTensorCoreDeconv.cu; execution\int8\ConvInt8CutlassExecution.cu;
  execution\int8\CutlassGemmInt8TensorCore.cu; execution\int8\CutlassGemmInt8TensorCore16832.cu;
  execution\int8\DepthwiseConvInt8Execution.cu; execution\int8\FloatToInt8Execution.cu;
  execution\int8\Int8ToFloatExecution.cu; CMakeLists.txt
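
[Editor's note: the Debug directories in the log and the '/RTC1' vs '/O2' clash suggest the Visual Studio generator is building its default Debug configuration (multi-config generators ignore CMAKE_BUILD_TYPE at build time) while the CUDA build rules inject Release host flags. A minimal sketch of the usual workaround, assuming the stock Visual Studio generator:]

    # Select the configuration explicitly at build time; /m:8 still requests 8 parallel MSBuild processes
    cmake --build . --config Release -- /m:8
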
Zaaachary commented 1 year ago

I switched to Ninja, but it still fails.

Following the MNN docs, I configured with cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DMNN_WIN_RUNTIME_MT=ON -DMNN_BUILD_SHARED_LIBS=ON -DMNN_CUDA=ON

PS D:\Project\LLM\ChatGLM\MNN\build> ninja
[259/375] Building NVCC (Device) object source/backend/cud...DA.dir/execution/MNN_CUDA_generated_ArgMaxExecution.cu.obj
FAILED: source/backend/cuda/CMakeFiles/MNN_CUDA.dir/execution/MNN_CUDA_generated_ArgMaxExecution.cu.obj D:/Project/LLM/ChatGLM/MNN/build/source/backend/cuda/CMakeFiles/MNN_CUDA.dir/execution/MNN_CUDA_generated_ArgMaxExecution.cu.obj
cmd.exe /C "cd /D D:\Project\LLM\ChatGLM\MNN\build\source\backend\cuda\CMakeFiles\MNN_CUDA.dir\execution && D:\ProgramFiles\Develop\cmake\bin\cmake.exe -E make_directory D:/Project/LLM/ChatGLM/MNN/build/source/backend/cuda/CMakeFiles/MNN_CUDA.dir/execution/. && D:\ProgramFiles\Develop\cmake\bin\cmake.exe -D verbose:BOOL=OFF -D build_configuration:STRING=Release -D generated_file:STRING=D:/Project/LLM/ChatGLM/MNN/build/source/backend/cuda/CMakeFiles/MNN_CUDA.dir/execution/./MNN_CUDA_generated_ArgMaxExecution.cu.obj -D generated_cubin_file:STRING=D:/Project/LLM/ChatGLM/MNN/build/source/backend/cuda/CMakeFiles/MNN_CUDA.dir/execution/./MNN_CUDA_generated_ArgMaxExecution.cu.obj.cubin.txt -P D:/Project/LLM/ChatGLM/MNN/build/source/backend/cuda/CMakeFiles/MNN_CUDA.dir/execution/MNN_CUDA_generated_ArgMaxExecution.cu.obj.Release.cmake"
nvcc fatal   : Host compiler targets unsupported OS.
CMake Error at MNN_CUDA_generated_ArgMaxExecution.cu.obj.Release.cmake:220 (message):
  Error generating
  D:/Project/LLM/ChatGLM/MNN/build/source/backend/cuda/CMakeFiles/MNN_CUDA.dir/execution/./MNN_CUDA_generated_ArgMaxExecution.cu.obj

The root cause looks like this line: nvcc fatal : Host compiler targets unsupported OS.
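
[Editor's note: this nvcc failure generally means the CUDA toolkit rejected the MSVC host compiler it picked up, e.g. a cl.exe whose target architecture or version that toolkit release does not support. A quick diagnostic sketch; the -ccbin path below is hypothetical and must point at a supported x64 cl.exe on your machine:]

    # Inspect which CUDA toolkit and host compiler would be paired
    nvcc --version
    cl
    # Point nvcc at a specific supported host compiler (hypothetical MSVC install path)
    nvcc -ccbin "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\bin\Hostx64\x64" --version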

Tlntin commented 1 year ago

On Windows, the build succeeds without CUDA (you also have to switch the system to UTF-8 encoding globally); enabling CUDA makes it fail.
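
[Editor's note: rather than switching the system-wide code page, the MSVC source-encoding problems can usually be avoided per build by compiling sources as UTF-8. A sketch, assuming a fresh configure without CUDA; /utf-8 is the standard MSVC switch:]

    # Compile sources as UTF-8 instead of relying on the system code page
    cmake .. -DMNN_CUDA=OFF -DCMAKE_C_FLAGS="/utf-8" -DCMAKE_CXX_FLAGS="/utf-8"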

dthcle commented 1 year ago

For CUDA build errors on Windows, see {MNN_ROOT}\docs\compile\engine.md, which covers this.

Zaaachary commented 1 year ago

Problem solved: I downgraded CUDA from 12 to 11.7/11.8.
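
[Editor's note: with several CUDA toolkits installed side by side, the configure step can be pointed at the downgraded toolkit explicitly instead of reordering PATH; the "MNN_CUDA_generated_*.cu.obj" names in the logs indicate the build goes through CMake's FindCUDA module, which honors CUDA_TOOLKIT_ROOT_DIR. A sketch, assuming CUDA 11.8 in its default install location:]

    # Pin the configure to a specific toolkit (hypothetical default install path for CUDA 11.8)
    cmake .. -DCMAKE_BUILD_TYPE=Release -DMNN_CUDA=ON -DCUDA_TOOLKIT_ROOT_DIR="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8"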

kxs2018 commented 1 year ago

If only I had found this issue earlier, I would not have wasted an entire afternoon.

Pangu-Immortal commented 12 months ago

cmake -DCMAKE_BUILD_TYPE=Release ..

cmake -DCMAKE_BUILD_TYPE=Release -DMNN_CUDA=ON ..

With either of the above, whether or not CUDA is enabled, I get the same error:

cmake --build . -- /m:8

make: *** No rule to make target '/m:8'. Stop.

Windows 11. Does anyone know why?
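
[Editor's note: /m:8 is an MSBuild-only switch; the "No rule to make target '/m:8'" line shows this build tree was generated for Make, which treats /m:8 as a target name. A generator-agnostic sketch:]

    # -j works with any generator (CMake >= 3.12); for Makefiles the native equivalent is "-- -j8"
    cmake --build . -j 8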