SHI-Labs / NATTEN

Neighborhood Attention Extension. Bringing attention to a neighborhood near you!
https://shi-labs.com/natten/

Can't build it from source for Windows #18

Closed neurosynapse closed 6 months ago

neurosynapse commented 1 year ago

Hello,

I have been trying to install NATTEN on Windows from source, unfortunately without success. I have tried a lot of different approaches. It would be nice if you could help me with this. Here is the error I'm getting.

[screenshot of the build error]

Best regards, Roberts

alihassanijr commented 1 year ago

Hello and thank you for your interest.

Unfortunately we don't have the ability to test on Windows at this time, and as a result have been unable to support it. That said we'd be happy to look at your logs and help figure out the issue. Could you try and build again and redirect the output and errors into a file, and share that file? (Our Makefile's install target already does this, but you probably can't do make in command prompt, so I guess you would do something like pip install -e . > natten-out.txt 2>&1 ?)
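As an aside, the `> natten-out.txt 2>&1` idiom works the same way in Windows cmd and in POSIX shells: `>` captures stdout into the file and `2>&1` merges stderr into the same stream, so compiler errors land in the log too. A minimal demonstration with a stand-in command (not the actual NATTEN build):

```shell
# Stand-in for `pip install -e .`: prints one line to stdout and one to stderr.
# (On Windows cmd, use `python` instead of `python3`.)
python3 -c "import sys; print('stdout line'); print('stderr line', file=sys.stderr)" > natten-out.txt 2>&1
# Both lines end up in the log file:
cat natten-out.txt
```

On Windows cmd, `type natten-out.txt` plays the role of `cat`.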

Based on what we've seen so far the issue is that Windows builds require additional headers in the CPP backend. What we could do is branch out, push temporary solutions and see if those work for you. If they do, we can merge that into master so we can at least support building from source on Windows.

A different solution is to use Docker. It may sound more complicated than it actually is: setting up Docker with WSL is reportedly quite easy, but you'd need to read this to make sure you configure it correctly so that GPUs are recognized. Once you do that, you could just start a container based on nvidia/cuda, pytorch, or NGC images (highly recommended for training), and install NATTEN's Linux version with wheels.

neurosynapse commented 1 year ago

Hello Ali,

Additional branch would be really nice. Please find my log output attached.

natten-out.txt

Best regards, Roberts

alihassanijr commented 1 year ago

Thanks for sharing this. I'm still going through the output you shared, but one thing that jumps out at me is that the errors seem to be focused on the dispatchers more than the underlying kernels.

I think pybind could be part of the issue, if not the whole issue. I pushed a temporary fix that I borrowed from xformers https://github.com/alihassanijr/NATTEN-Torch/tree/feature/windows. Can you clone my fork's feature/windows branch and try installing there?

rm -rf NATTEN # remove your clone of the main repository
git clone -b feature/windows https://github.com/alihassanijr/NATTEN-Torch NATTEN
cd NATTEN/
neurosynapse commented 1 year ago

Hi Ali,

Nope, seems no changes. I added the output. natten-out.txt

Best regards, Roberts

alihassanijr commented 1 year ago

Could you share your pytorch version?

python3 -c "import torch; print(torch.__version__); print(torch.__config__.show())"
neurosynapse commented 1 year ago

of course:

(oneformer) C:\Users\rob\git_repos_download\Semantic_segmentation\OneFormer\NATTEN-Torch>python
Python 3.8.16 (default, Jan 17 2023, 22:25:28) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.10.1
>>> print(torch.__config__.show())
PyTorch built with:

  • C++ Version: 199711
  • MSVC 192829337
  • Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
  • OpenMP 2019
  • LAPACK is enabled (usually provided by MKL)
  • CPU capability usage: AVX2
  • CUDA Runtime 11.3
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  • CuDNN 8.2
  • Magma 2.5.4
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=C:/cb/pytorch_1000000000000/work/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/cb/pytorch_1000000000000/work/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON,

Best regards, Roberts

alihassanijr commented 1 year ago

Let me get back to you on this. From what I'm seeing in the logs you provided it's, for the most part, failing on the dispatchers, and it suggests those are syntax errors, which doesn't really make much sense (at least I've rarely come across that issue).

We definitely want to investigate this issue further and resolve it in our future releases, but if you happen to be in a rush to get started, I think Docker might be worth exploring -- just as a temporary solution. (Just saying because I think it may take a few days, maybe even weeks, to figure this out.)

alihassanijr commented 1 year ago

Actually, can you try installing pywin32 and cython (pip will skip them if they're already installed):

pip install pywin32 cython

https://pypi.org/project/pywin32/

neurosynapse commented 1 year ago

Hi Ali,

Unfortunately my CPU doesn't seem to support Hyper-V, which is needed to run Docker on Windows. Of course, having a solution in a matter of days would be nice; however, if that's not possible, it would be nice to have it after a while as well. I'd be ready to support you with any information you need.

I will test the package you mentioned and let you know on the results.

Best regards, Roberts


neurosynapse commented 1 year ago

Hi Ali,

Both packages are installed already.

(oneformer) C:\Users\rob\git_repos_download\Semantic_segmentation\OneFormer\NATTEN-Torch>pip install pywin32 cython
Requirement already satisfied: pywin32 in c:\users\rob\anaconda3\envs\oneformer\lib\site-packages (305)
Requirement already satisfied: cython in c:\users\rob\anaconda3\envs\oneformer\lib\site-packages (0.29.33)

Best regards, Roberts


alihassanijr commented 1 year ago

Actually I noticed your environment is named oneformer. Do you happen to be using detectron2? If yes, did you build that from source as well, or did you use wheels?

smpark64 commented 1 year ago

I have the same issue on Windows. Do you have any update?

alihassanijr commented 1 year ago

Sorry, unfortunately we've been unable to dig any deeper since we don't have a Windows machine, and haven't really heard back from the OP on this. If you're willing to share logs with us @smpark64 , we'd be happy to look and try to find the failure points in compiling NATTEN on Windows.

If you do, please install the following pip packages:

pip install pywin32 cython ninja

and then try building from the main branch while dumping stdout and stderr into a file and sharing it with us:

git clone https://github.com/SHI-Labs/NATTEN
cd NATTEN
pip install -e . > natten-out.txt 2>&1

The reason we'd recommend the main branch is that the last few commits refactored a lot of kernels, so the logs should be easier to parse.

Thank you for your interest in NATTEN.

a-gn commented 1 year ago

Hi @alihassanijr, I got this build log (OneDrive link because it's too large for pastebin).

I have:

I tried adding -Xcompiler /permissive- in case this was a standard compliance issue and got this slightly different log.

Does this help?

alihassanijr commented 1 year ago

Thanks for sharing your logs. It looks like most of the errors are down to the dispatchers, which tells me the compiler either doesn't like the macros (which is unlikely since we follow what torch does), or that it misinterprets function/kernel names because they're not defined in headers, and only source files.

I've broken down what should be tested into two different branches, both of which compile just as well as the main branch on linux. Could you try building them on windows and sharing the log again?

First branch: https://github.com/alihassanijr/NATTEN-Torch/tree/windows-fix

Second branch: https://github.com/alihassanijr/NATTEN-Torch/tree/windows-fix-2

I think with a bit of luck we can figure out supporting windows pretty soon.

money6651626 commented 1 year ago

Sir, I have the same problem, but I found something new. For the above question, you can patch the file D:\anaconda3\envs\oral\Lib\site-packages\torch\utils\cpp_extension.py (changing command = ['ninja', '-v'] to command = ['ninja', '--version']). But now, ninja still can't build this file ("D:\NATTEN\build\temp.win-amd64-cpython-38\Release\NATTEN\src\natten\csrc\cpu\na1d.obj"). log.txt I hope this will help with your fix!

money6651626 commented 1 year ago

D:\anaconda3\envs\oral\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
D:\anaconda3\envs\oral\include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
D:\anaconda3\envs\oral\lib\site-packages\torch\include\ATen/cpu/vec/vec256/vec256_bfloat16.h(12): warning C4068: unknown pragma 'GCC'
D:\anaconda3\envs\oral\lib\site-packages\torch\include\ATen/cpu/vec/vec256/vec256_bfloat16.h(13): warning C4068: unknown pragma 'GCC'
D:\anaconda3\envs\oral\lib\site-packages\torch\include\ATen/cpu/vec/vec256/vec256_bfloat16.h(782): warning C4068: unknown pragma 'GCC'
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(67): error C3409: empty attribute block is not allowed
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): error C2143: syntax error: missing ']' before '&'
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): error C2059: syntax error: '&'
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(67): error C2143: syntax error: missing ';' before '{'
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): error C2059: syntax error: '{'
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(67): error C2143: syntax error: missing ')' before ';'
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): error C2046: illegal case
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): error C2047: illegal default
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): error C2043: illegal break
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(67): error C2059: syntax error: ')'
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): error C2059: syntax error: 'break'
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): error C2059: syntax error: 'case'
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(67): error C2909: 'pointwise_neighborhood_1d_bias': explicit instantiation of a function template requires a return type
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(67): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): error C7538: pointwise_neighborhood_1d_bias is not a variable template
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): error C2440: 'initializing': cannot convert from 'initializer list' to 'int'
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): note: the initializer contains too many elements
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): error C2143: syntax error: missing ';' before '<'
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): error C2086: 'int natten::pointwise_neighborhood_1d_bias': redefinition
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): note: see declaration of 'natten::pointwise_neighborhood_1d_bias'
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(55): error C2059: syntax error: ':'
D:\NATTEN\src\natten\csrc\cpu\na1d.cpp(67): fatal error C1003: error count exceeds 100; stopping compilation

This is the error log for na1d.

alihassanijr commented 1 year ago

@money6651626 Thanks for your help. Could you kindly confirm which branch you're on?

money6651626 commented 1 year ago

@alihassanijr The main branch

alihassanijr commented 1 year ago

@money6651626 Thanks. Could you try the two branches I shared above and see if the issue persists?

First branch: https://github.com/alihassanijr/NATTEN-Torch/tree/windows-fix

Second branch: https://github.com/alihassanijr/NATTEN-Torch/tree/windows-fix-2

Just as an FYI, the issue is clearly the compiler failing on our kernel dispatchers, which are written as macros mostly inspired by PyTorch's native backend at the time.

eanson023 commented 1 year ago

I had the same problem; I couldn't install this package on Windows.

alihassanijr commented 1 year ago

@eanson023 Unfortunately we don't have the ability to test on Windows at this time, and as a result have been unable to support it. That said we'd be happy to look at your logs and help figure out the issue.

Could you kindly try building using one of these branches and let us know if your issue gets resolved?

First branch: https://github.com/alihassanijr/NATTEN-Torch/tree/windows-fix

Second branch: https://github.com/alihassanijr/NATTEN-Torch/tree/windows-fix-2

You would just need to clone my fork, checkout to the branch, and attempt to compile with:

pip3 install -e . > natten-out.txt 2>&1
eanson023 commented 1 year ago

Hi @alihassanijr :

I provide two files, which are the log output of the installation command executed under the windows-fix and windows-fix-2 branches of NATTEN-Torch, you can take a look:

windows-fix: natten-out-windows-fix.txt

windows-fix-2: natten-out-windows-fix-2.txt

I hope this will help with your fix!

Best regards, Eanson

alihassanijr commented 1 year ago

Thanks @eanson023 .

Could you try this one as well?

https://github.com/alihassanijr/NATTEN-Torch/tree/windows-fix-3

eanson023 commented 1 year ago

Hi @alihassanijr :

Big Bang! I successfully installed natten using the windows-fix-3 branch! Below is the log output:

Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Obtaining file:///D:/YanSheng/NATTEN-Torch
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'done'
Requirement already satisfied: packaging in d:\anaconda\envs\ys-tagm\lib\site-packages (from natten==0.14.7.dev0) (23.1)
Installing collected packages: natten
  Running setup.py develop for natten
Successfully installed natten-0.14.7.dev0
alihassanijr commented 1 year ago

That's awesome! Thank you @eanson023 !

Could you also run the unit tests to check that the CUDA path doesn't have any issues?

python -m unittest discover -v -s ./tests
eanson023 commented 1 year ago

Hi @alihassanijr :

The following is the log output of the unit test execution. One of the test failures is caused by insufficient memory on my GPU; the others are OK.

natten-test-out.txt

alihassanijr commented 1 year ago

Thanks for your help @eanson023 @money6651626 @a-gn @neurosynapse @smpark64 . I'll merge the branch in a bit, and mark this as complete.

money6651626 commented 1 year ago

@alihassanijr Unfortunately, although I installed it successfully, it doesn't work on CUDA. torch version: 1.11.0. PyTorch built with:

money6651626 commented 1 year ago

cuda_log.txt

d:\natten\src\natten\functional.py in forward(ctx, query, key, rpb, kernel_size, dilation)
    117     query = query.contiguous()
    118     key = key.contiguous()
--> 119     attn = _C.natten2dqkrpb_forward(query, key, rpb, kernel_size, dilation)
    120     ctx.save_for_backward(query, key)
    121     ctx.kernel_size = kernel_size

RuntimeError: NATTEN is not compiled with CUDA! Please make sure you installed correctly by referring to shi-labs.com/natten.

alihassanijr commented 1 year ago

Looks like CUDA's not detected when you install, because the CPU tests appear to be running fine. However, I'm a bit confused, because the error file you shared should never occur at the same time as the error in your message.

Can you confirm which it is?

money6651626 commented 1 year ago

Oh, sorry: the txt is the output of running `make` for NATTEN and `make test`, and the latter is the error message from my test code. In general, it doesn't seem to work on CUDA on my PC. I'm wondering whether my versions are incompatible (so I created a new env in conda with torch 1.12 and CUDA 11.3, but the same issues persist). By the way, my GPU is an RTX 3070. The full log is attached: cuda_log.txt

alihassanijr commented 1 year ago

No worries. As I said, NATTEN appears to have compiled successfully, it just failed to detect the CUDA compiler, and compiled the CPU sources only. Could you kindly run these in an interactive python session and share the output?

import torch
from torch.utils.cpp_extension import CUDA_HOME
has_cuda = torch.cuda.is_available()
print(f"Torch build: {torch.__version__}")
print(f"Has CUDA? {has_cuda}")
print(f"CUDA Home: {CUDA_HOME}")
print(torch.__config__.show())

By the way, since you used make, I'm assuming you're using WSL. If that is true, could you also run these in your WSL terminal with your conda environment activated?

which pip
which pip3
which python
which python3
money6651626 commented 1 year ago

for the first one:

Torch build: 1.11.0
Has CUDA? True
CUDA Home: None
PyTorch built with:

alihassanijr commented 1 year ago

Can you try building with FORCE_CUDA=1:

FORCE_CUDA=1 make

And see if it succeeds?

The problem is that torch is not setting CUDA_HOME on your end, but still recognizes CUDA as available. That's not uncommon, but the fix depends on where your CUDA driver and compiler are installed.

While you're at it, could you also run these?

which nvcc
whereis cuda
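One caveat worth flagging: the inline `FORCE_CUDA=1 make` form only works in POSIX shells (bash/zsh, including WSL); in PowerShell the variable has to be set separately, e.g. `$env:FORCE_CUDA = "1"` followed by `make`. A bash sketch, where the `CUDA_HOME` path is a placeholder for wherever your toolkit actually lives:

```shell
# Ask the build to compile the CUDA sources even if autodetection fails.
export FORCE_CUDA=1
# Placeholder path; point this at your real CUDA toolkit root.
export CUDA_HOME=/usr/local/cuda
echo "FORCE_CUDA=$FORCE_CUDA CUDA_HOME=$CUDA_HOME"
# then run: make    (or: pip install -e .)
```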
money6651626 commented 1 year ago

I found that CUDA Home was None (which seemed incorrect, so I set it); now it is CUDA Home: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3. natten-out.txt It seems to be back to the original error.

money6651626 commented 1 year ago

(base) PS D:\NATTEN> FORCE_CUDA=1 make
FORCE_CUDA=1 : The term 'FORCE_CUDA=1' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1

money6651626 commented 1 year ago

(base) PS D:\NATTEN> which nvcc
/cygdrive/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.3/bin/nvcc
(base) PS D:\NATTEN> whereis cuda
cuda:

alihassanijr commented 1 year ago

Looks like your compiler is not compatible. I can't really recommend anything specific because I don't know how CUDA is set up on Windows, but you could try installing it via conda (again, I don't know how that would play out on Windows).

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include\crt/host_config.h(160): fatal error C1189: #error:  -- unsupported Microsoft Visual Studio version! Only the versions between 2017 and 2019 (inclusive) are supported! The nvcc flag '-allow-unsupported-compiler' can be     used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk.
money6651626 commented 1 year ago

OK, I will try it. If there is any progress, I will discuss it with you again.

alihassanijr commented 1 year ago

I know there's more than one conda package named CUDA. Just install this cuda toolkit and I think Python/Pytorch should pick up the correct compiler and CUDA_HOME:

conda install -c "nvidia/label/cuda-11.3.0" cuda-toolkit
money6651626 commented 1 year ago

Oh!!!! You reminded me: my Microsoft Visual Studio version was 2022, which is too high for it. I reset it to 2019, and now it works! Thanks a lot, and have a nice day!

eanson023 commented 1 year ago

Hi @alihassanijr :

Does natten have a pure pytorch version that can help us see the specific implementation details?

Best regards, Eanson

alihassanijr commented 1 year ago

@eanson023 we used to, but not anymore. The implementation using the python interface is just not practical, not even for the smallest of problems. As a result, we just removed it (I think over a year ago.)

Attention patterns like this (i.e. sliding window, or more generally, dynamic context) should never be implemented at the Python level, for the same reason you should never write a matrix multiplication (or, a better example, a discrete convolution) in pure Python. You can try to write it, but it won't be of much use.
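To illustrate the point, here is a deliberately naive pure-Python sketch of single-head 1D neighborhood attention (an illustration of the attention pattern only, not NATTEN's implementation; the function name and edge-clamping choice are ours): each token attends to a window of `kernel_size` neighbors. The nested Python loops are exactly why such an implementation is impractically slow at any real scale.

```python
import math

def naive_neighborhood_attention_1d(q, k, v, kernel_size=3):
    """Single-head 1D neighborhood attention over Python lists of vectors.
    q, k, v: lists (length n >= kernel_size) of equal-length float lists."""
    n, d = len(q), len(q[0])
    half = kernel_size // 2
    out = []
    for i in range(n):
        # Clamp the window at sequence edges so every query sees exactly
        # kernel_size keys (the behavior neighborhood attention is known for).
        start = min(max(i - half, 0), n - kernel_size)
        neigh = range(start, start + kernel_size)
        # Scaled dot-product scores against the neighborhood keys.
        scores = [sum(q[i][c] * k[j][c] for c in range(d)) / math.sqrt(d)
                  for j in neigh]
        # Numerically stable softmax over the window.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Weighted sum of neighborhood values.
        out.append([sum(w * v[j][c] for w, j in zip(weights, neigh))
                    for c in range(d)])
    return out

# Tiny example: 5 tokens, 2-dim features.
seq = [[float(i), 1.0] for i in range(5)]
result = naive_neighborhood_attention_1d(seq, seq, seq, kernel_size=3)
print(len(result), len(result[0]))  # prints: 5 2
```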

eanson023 commented 1 year ago

Hi @alihassanijr :

Sincerely, thank you for your answer. I have another question: what software or tools did you use to produce the relative memory usage and running time comparison charts in the paper? I now want to add NATTEN to our research.

Best regards, Eanson

alihassanijr commented 1 year ago

@eanson023 Good question.

We typically benchmark latency by using the PyTorch profiler. We wrote scripts on top of the profiler to be able to benchmark multiple test cases at the same time, and dump the latency values into a CSV file. Some of those scripts will be released in the near future as a reference.

Once you have the latency values between two methods, you'd just divide one by the other to get the relative improvement. For instance, if method A runs at 1 millisecond, and method B runs at 2 milliseconds on the same problem size, A is 100% faster or is 200% the speed of B.
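The arithmetic above, as a quick sketch (the function name and latency numbers are illustrative):

```python
def relative_speed(lat_a_ms, lat_b_ms):
    """Return (percent faster, percent of speed) of method A relative to B,
    given their latencies in milliseconds on the same problem size."""
    percent_faster = (lat_b_ms / lat_a_ms - 1.0) * 100.0
    percent_of_speed = (lat_b_ms / lat_a_ms) * 100.0
    return percent_faster, percent_of_speed

# Method A: 1 ms, method B: 2 ms on the same problem size.
faster, of_speed = relative_speed(1.0, 2.0)
print(f"A is {faster:.0f}% faster, i.e. {of_speed:.0f}% the speed of B")
# prints: A is 100% faster, i.e. 200% the speed of B
```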

Memory is slightly more complicated. I think there definitely exists a tool in the NVIDIA Nsight toolkit that would report that per kernel. In our case, we used pytorch_memlab to look at end-to-end memory usage given either an entire layer or an entire model.

And of course, when benchmarking either memory usage or throughput, it's always a good idea to make sure no other user process is occupying the GPU. In addition, if you're benchmarking anything with a native PyTorch operation or module, it's a good idea to do warmup steps.
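A minimal timing harness following that advice (warmup iterations before the timed ones); this uses Python's `time.perf_counter` as a simple stand-in for the PyTorch-profiler-based scripts described above:

```python
import time

def benchmark(fn, warmup=10, iters=100):
    """Median wall-clock latency of fn() in milliseconds, after warmup runs."""
    for _ in range(warmup):   # warmup: let caches and clocks settle
        fn()
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        times.append((time.perf_counter() - t0) * 1e3)
    times.sort()
    return times[len(times) // 2]  # median is more robust to outliers than mean

# Hypothetical workload standing in for a layer's forward pass.
lat_ms = benchmark(lambda: sum(range(1000)))
print(f"median latency: {lat_ms:.4f} ms")
```

Note that when timing CUDA operations, you would additionally need to synchronize the device before reading the clock, since GPU kernel launches are asynchronous.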

Let me know if you have any other questions, and good luck on your research!

eanson023 commented 1 year ago

Hi @alihassanijr :

Thanks again for your patient answer, it helped me a lot, thanks!

Best regards, Eanson

zxl1203 commented 1 year ago

Hello @alihassanijr, I'm very interested in your research and I am trying to install NATTEN on Windows from source. Though I have tried the approaches you mentioned, there are still errors. I would appreciate it if you could give some advice. Here is the log: natten-out.txt

alihassanijr commented 1 year ago

I'd be happy to help @zxl1203 . Unfortunately the error file you shared only appears to include the standard output, which only indicates that building failed, and doesn't give me any information about what caused it.

The only thing I'm seeing is that cmake set up successfully, but building itself failed, which is good news in some ways, since I expected that to be the first point of failure.

Could you kindly try piping stdout and stderr into a file and sharing that?

You could also go into the build/ subdirectory that was created by the setup script, run make, and see if that helps.

My guess is that, given you're on Windows, it could actually be that you don't have make installed. If that is the case, you can try to install it in your conda environment:

conda install -c anaconda make

or via PyPi:

pip install make
zxl1203 commented 1 year ago

Hello @alihassanijr , I'm very happy to receive your response. I'm sorry that maybe I didn't describe my error clearly. Following your advice, I installed make, but the error still appears. From the stdout, I think the error arises in setup.py, line 196, but I have no idea what caused it. I also tried to run make in build/; the result was make: *** No targets specified and no makefile found. Stop. Here is my log, which contains my stdin: log.txt I hope to hear from you soon.