intel / xFasterTransformer

[bug] Met some problems while following *step by step tutorial* #328

Closed: lum1n0us closed this issue 4 months ago

lum1n0us commented 4 months ago

Recently, I met some problems while following the step by step tutorial. I hope to get a second look and some clarification.

All of the commands below are run in a container from cesg-prc-registry.cn-beijing.cr.aliyuncs.com/xfastertransformer/xfastertransformer:dev-ubuntu22.04 (as the tutorial suggests).

Convert LLM model

$ python ./tools/opt_convert.py -i ./data/opt-1.3b-hf -o ./data/opt-1.3b-cpu

There is actually no ./tools/opt_convert.py in https://github.com/intel/xFasterTransformer.git. I did locate another opt_convert.py at src/xfastertransformer/tools/opt_convert.py, but it does not work either.

$ mkdir build && cd build
$ cmake ..
$ make -j

I've tried the latest commit (3b2e8b15b24bc0d80f5a8c15c223b5f8f92e84fe), the v1.5.0 tag, and the v1.4.0 tag. The same compile error occurs in all three:

In file included from /root/xfastertransformer/src/kernels/attention_kernels.cpp:16:
/root/xfastertransformer/src/utils/decoder_util.h:19:10: fatal error: mkl.h: No such file or directory
   19 | #include <mkl.h>
shanzhou2186 commented 4 months ago

Please follow the README in the repo to convert the model:

python -c 'import xfastertransformer as xft; xft.LlamaConvert().convert("${HF_DATASET_DIR}","${OUTPUT_DIR}")'

You can replace LlamaConvert with OptConvert.
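For the OPT model from the command at the top of this issue, that would look roughly like the following (the input and output paths are the ones from the tutorial command and may differ on your machine):

$ python -c 'import xfastertransformer as xft; xft.OptConvert().convert("./data/opt-1.3b-hf", "./data/opt-1.3b-cpu")'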

lum1n0us commented 4 months ago

Thanks. Model conversion now passes, but I am still blocked by compile errors about missing headers.

Now it is:

src/utils/numa_allocator.cpp:17:10: fatal error: numa.h: No such file or directory
   17 | #include <numa.h>

After running yum install numactl-devel, it was resolved.
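(For anyone hitting the same error inside the dev-ubuntu22.04 image mentioned above, the Debian/Ubuntu equivalent should be libnuma-dev, which provides /usr/include/numa.h:)

$ apt-get install -y libnuma-dev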

Now the only remaining problem is the missing mkl.h:

In file included from xFasterTransformer/src/searchers/sample_search.cpp:16:
xFasterTransformer/src/utils/decoder_util.h:19:10: fatal error: mkl.h: No such file or directory
   19 | #include <mkl.h>
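A likely fix, though it is not confirmed in this thread: mkl.h is the Intel oneAPI MKL header, so installing oneMKL and sourcing the oneAPI environment (so that MKLROOT and the include paths are set) before re-running cmake should let the build find it. A sketch, assuming the default oneAPI install prefix:

# Assumption: oneMKL is installed under the default /opt/intel/oneapi prefix; adjust the path if yours differs.
$ source /opt/intel/oneapi/setvars.sh
$ cd build && cmake .. && make -j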