microsoft / T-MAC

Low-bit LLM inference on CPU with lookup table

compile llama.cpp failed #43

Closed. qw1319 closed this issue 2 months ago.

qw1319 commented 2 months ago

When I compile this project on Ubuntu, I get this error:

error: _Float16 is not supported on this target
typedef _Float16 half;
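For context, this is the kind of guard that typically avoids the error when a compiler or target lacks `_Float16`. This is only a sketch for illustration, not T-MAC's actual code; the `__FLT16_MANT_DIG__` check and the `uint16_t` fallback are assumptions.

```c
/* Sketch only, not the project's source. GCC and Clang predefine
 * __FLT16_MANT_DIG__ on targets where _Float16 is usable, so the
 * typedef can be guarded instead of failing to compile. */
#include <stdint.h>

#if defined(__FLT16_MANT_DIG__)
typedef _Float16 half;   /* native 16-bit floating type */
#else
typedef uint16_t half;   /* assumption: raw 16-bit storage fallback */
#endif
```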

kaleid-liner commented 2 months ago

Are you building on an arm64 or an x86 CPU? Please also provide the complete error log from run_pipeline.py.
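To help diagnose this, a small stand-alone probe can show whether the compiler picked up by the build supports `_Float16` on the current target. This is a hypothetical helper for illustration, not part of run_pipeline.py or this repository.

```c
/* probe_float16.c: hypothetical probe, not from this repository.
 * Compile it with the same compiler the project's build uses. */
#include <stdio.h>

int main(void) {
#if defined(__FLT16_MANT_DIG__)
    _Float16 h = (_Float16)1.5f;
    printf("_Float16 supported, value = %f\n", (double)h);
#else
    printf("_Float16 is NOT supported by this compiler/target\n");
#endif
    return 0;
}
```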

qw1319 commented 2 months ago

I have fixed this problem (it was caused by a wrong library path).