Atome-FE / llama-node

Believe in AI democratization. llama for Node.js, backed by llama-rs, llama.cpp, and rwkv.cpp; works locally on your laptop CPU. Supports llama/alpaca/gpt4all/vicuna/rwkv models.
https://llama-node.vercel.app/
Apache License 2.0

Update cross-compile.mts with aarch64 architecture? #54

Open · yupihello opened 1 year ago

yupihello commented 1 year ago

Is it possible to upgrade cross-compiling with the aarch64 Linux architecture? I am doing build tests for a Raspberry Pi, based on 64-bit Ubuntu. There is a corresponding version of musl for aarch64.

hlhr202 commented 1 year ago

> Is it possible to upgrade cross-compiling with the aarch64 Linux architecture? I am doing build tests for a Raspberry Pi, based on 64-bit Ubuntu. There is a corresponding version of musl for aarch64.

@yupihello Hi, theoretically it is possible, but I don't have a device to test it on. An ARM64 device may need different flags for different platform features (such as supporting different instruction sets). Not sure if anyone can help with this 😅 You can clone this repo and check the platform-specific CMake flags in llama.cpp's CMakeLists here, then change the build.rs for llama-sys here. After all of these steps you can enable the Rust build target for aarch64 Linux.
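To make the suggested build.rs change concrete, here is a minimal sketch of how one might select CMake defines per target architecture. This is an assumption, not the actual llama-sys code: the `LLAMA_AVX`/`LLAMA_AVX2`/`LLAMA_FMA` option names are taken from typical llama.cpp CMakeLists of that era and should be verified against the real file, and `cmake_defines` is a hypothetical helper.

```rust
// Hypothetical helper for a llama-sys-style build.rs: choose CMake feature
// flags based on the target architecture. Flag names are illustrative;
// check llama.cpp's CMakeLists.txt for the real option names.
fn cmake_defines(target_arch: &str) -> Vec<(&'static str, &'static str)> {
    match target_arch {
        // x86_64 hosts can usually enable llama.cpp's x86 SIMD paths.
        "x86_64" => vec![
            ("LLAMA_AVX", "ON"),
            ("LLAMA_AVX2", "ON"),
            ("LLAMA_FMA", "ON"),
        ],
        // aarch64 (e.g. a Raspberry Pi on 64-bit Ubuntu): the AVX options
        // do not apply; NEON support is typically detected by the compiler.
        "aarch64" => vec![
            ("LLAMA_AVX", "OFF"),
            ("LLAMA_AVX2", "OFF"),
            ("LLAMA_FMA", "OFF"),
        ],
        // Unknown architectures: leave llama.cpp's defaults alone.
        _ => vec![],
    }
}

fn main() {
    // In a real build.rs, Cargo sets CARGO_CFG_TARGET_ARCH to the *target*
    // architecture (e.g. "aarch64" when building for aarch64-unknown-linux-musl).
    let arch = std::env::var("CARGO_CFG_TARGET_ARCH")
        .unwrap_or_else(|_| "x86_64".to_string());
    for (key, value) in cmake_defines(&arch) {
        println!("would pass to CMake: -D{}={}", key, value);
    }
}
```

In an actual build.rs these pairs would be passed to the CMake invocation (for example via the `cmake` crate's `Config::define`) before enabling the `aarch64-unknown-linux-musl` Rust target.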