mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed, P2P inference
https://localai.io
MIT License

Support Rockchip RK3588 / NPU / Mali-G610 GPU #2205

Open adamcohenhillel opened 6 months ago

adamcohenhillel commented 6 months ago

Is your feature request related to a problem? Please describe. I am trying to run LocalAI more efficiently on an Orange Pi 5B, which has an NPU and an Arm Mali-G610 GPU.

More specifically:

| Component | Specification |
| --- | --- |
| CPU | 8-core 64-bit processor; big.LITTLE architecture: 4× Cortex-A76 at 2.4 GHz and 4× Cortex-A55 at 1.8 GHz |
| GPU | Arm Mali-G610 MP4; compatible with OpenGL ES 1.1/2.0/3.2, OpenCL 2.2 and Vulkan 1.2; 3D graphics engine and 2D graphics engine |
| NPU | Built-in AI accelerator with up to 6 TOPS; supports INT4/INT8/INT16 mixed operation |

There are different frameworks that support taking advantage of the NPU / GPU on this board to accelerate model inference, and I was wondering whether that could be integrated into LocalAI as well. A sketch of what NPU inference looks like on this hardware is below.
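For context, the RK3588's NPU is normally driven through Rockchip's rknn-toolkit-lite2 Python runtime rather than the GPU stack. Here is a minimal sketch of on-device inference, assuming the `rknnlite` package is installed on the board and a model has already been converted to `.rknn` format with rknn-toolkit2 on a host machine (the file name and input shape below are placeholders):

```python
import numpy as np
from rknnlite.api import RKNNLite

rknn = RKNNLite()

# Load a model previously converted to Rockchip's .rknn format
# (placeholder file name).
if rknn.load_rknn('model.rknn') != 0:
    raise RuntimeError('failed to load RKNN model')

# Initialize the runtime; core_mask picks which of the RK3588's
# three NPU cores to run on.
if rknn.init_runtime(core_mask=RKNNLite.NPU_CORE_0) != 0:
    raise RuntimeError('failed to init NPU runtime')

# Run inference on a dummy input (shape depends on the converted model).
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = rknn.inference(inputs=[dummy])
print(outputs[0].shape)

rknn.release()
```

This works for vision-style models today, but LLM inference would still need backend support in something like llama.cpp, which is the discussion linked below.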

There is also an interesting discussion on this topic on the Ollama repo: https://github.com/ggerganov/llama.cpp/issues/722

Rockchip boards are pretty cheap and can be great for edge AI use cases. It would be great to have support for that!

localai-bot commented 6 months ago

Hello,

Thank you for opening this issue, and thank you for providing the detailed specifications of the Orange Pi 5B you are working with. It's great to hear that Rockchip devices like this offer promising capabilities for edge AI applications.

I will mention this feature request to the maintainer of the LocalAI project, @mudler, and the community will evaluate the possibilities of incorporating support for the Rockchip RK3588, its NPU, and the Mali-G610 GPU into the project.

Please feel free to add any further information or clarification if needed. We will keep you updated on any progress regarding this request.

This conversation is being managed by a bot, an experiment of @mudler.

fakezeta commented 6 months ago

Hi @adamcohenhillel, thank you for your interest in LocalAI.

The discussion you linked is on the llama.cpp repo, not Ollama. LocalAI already uses llama.cpp as one of its backends, so as soon as RK3588 acceleration is supported in the upstream repo it will be supported here in LocalAI as well.
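Once upstream support lands, wiring it up on the LocalAI side should mostly come down to the usual model definition YAML. A sketch, using LocalAI's documented config fields with a placeholder model file (`gpu_layers` would only take effect once a llama.cpp build with Mali/NPU offload is in place):

```yaml
# Hypothetical model definition for an RK3588 board, assuming a future
# llama.cpp build with Mali-G610 / NPU acceleration; the file name is
# a placeholder.
name: local-model
backend: llama-cpp
parameters:
  model: model.Q4_K_M.gguf
context_size: 4096
f16: true
# Offload layers to the accelerator once the backend supports it.
gpu_layers: 35
```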

Let's monitor the updates from the discussion you linked!

nekkitl commented 6 months ago

+1

docjyJ commented 5 months ago

+1