janhq / cortex.cpp

Local AI API Platform
https://cortex.so
Apache License 2.0

epic: llama.cpp is installed by default #1217

Closed: dan-homebrew closed this issue 1 month ago

dan-homebrew commented 1 month ago

Goal

Cortex.cpp should have a super easy UX that is on par with market alternatives

Idea

I wonder whether the solution to this is a way to have an optional local lookup as part of cortex engines install.
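To make the idea concrete, here is a minimal C++ sketch, not the actual cortex.cpp implementation: the bundled directory path is an assumption, and it only illustrates checking a local location that an installer could pre-populate before falling back to a download.

```cpp
// Hypothetical sketch of an "optional local lookup" during engine install.
// The bundled directory is an illustrative assumption, not the real layout.
#include <filesystem>
#include <iostream>
#include <string>

namespace fs = std::filesystem;

// Returns the path of a locally bundled engine if it exists, otherwise empty.
fs::path FindBundledEngine(const std::string& engine_name,
                           const fs::path& bundled_dir) {
  const fs::path candidate = bundled_dir / engine_name;
  if (fs::exists(candidate)) {
    return candidate;
  }
  return {};
}

int main() {
  // Assumed location that an installer could pre-populate.
  const fs::path bundled_dir{"/opt/cortex/bundled-engines"};
  const fs::path local = FindBundledEngine("cortex.llamacpp", bundled_dir);
  if (!local.empty()) {
    std::cout << "Using bundled engine at " << local << "\n";
  } else {
    std::cout << "No bundled engine found; falling back to download\n";
  }
  return 0;
}
```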

Out-of-scope (future)

Outcomes

Key Questions

Appendix

Why?

Our current cortex.cpp v0.1 onboarding UX is not user friendly:

[Screenshots: current onboarding flow]

dan-homebrew commented 1 month ago

@hiento09 has worked on a PR that moves the llama.cpp engine install into the installer, but I am concerned that it is still not a great UX:

https://github.com/janhq/cortex.cpp/pull/1219/files

namchuai commented 1 month ago

Just saw this today. "Engine not loaded yet" does not mean the engine has not been downloaded yet; there might be a problem with the engine loading logic.
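To illustrate the distinction being pointed at, here is a purely hypothetical sketch (none of these types exist in cortex.cpp) that reports "installed but not loaded" separately from "not installed", so the message points at the right problem:

```cpp
// Illustrative only: separating "not installed" from "installed but failed
// to load" so the user-facing message is not ambiguous.
#include <iostream>
#include <string>

enum class EngineState { kNotInstalled, kInstalledNotLoaded, kLoaded };

std::string Describe(EngineState state) {
  switch (state) {
    case EngineState::kNotInstalled:
      return "Engine is not installed yet; run the engine install step.";
    case EngineState::kInstalledNotLoaded:
      return "Engine is installed but not loaded; check the loading logic and logs.";
    case EngineState::kLoaded:
      return "Engine is loaded and ready.";
  }
  return "Unknown engine state.";
}

int main() {
  std::cout << Describe(EngineState::kInstalledNotLoaded) << "\n";
  return 0;
}
```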

0xSage commented 1 month ago

QA Updates (v75)

vansangpfiev commented 1 month ago

We are downloading the CUDA dependencies that the installed Nvidia driver supports. After https://github.com/janhq/cortex.cpp/issues/1085 has been resolved, we should update the CUDA dependency logic for the installer (see the sketch below).

cc: @hiento09 @dan-homebrew
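A rough sketch of that direction, assuming the installer can detect the driver version (e.g. parsed from nvidia-smi output). The function and version thresholds are illustrative assumptions; the real mapping should come from NVIDIA's compatibility matrix, not from this snippet.

```cpp
// Hedged sketch: choose which CUDA dependency package to download based on
// the detected NVIDIA driver version. Thresholds are illustrative examples.
#include <iostream>
#include <string>

// Returns the CUDA major version to fetch, or an empty string for CPU-only.
std::string PickCudaVersion(int driver_major) {
  if (driver_major >= 525) return "12";  // example: newer drivers -> CUDA 12.x
  if (driver_major >= 450) return "11";  // example: older drivers -> CUDA 11.x
  return "";                             // driver too old or absent: skip CUDA
}

int main() {
  const int driver_major = 535;  // e.g. parsed from `nvidia-smi` output
  const std::string cuda = PickCudaVersion(driver_major);
  if (cuda.empty()) {
    std::cout << "No compatible CUDA dependency; installing CPU-only engine\n";
  } else {
    std::cout << "Downloading CUDA " << cuda << ".x dependencies\n";
  }
  return 0;
}
```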

gabrielle-ong commented 1 month ago

QA v123: ✅ Mac ✅ Windows ✅ Linux

[QA screenshots: Mac, Windows, Linux]