mgonzs13/llama_ros
llama.cpp (GGUF LLMs) and llava.cpp (GGUF VLMs) for ROS 2
MIT License, 161 stars, 27 forks
Issues
#11  ChatLlamaROS stream fix and demo (agonzc34, closed 2 months ago, 0 comments)
#10  ChatLlamaROS (w/ image support) for Langchain (agonzc34, closed 2 months ago, 0 comments; see the usage sketch after this list)
#9   Langchain Llava Node instead of Llama Node (kyle-redyeti, opened 4 months ago, 6 comments)
#8   Issue building with CUDA enabled inside a docker container on Jetson Orin NX (kyle-redyeti, opened 4 months ago, 7 comments)
#7   Breaks a dependency on ROS 2 Foxy; by default it pulls in version 24, which is not compatible (b0rh, closed 7 months ago, 0 comments)
#6   Llava (mgonzs13, closed 8 months ago, 0 comments)
#5   Schema converter (mgonzs13, closed 8 months ago, 0 comments)
#4   Create stablelm.launch.py (smellslikeml, closed 10 months ago, 1 comment)
#3   How to select the CMake version to build with (shengyuwoo, closed 1 year ago, 1 comment)
#2   gguf (mgonzs13, closed 1 year ago, 0 comments)
#1   Is it possible to implement Llama2 (walkhan, closed 1 year ago, 1 comment)
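Issues #10 and #11 concern the ChatLlamaROS integration for LangChain. The snippet below is a minimal sketch of how such an integration is typically used; the import path llama_ros.langchain, the default constructor, and the requirement that a llama_ros node is already launched are assumptions based on the issue titles, not a verified API, so consult the repository's demos for the actual usage.

    # Sketch only: module path, constructor defaults, and runtime setup are assumptions.
    import rclpy  # ROS 2 Python client library
    from langchain_core.messages import HumanMessage
    from llama_ros.langchain import ChatLlamaROS  # assumed import path

    rclpy.init()  # the chat model communicates with an already-running llama_ros node

    chat = ChatLlamaROS()  # constructor arguments omitted; defaults are assumed
    reply = chat.invoke([HumanMessage(content="Summarize what llama_ros provides.")])
    print(reply.content)

    rclpy.shutdown()

Because ChatLlamaROS is written as a LangChain chat model, the same object can be dropped into LangChain chains or agents in place of any other chat backend.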