meta-llama / llama-stack
Composable building blocks to build Llama Apps
MIT License · 4.63k stars · 586 forks

Issues
#504 docs: include ollama link in quickstart (sarthakgupta072, closed 3 hours ago, 1 comment)
#503 Add Ollama inference mocks (vladimirivic, opened 16 hours ago, 0 comments)
#502 ollama distro: can't find shield (heyjustinai, opened 18 hours ago, 0 comments)
#501 Updating Zero-to-hero-guide (init27, closed 16 hours ago, 0 comments)
#500 Fix fp8 quantization script. (liyunlu0618, closed 23 hours ago, 1 comment)
#499 use logging instead of prints (dineshyv, closed 20 hours ago, 0 comments)
#498 Fix fp8 quantization script. (liyunlu0618, closed 1 day ago, 0 comments)
#497 Since we are pushing for HF repos, we should accept them in inference configs (ashwinb, closed 1 day ago, 0 comments)
#496 Don't depend on templates.py when print llama stack build messages (ashwinb, closed 1 day ago, 0 comments)
#495 How much can American logistics dispatchers earn while based in Uzbekistan (Kadamboy, opened 1 day ago, 1 comment)
#494 Restructure docs (dineshyv, closed 1 day ago, 2 comments)
#493 append the error in the internal server error (chuenlok, opened 1 day ago, 1 comment)
#492 Make run yaml optional so dockers can start with just --env (ashwinb, closed 1 day ago, 0 comments)
#491 register with provider even if present in stack (dineshyv, closed 1 day ago, 1 comment)
#490 Adding memory provider mocks (vladimirivic, closed 9 hours ago, 1 comment)
#489 fall to back to read from chroma/pgvector when not in cache (dineshyv, closed 1 day ago, 0 comments)
#488 rebase (init27, closed 2 days ago, 0 comments)
#487 add changelog (dineshyv, closed 2 days ago, 0 comments)
#486 Added optional md5 validate command once download is completed (varunfb, closed 2 days ago, 0 comments)
#485 Support Tavily as built-in search tool. (iseeyuan, closed 2 days ago, 4 comments)
#484 map llama model -> provider model id in ModelRegistryHelper (mattf, opened 2 days ago, 0 comments)
#483 Update Ollama supported llama model list (hickeyma, opened 3 days ago, 1 comment)
#482 [wip] Add cache for PGVector memory adapter (hickeyma, opened 3 days ago, 0 comments)
#481 support adding alias for models without hf repo/sku entry (dineshyv, closed 3 days ago, 0 comments)
#480 fix core model ids for ollama (dineshyv, closed 2 days ago, 1 comment)
#479 fix llama stack build for together & llama stack build from templates (yanxi0830, closed 3 days ago, 0 comments)
#478 Add version to REST API url (ashwinb, closed 3 days ago, 0 comments)
#477 get stack run config based on template name (dineshyv, closed 3 days ago, 1 comment)
#476 Update kotlin client docs (Riandy, closed 3 days ago, 0 comments)
#475 Colab links in Zero-Hero would work? (HamidShojanazeri, closed 18 hours ago, 2 comments)
#474 Adding memory provider test fakes (vladimirivic, opened 3 days ago, 2 comments)
#473 Fix incorrect ollama port in ollama run.yaml template (vladimirivic, closed 3 days ago, 1 comment)
#472 Allow models to be registered as long as llama model is provided (dineshyv, closed 3 days ago, 0 comments)
#471 add quantized model ollama support (wukaixingxp, closed 3 days ago, 1 comment)
#470 remove pydantic namespace warnings using model_config (mattf, closed 3 days ago, 0 comments)
#469 [Agentic Eval] add ability to run agents generation (yanxi0830, closed 3 days ago, 0 comments)
#468 Auto-generate distro yamls + docs (ashwinb, closed 3 days ago, 0 comments)
#467 update quick start to have the working instruction (chuenlok, closed 2 days ago, 1 comment)
#466 httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol. (Drizzle2001, opened 6 days ago, 0 comments)
#465 Extend shorthand support for the `llama stack run` command (vladimirivic, closed 6 days ago, 0 comments)
#464 fix faiss serialize and serialize of index (dineshyv, closed 6 days ago, 0 comments)
#463 await initialize in faiss (dineshyv, closed 6 days ago, 0 comments)
#462 Enable more supported models with ollama (yanxi0830, opened 6 days ago, 1 comment)
#461 Fix conda env names in distribution example run files (hickeyma, closed 3 days ago, 1 comment)
#460 [Adding new Inference provider] get error at runtime when the stack code is looking for a model to serve (sanjayk-github-dev, opened 1 week ago, 12 comments)
#459 move hf addapter->remote (yanxi0830, closed 1 week ago, 0 comments)
#458 unregister for memory banks and remove update API (dineshyv, closed 1 week ago, 6 comments)
#457 Add a verify-download command to llama CLI (ashwinb, closed 1 week ago, 0 comments)
#456 Fix build configure deprecation message (hickeyma, closed 1 week ago, 0 comments)
#455 Add ability for local persistence for memory adapters (yanxi0830, opened 1 week ago, 5 comments)