SciSharp / LLamaSharp
A C#/.NET library to run LLM models (🦙LLaMA/LLaVA) on your local device efficiently.
https://scisharp.github.io/LLamaSharp
MIT License · 2.02k stars · 272 forks
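Many of the issues below concern model loading and executors. As context, here is a minimal sketch of typical LLamaSharp usage, following the conventions of the project's README at the time; the model path is a placeholder and parameter values are illustrative assumptions, not recommendations.

```csharp
using System;
using LLama;
using LLama.Common;

// Placeholder path: point this at a local GGUF model file.
string modelPath = "path/to/model.gguf";

var parameters = new ModelParams(modelPath)
{
    ContextSize = 1024,   // prompt context window, in tokens
    GpuLayerCount = 5     // layers to offload to the GPU, if a GPU backend is installed
};

// Load weights once, then create an inference context over them.
using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

// Stream the model's reply token by token.
await foreach (var text in executor.InferAsync(
    "Hello, who are you?",
    new InferenceParams { MaxTokens = 64 }))
{
    Console.Write(text);
}
```

The `GpuLayerCount` and backend choice (CPU, CUDA, etc.) are exactly the kind of configuration discussed in issues such as #737 and #727 below.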
Issues (newest first)
#740 feat: separate the sequence and conversation. · AsakusaRinne · opened 1 day ago · 4 comments
#739 [Feature]: How should code for different LLM models be integrated into the project? · dogvane · opened 3 days ago · 3 comments
#738 feat: support dynamic native library loading in .NET standard 2.0. · AsakusaRinne · opened 3 days ago · 0 comments
#737 LLAVA Configuration · hswlab · closed 2 days ago · 4 comments
#736 Exposed basic timing information from llama.cpp · martindevans · closed 3 days ago · 0 comments
#735 Less Sampler Allocations · martindevans · closed 2 days ago · 1 comment
#734 How to better provide system information for LLMs · K1tiK4 · opened 4 days ago · 2 comments
#733 A minor optimization of the code. · AsakusaRinne · closed 4 days ago · 0 comments
#732 Add debug mode of LLamaSharp · AsakusaRinne · opened 4 days ago · 0 comments
#731 Add unit test about long context · AsakusaRinne · opened 4 days ago · 2 comments
#730 docs: update README.md · eltociear · closed 5 days ago · 0 comments
#729 Fix context params defaults · dlyz · closed 5 days ago · 1 comment
#728 Improved Example Docs · martindevans · closed 6 days ago · 0 comments
#727 [BUG]: WSL2 has problem running LLamaSharp with cuda11 · AsakusaRinne · opened 6 days ago · 0 comments
#726 KernelMemory bug fix · zsogitbe · closed 4 days ago · 3 comments
#725 Fix cublas build action · martindevans · closed 1 week ago · 0 comments
#724 [BUG]: Linux cuda version detection could be incorrect · AsakusaRinne · opened 1 week ago · 0 comments
#723 ci: add windows benchmark test. · AsakusaRinne · closed 1 week ago · 0 comments
#722 [BUG]: Answer stops abruptly after context size, even with limited prompt size · kikipoulet · opened 1 week ago · 1 comment
#721 Make `LLamaKvCacheView` Safe · martindevans · closed 6 days ago · 0 comments
#720 ci: add benchmark test. · AsakusaRinne · closed 1 week ago · 0 comments
#719 Remove `Conversation.Prompt(String)` · martindevans · closed 1 week ago · 0 comments
#718 Several updates to web project · Lamothe · closed 1 day ago · 5 comments
#717 Android Backend Support · AmSmart · opened 1 week ago · 3 comments
#716 [BUG]: DefragThreshold default is not matching llama.cpp and probably not intended · dlyz · closed 5 days ago · 6 comments
#715 Llama Text Templater · martindevans · closed 6 days ago · 2 comments
#714 Implement context shifting in executor base · ksanman · closed 1 week ago · 6 comments
#712 May 2024 Binary Update (Take 2) · martindevans · closed 4 days ago · 24 comments
#711 Optional IHistoryTransform added to ChatSession.InitializeSessionFromHistoryAsync · Norne9 · closed 2 weeks ago · 0 comments
#710 ci: add workflow to check the spellings. · AsakusaRinne · closed 2 weeks ago · 1 comment
#709 ci: add a workflow to check code format. · AsakusaRinne · closed 2 weeks ago · 3 comments
#708 Add LLaMA3 chat session example. · AsakusaRinne · closed 2 weeks ago · 0 comments
#707 [Feature]: Support for Function Calling or Tools · dcostea · opened 2 weeks ago · 1 comment
#705 Take multiple chat templates into account · AsakusaRinne · opened 2 weeks ago · 0 comments
#704 [CI] Add more unit tests to ensure the outputs are reasonable · AsakusaRinne · opened 2 weeks ago · 3 comments
#703 LLava Async Loading · martindevans · closed 2 weeks ago · 1 comment
#702 Interruptible Async Model Loading With Progress Monitoring · martindevans · closed 2 weeks ago · 1 comment
#701 Fix typo in issue templates. · AsakusaRinne · closed 2 weeks ago · 0 comments
#700 Add issue templates. · AsakusaRinne · closed 2 weeks ago · 0 comments
#699 [Feature] Allow async model loading and cancellation · AsakusaRinne · closed 2 weeks ago · 0 comments
#698 Slightly Safer Quantize Params · martindevans · closed 2 weeks ago · 0 comments
#697 Fixed Minor Issues With Model Loading · martindevans · closed 2 weeks ago · 0 comments
#696 Removed Unnecessary Constructor From Safe Handles · martindevans · closed 2 weeks ago · 0 comments
#695 Android Backend · AmSmart · opened 3 weeks ago · 2 comments
#694 Mamba · JoaoVictorVP · opened 3 weeks ago · 10 comments
#693 Namespace should be consistent · AsakusaRinne · opened 3 weeks ago · 0 comments
#692 feat: add experimental auto-download support. · AsakusaRinne · opened 3 weeks ago · 1 comment
#691 Empty batch check · martindevans · closed 2 weeks ago · 0 comments
#690 How to rebuild LLamaSharp backends · kuan2019 · opened 3 weeks ago · 2 comments
#689 SemanticKernel: Correcting non-standard way of working with PromptExecutionSettings · zsogitbe · closed 2 weeks ago · 2 comments