ahyatt/llm
A package abstracting LLM capabilities for Emacs.
GNU General Public License v3.0 · 135 stars · 19 forks
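Many of the issues below concern the package's provider API (`make-llm-openai-compatible` in #36 and #32, Ollama host customization in #4 and #7, `llm-chat-streaming` in #20). As a rough sketch of how a provider is configured and called, assuming the constructors and entry points described in the package's README (exact names and signatures may differ between versions):

```elisp
;; Hedged usage sketch, not a definitive example; names follow the README.
(require 'llm)
(require 'llm-ollama)

;; Build an Ollama provider. The :host slot is the customization point
;; requested in #4; HTTPS support for it was the subject of #7.
(setq my-provider
      (make-llm-ollama :host "localhost" :chat-model "llama2"))

;; Synchronous chat call with a simple one-shot prompt.
(llm-chat my-provider (llm-make-simple-chat-prompt "Hello"))
```

Streaming (`llm-chat-streaming`) and embedding (`llm-embedding`) calls follow the same provider-first shape, which is why several issues above are provider-specific bug reports.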
Issues
#50 · Open WebUI compatibility · LemonBreezes · opened 22 hours ago · 0 comments
#49 · Add support for HTTP/HTTPS proxies · r0man · closed 1 month ago · 1 comment
#48 · Error using ollama through a proxy · theasp · closed 1 month ago · 7 comments
#47 · [FR] Support JSON mode · NightMachinery · closed 2 months ago · 2 comments
#46 · Plz · r0man · closed 2 months ago · 8 comments
#45 · [Q/FR] "Legacy" Completion · NightMachinery · opened 2 months ago · 32 comments
#44 · fix: OpenAI API keys passed as multibyte strings · hraban · closed 2 months ago · 2 comments
#43 · Context does not get sent if list of interactions ≥2 · hraban · closed 2 months ago · 27 comments
#42 · Change some function names and make most of them private · r0man · closed 2 months ago · 1 comment
#41 · Run handler code via a timer in the main loop · r0man · closed 3 months ago · 20 comments
#40 · Update media type and event source packages · r0man · closed 3 months ago · 2 comments
#39 · Plz · r0man · closed 3 months ago · 2 comments
#38 · Feature request/idea: Use generic functions to extract provider errors · r0man · closed 3 months ago · 10 comments
#37 · Error callback not called if url request failed · s-kostyaev · closed 3 months ago · 1 comment
#36 · Using make-llm-openai-compatible with Azure OpenAI service fails to connect to endpoint · KaiHa · closed 3 months ago · 7 comments
#35 · Error handling & JSON parsing · r0man · closed 3 months ago · 8 comments
#34 · llm-embedding ignores host with ollama provider · s-kostyaev · closed 3 months ago · 0 comments
#33 · Vertex streaming and buffer killing · r0man · closed 3 months ago · 3 comments
#32 · Using `make-llm-openai-compatible` with Mistral AI fails parsing the partial responses · KaiHa · closed 3 months ago · 1 comment
#31 · Fix issue with JSON array parser not emitting all objects · r0man · closed 3 months ago · 3 comments
#30 · Plz event type symbol · r0man · closed 3 months ago · 0 comments
#29 · Strip plz changes and add JSON array stream media type · r0man · closed 3 months ago · 22 comments
#28 · Add support for the application/x-ndjson media type · r0man · closed 3 months ago · 0 comments
#27 · Use Plz in OpenAI provider · r0man · closed 3 months ago · 4 comments
#26 · Add Plz · r0man · closed 3 months ago · 7 comments
#25 · Fix ollama mentioned instead of llama.cpp · SmallAndSoft · closed 4 months ago · 1 comment
#24 · Add CI · s-kostyaev · closed 4 months ago · 7 comments
#22 · [feature] Anthropic / Claude2 Support · robertmeta · closed 3 months ago · 4 comments
#21 · Use split-string instead of string-split in llm-fake · s-kostyaev · closed 4 months ago · 1 comment
#20 · llm-gemini.el is using the non-streaming URL for llm-chat-streaming · whhone · closed 5 months ago · 1 comment
#19 · Unparseable buffer saved to *llm-vertex-unparseable* · whhone · closed 5 months ago · 6 comments
#18 · Increase the default request timeout from 5 to 10 · whhone · closed 5 months ago · 2 comments
#17 · LLM request timed out for Gemini · whhone · closed 5 months ago · 0 comments
#16 · Ollama chat endpoint support · tquartus · closed 5 months ago · 6 comments
#15 · llama2 embedding vectors from Python and llm.el don't seem to match · jkitchin · closed 6 months ago · 4 comments
#14 · Use tiktoken.el for token counting of openai's models · zkry · opened 6 months ago · 6 comments
#12 · Don't ask the user before canceling the query · Stebalien · closed 6 months ago · 1 comment
#11 · Return request buffer from `llm-request-async' · Stebalien · closed 6 months ago · 2 comments
#10 · function pos-eol breaks compatibility with emacs 28.1 · s-kostyaev · closed 7 months ago · 1 comment
#9 · Add ability to change open api base url · s-kostyaev · closed 6 months ago · 7 comments
#8 · [feature] Support for the llamacpp server? · draxil · closed 8 months ago · 3 comments
#7 · Add support for https when using Ollama · mprasil · closed 8 months ago · 2 comments
#6 · Provide a way to cancel queries · Stebalien · closed 6 months ago · 5 comments
#5 · ollama context fix · s-kostyaev · closed 8 months ago · 2 comments
#4 · Feature request: add ability to customize ollama's host · s-kostyaev · closed 8 months ago · 2 comments
#3 · Provide example code for ollama provider · s-kostyaev · closed 8 months ago · 22 comments
#2 · Ollama support · r0man · closed 8 months ago · 13 comments
#1 · fixing some typos in readme · tvraman · closed 11 months ago · 5 comments