-
So that people can clearly see which LLMs work well with Letta and which do not.
A similar idea to
https://aider.chat/docs/leaderboards/
Meanwhile, the 'Berkeley Function-Calling Leaderboard' is…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
https://github.com/FunAudioLLM/CosyVoice/blob/main/cosyvoice/llm/llm.py
```
for i in range(max_len):
    y_pred, att_cache, cnn_cache = self.llm.forward_chunk(lm_input, offset=offset, require…
```
-
Hi,
is it possible to use a vLLM endpoint with the OpenAI integration, where we can set the base_url, instead of OpenAI itself?
I had a similar issue with Weave where I wanted to trace local LLMs. Would be great if it’s supp…
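For reference, the plain OpenAI Python client already supports this via base_url; a minimal sketch of what I mean (the URL, key, and model name are placeholders for whatever the local vLLM server exposes):

```
from openai import OpenAI

# vLLM exposes an OpenAI-compatible HTTP server; only the base_url and
# api_key differ from a regular OpenAI setup. Values below are placeholders.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # local vLLM endpoint instead of api.openai.com
    api_key="EMPTY",                       # vLLM does not require a real key by default
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # whatever model the server is serving
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Exposing a similar base_url override in the integration would make it possible to trace local LLMs as well.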
-
The Spark Cognitive Model, developed by iFLYTEK, represents a significant leap in the field of artificial intelligence.
https://www.xfyun.cn/doc/spark/HTTP%E8%B0%83%E7%94%A8%E6%96%87%E6%A1%A3.ht…
-
```
from paperqa import Settings, ask
import os

os.environ["OPENAI_API_KEY"] = "EMPTY"

local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3",
            "litellm_params": {
                "model": "ollama/ll…
```
-
The LLM produces output (hypotheses, a final report, significant events, etc.) after it has completed its analysis. We should allow the user to interact with some portion of the LLM output (the hypothe…
-
Give the LLM the ability to browse the web and search for information it needs to fulfill the user's request.
-
# Large Language Model in Action - 1: The Past and Present of LLMs
[https://wangwei1237.github.io/LLM_in_Action/llm_intro.html](https://wangwei1237.github.io/LLM_in_Action/llm_intro.html)
-
The results obtained from an LLM using the same prompt are not always the same, even when setting the temperature to 0. How did you handle this situation to make comparisons of results in yo…
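A minimal sketch of how one can at least quantify this variance by repeating the identical request; the model name is a placeholder, and the seed parameter is only a best-effort hint, not a determinism guarantee:

```
from openai import OpenAI

client = OpenAI()

def sample(prompt: str, n: int = 5) -> list[str]:
    """Send the same prompt n times with temperature=0 and collect the outputs."""
    outputs = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
            seed=42,  # best-effort reproducibility; outputs may still differ
        )
        outputs.append(resp.choices[0].message.content)
    return outputs

runs = sample("Summarize the abstract in one sentence.")
print(f"{len(set(runs))} distinct outputs out of {len(runs)} runs")
```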