-
> [!NOTE]
> This issue is aimed at those attending the [RubyConf 2024 Hack Day](https://github.com/Shopify/ruby-lsp/discussions/2758)
Ruby LSP currently has an experimental chat agent:
https://githu…
-
Hi, I've built a chat application using the LlamaIndex TypeScript version.
Now I want to add a reranker. In the Python version I've tested the [llm_rerank](https://github.com/run-llama/llama_index/bl…
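The mechanism behind an LLM reranker is portable to TypeScript: prompt the LLM with numbered documents, ask for relevance scores, and re-order by the parsed scores. A minimal Python sketch of that prompt/parse pair (the function names and the exact answer format are illustrative assumptions, loosely mirroring LlamaIndex's choice-select style, not its actual implementation):

```python
import re


def build_rerank_prompt(query, docs):
    """Format the query and numbered documents into a relevance-scoring prompt."""
    listing = "\n".join(f"Document {i + 1}:\n{d}" for i, d in enumerate(docs))
    return (
        f"{listing}\n\nQuestion: {query}\n"
        "Answer with one line per relevant document, in the form "
        "'Doc: <number>, Relevance: <1-10>'."
    )


def parse_rerank_answer(answer, docs, top_n=3):
    """Parse 'Doc: n, Relevance: s' lines and return the top_n docs by score."""
    scored = []
    for m in re.finditer(r"Doc:\s*(\d+),\s*Relevance:\s*(\d+)", answer):
        idx, score = int(m.group(1)) - 1, int(m.group(2))
        if 0 <= idx < len(docs):  # ignore hallucinated document numbers
            scored.append((score, docs[idx]))
    scored.sort(key=lambda t: -t[0])
    return [d for _, d in scored[:top_n]]
```

A TypeScript port would only need an equivalent prompt builder and regex parser wired between the retriever and the response synthesizer.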
-
So that people can clearly see which LLMs work well with Letta and which do not.
A similar idea to
https://aider.chat/docs/leaderboards/
Meanwhile, the 'Berkeley Function-Calling Leaderboard' is…
-
https://github.com/FunAudioLLM/CosyVoice/blob/main/cosyvoice/llm/llm.py
```
for i in range(max_len):
    y_pred, att_cache, cnn_cache = self.llm.forward_chunk(lm_input, offset=offset, require…
-
The Spark Cognitive Model, developed by iFLYTEK, represents a significant leap in the field of artificial intelligence.
https://www.xfyun.cn/doc/spark/HTTP%E8%B0%83%E7%94%A8%E6%96%87%E6%A1%A3.ht…
-
Hi,
is it possible to use a vLLM endpoint through the OpenAI integration by setting the base_url, instead of calling OpenAI itself?
I had a similar issue with Weave, where I wanted to trace local LLMs. Would be great if it’s supp…
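Since vLLM exposes an OpenAI-compatible API, swapping the base URL is usually all that is required. A minimal stdlib sketch of what such an integration would send (the localhost port and model name are assumptions about a locally launched vLLM server):

```python
import json
import urllib.request


def build_chat_request(base_url, model, messages, api_key="EMPTY"):
    """Build an OpenAI-style chat completion request against any compatible server."""
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Content-Type": "application/json",
            # vLLM does not check the key by default; any placeholder works.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


# Pointing at a local vLLM server instead of api.openai.com:
req = build_chat_request(
    "http://localhost:8000/v1",           # assumed vLLM default address
    "meta-llama/Llama-3-8B-Instruct",     # whichever model vLLM was launched with
    [{"role": "user", "content": "Hello"}],
)
```

With the official `openai` Python SDK the same swap is just `OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")`, so supporting it is mostly a matter of plumbing a configurable base_url through.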
-
Give the LLM the ability to browse the web and search for information it needs to fulfill the user's request.
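One common way to wire this up is OpenAI-style function calling: advertise a search tool to the model, execute the calls it emits, and feed the results back. A minimal dispatch sketch (the tool name and the stubbed search backend are illustrative assumptions, not an existing API):

```python
import json

# Tool schema advertised to the LLM (OpenAI function-calling format).
SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}


def web_search(query):
    """Stub: a real implementation would call a search API or a headless browser."""
    return [{"title": f"Result for {query}", "url": "https://example.com"}]


def dispatch_tool_call(name, arguments_json):
    """Route a tool call emitted by the model to the matching local function."""
    args = json.loads(arguments_json)
    if name == "web_search":
        return json.dumps(web_search(args["query"]))
    raise ValueError(f"unknown tool: {name}")
```

The returned JSON string goes back to the model as a tool message, and the loop repeats until the model answers without requesting another call.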
-
from paperqa import Settings, ask
import os

os.environ["OPENAI_API_KEY"] = "EMPTY"
local_llm_config = {
    "model_list": [
        {
            "model_name": "ollama/llama3",
            "litellm_params": {
                "model": "ollama/ll…
-
The LLM produces output (hypotheses, a final report, significant events, etc.) after it has completed its analysis. We should allow the user to interact with some portion of the LLM output (the hypothe…
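One way to make portions of the output addressable is to have the LLM emit structured items with stable ids, so the UI can attach actions (expand, dismiss, ask a follow-up) to each one. A minimal sketch, assuming a hypothetical JSON schema with a `hypotheses` list (the field names are assumptions, not an existing format):

```python
import json


def index_output(llm_json):
    """Parse structured LLM output and index each hypothesis by id for interaction."""
    report = json.loads(llm_json)
    return {item["id"]: item for item in report.get("hypotheses", [])}


def follow_up_prompt(items, item_id, question):
    """Build a follow-up prompt scoped to one user-selected hypothesis."""
    item = items[item_id]
    return f'Regarding the hypothesis "{item["text"]}": {question}'
```

With ids in place, a click on one hypothesis can round-trip through `follow_up_prompt` without re-sending the whole report.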
-
# trtllm-bench --model models/Llama-2-7b-hf throughput --dataset experiments/synthetic_128_128.txt --engine_dir models/Llama2-7b-trt-engine
[TensorRT-LLM] TensorRT-LLM version: 0.15.0.dev2024111200
…