-
Can you add beam search to your code?
https://towardsdatascience.com/temperature-scaling-and-beam-search-text-generation-in-llms-for-the-ml-adjacent-21212cc5dddb#e148
https://github.com/mikecvet/be…
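Without knowing the exact generation loop in the linked repo, here is a minimal, model-agnostic beam search sketch. The callback `step_log_probs` is a hypothetical hook: it should return `(token, log_prob)` candidates for the next token given a prefix, backed by whatever model the repo actually uses.

```python
from typing import Callable, List, Sequence, Tuple

def beam_search(
    start: Sequence[int],
    step_log_probs: Callable[[Sequence[int]], List[Tuple[int, float]]],
    beam_width: int = 3,
    max_steps: int = 10,
    eos_token: int = -1,
) -> List[Tuple[List[int], float]]:
    """Generic beam search over token sequences.

    `step_log_probs(prefix)` returns (token, log_prob) candidates for the
    next token given a prefix; the caller plugs their model in here.
    """
    beams: List[Tuple[List[int], float]] = [(list(start), 0.0)]
    finished: List[Tuple[List[int], float]] = []

    for _ in range(max_steps):
        # Expand every live beam with its candidate next tokens.
        candidates: List[Tuple[List[int], float]] = []
        for seq, score in beams:
            for token, logp in step_log_probs(seq):
                candidates.append((seq + [token], score + logp))

        # Keep only the top `beam_width` hypotheses by total log-probability.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_width]:
            if seq[-1] == eos_token:
                finished.append((seq, score))
            else:
                beams.append((seq, score))
        if not beams:
            break

    return sorted(finished + beams, key=lambda b: b[1], reverse=True)
```

Length normalization (dividing each score by the hypothesis length) is a common refinement so longer sequences are not penalized, and the temperature scaling described in the linked article can be applied inside `step_log_probs` before the log-probs are returned.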
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to find a …
-
AS A product manager
I WANT the data to be indexed
SO THAT it is searchable by a similarity search
AC:
- [ ] The cleaned data is vectorized (embedded) and added to a vector database (see the sketch after this list).
- [ ] The vector database is u…
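A minimal sketch of what the indexing step could look like, assuming sentence-transformers for the embeddings and Chroma as the vector database; the model name, collection name, and ID scheme are placeholders, not project decisions.

```python
import chromadb
from sentence_transformers import SentenceTransformer

# Placeholder model and collection names; swap in whatever the project settles on.
model = SentenceTransformer("all-MiniLM-L6-v2")
client = chromadb.Client()  # in-memory store; use chromadb.PersistentClient for disk
collection = client.get_or_create_collection("clean_docs")

def index_documents(docs: list[str]) -> None:
    """Embed the cleaned documents and add them to the vector database."""
    embeddings = model.encode(docs).tolist()
    collection.add(
        ids=[f"doc-{i}" for i in range(len(docs))],
        documents=docs,
        embeddings=embeddings,
    )

def similarity_search(query: str, k: int = 5) -> list[str]:
    """Return the k documents most similar to the query."""
    query_embedding = model.encode([query]).tolist()
    result = collection.query(query_embeddings=query_embedding, n_results=k)
    return result["documents"][0]
```

`index_documents` covers the embed-and-add step; `similarity_search` demonstrates the data being retrieved through a similarity query.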
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as o…
-
### Is your feature request related to a problem? Please describe.
llm_config contains 3 functions:
- search()
- delete()
- download()
I'd like to filter these functions at the agent level, like this (see the sketch after this snippet):
search_…
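A hedged sketch of the requested behaviour: trimming the shared function list down to the subset a given agent is allowed to call. The helper `filter_functions` and the config shape (an OpenAI-style `functions` list of schemas with a `name` field) are assumptions for illustration, not an existing API.

```python
import copy

def filter_functions(llm_config: dict, allowed: set[str]) -> dict:
    """Return a copy of llm_config whose function list only keeps
    the functions whose names appear in `allowed`.

    Assumes llm_config["functions"] is a list of function/tool schemas
    with a "name" field, as in the OpenAI function-calling format.
    """
    filtered = copy.deepcopy(llm_config)
    filtered["functions"] = [
        fn for fn in llm_config.get("functions", [])
        if fn.get("name") in allowed
    ]
    return filtered

# Example: an agent that may only search, not delete or download.
llm_config = {
    "model": "gpt-4o",
    "functions": [
        {"name": "search", "parameters": {}},
        {"name": "delete", "parameters": {}},
        {"name": "download", "parameters": {}},
    ],
}
search_only_config = filter_functions(llm_config, {"search"})
```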
-
Knowledge base retrieval is very slow, roughly 20 s per query. Could it be that the embedding and reranking models are not using the GPU?
![image](https://github.com/user-attachments/assets/41269d8b-3d5b-4196-9d6d-68538f3da7a1)
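One quick way to check whether the embedding and reranking models are actually on the GPU, assuming a sentence-transformers-based stack; the model names below are placeholders for whatever the knowledge base is configured with.

```python
import torch
from sentence_transformers import SentenceTransformer, CrossEncoder

# If this prints False, both embedding and reranking will run on CPU.
print("CUDA available:", torch.cuda.is_available())

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model names; substitute the ones the knowledge base uses.
embedder = SentenceTransformer("BAAI/bge-large-zh-v1.5", device=device)
reranker = CrossEncoder("BAAI/bge-reranker-large", device=device)

print("Embedder device:", next(embedder.parameters()).device)
print("Reranker device:", next(reranker.model.parameters()).device)
```

If CUDA is unavailable inside the serving environment (for example, a CPU-only container), both models fall back to CPU, which could explain retrieval times around 20 s.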
-
### Do you need to file an issue?
- [x] I have searched the existing issues and this bug is not already filed.
- [x] My model is hosted on OpenAI or Azure. If not, please look at the "model providers…
-
### Project Name
educAIte
### Description
## Project Overview
EducAIte is a web application designed to simplify text extraction and document interaction, specifically for educational purposes. By…
-
### Duplicates
- [X] I have searched the existing issues
### Summary 💡
The LLM call blocks should have a dropdown for model and provider, so that users can use specific providers for mode…
-
### System Info
CPU: Intel 14700K
GPU: RTX 4090
tensorrt_llm: 0.13
Docker image: tritonserver:24.09-trtllm-python-py3
### Who can help?
@Tracin
### Information
- [X] The official example scri…