-
### Feature Request
Add the ability to copy or clone a saved chat with an AI model, so that we end up with two identical chats. This could be used to experiment with different questions from the exact same starting…
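A minimal sketch of what such cloning could look like, assuming chats are stored as plain message lists (the `Chat` class, its fields, and the `clone` method here are hypothetical, not from any existing implementation):

```python
import copy
from dataclasses import dataclass, field

@dataclass
class Chat:
    # Hypothetical chat container: a title plus an ordered message list.
    title: str
    messages: list = field(default_factory=list)

    def clone(self) -> "Chat":
        # Deep-copy so later edits to either chat never leak into the other.
        return Chat(title=f"{self.title} (copy)",
                    messages=copy.deepcopy(self.messages))

original = Chat("Experiment", [{"role": "user", "content": "hello"}])
branch = original.clone()
branch.messages.append({"role": "user", "content": "a different follow-up"})
# original keeps 1 message; branch now has 2 and can diverge freely
```

The deep copy is the important part: a shallow copy would share the message dicts, so editing one chat would silently mutate the other.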
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
`response, history = model.chat(tokenizer, "你好", history=[])` raises `TypeError: 'NoneType' object i…`
-
best_model = tuner.get_best_models(num_models=1)[0]  # get_best_models returns a list; take the first entry
best_hp = tuner.get_best_hyperparameters()[0]
hypermodel = MyHyperModel()
model = hypermodel.build(best_hp)
hypermodel.fit(
best_hp, mod…
-
### Request for Assistance: Implementing AI Response Function with History Management
#### Objective
I am working on a function that should take a `String` parameter (prompt) and return a `String`…
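One possible shape for such a function is a small session object that appends each prompt/response pair to an internal history before and after calling the model. The `ChatSession` class and its echo backend below are a hypothetical sketch, assuming messages are stored as role/content dicts; a real implementation would plug an actual LLM client into `backend`:

```python
class ChatSession:
    """Sketch of a prompt-in, reply-out function with history management."""

    def __init__(self, backend=None):
        # `backend` stands in for any callable mapping a message list to a
        # reply string (an LLM client, a stub for tests, ...).
        self.backend = backend or (lambda msgs: f"echo: {msgs[-1]['content']}")
        self.history = []

    def ask(self, prompt: str) -> str:
        # Record the prompt first so the backend sees the full conversation.
        self.history.append({"role": "user", "content": prompt})
        reply = self.backend(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

session = ChatSession()
answer = session.ask("hi")
# history now holds both the user prompt and the assistant reply
```

Keeping history inside the session (rather than in a global) makes it easy to run several independent conversations at once.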
-
### Self Checks
- [X] This is only for bug report, if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I have s…
-
- [ ] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question.
**Your Question**
How can I integrate EvaluatorChain with an existing Ru…
-
### System Info
Uncaught exception:
Traceback (most recent call last):
  File "D:\Big_model\ChatGLM\GLM-4-main\composite_demo\src\main.py", line 288, in main
    for response, chat_history in client…
-
Looking for some advice on how to implement revision history.
### User Story
A user/moderator clicks on a series page. They see that another user has spammed the page with nonsense (or information t…
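One common way to support this user story is append-only snapshots: every edit adds a revision, and a revert is itself a new revision pointing at old content, so the spam stays in the audit trail. The `Page`/`Revision` classes below are a hypothetical sketch, not an existing schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    # One immutable snapshot of the page (field names are illustrative).
    content: str
    author: str
    created_at: datetime

@dataclass
class Page:
    revisions: list = field(default_factory=list)

    @property
    def current(self) -> str:
        # The newest revision is always the live content.
        return self.revisions[-1].content

    def edit(self, content: str, author: str) -> None:
        # Edits append; nothing is ever overwritten or deleted.
        self.revisions.append(
            Revision(content, author, datetime.now(timezone.utc)))

    def revert_to(self, index: int, moderator: str) -> None:
        # Reverting creates a new revision, keeping the vandalism auditable.
        self.edit(self.revisions[index].content, moderator)

page = Page()
page.edit("Good series description", "alice")
page.edit("SPAM SPAM SPAM", "troll")
page.revert_to(0, "moderator")
# current content is restored; all three revisions remain in the log
```

In a real database this maps naturally to a `revisions` table keyed by page ID plus a monotonically increasing revision number.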
-
Model Performance Improvements
- [x] switch the text-completion model from the paid gpt-4o to groq-llama3.1-70b-versatile; reason: fast inference, free but rate-limited, better for deployment.
- [x] chang…
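Since the free tier is rate-limited, calls to the new model usually need a retry-with-backoff wrapper. The helper below is a generic sketch (the `with_backoff` name and the `is_rate_limited` hook are hypothetical; a real client would plug in its own error check):

```python
import time

def with_backoff(call, max_retries=3, base_delay=0.01,
                 is_rate_limited=lambda exc: True, sleep=time.sleep):
    # Retry `call` on rate-limit errors with exponential backoff;
    # re-raise immediately for other errors or once retries run out.
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception as exc:
            if attempt == max_retries or not is_rate_limited(exc):
                raise
            sleep(base_delay * (2 ** attempt))

# Demo with a stub that fails twice before succeeding.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("429 rate limited")
    return "ok"

result = with_backoff(flaky)
# succeeds on the third attempt
```

Passing `sleep` as a parameter keeps the helper testable without real delays.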
-
Hi @Gofinge, I was just about to ask how to measure latency and memory usage when I saw question #221. Thank you for your answer, but I still don't understand how to measure latency and memory usa…
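As a general-purpose starting point (not the project's own tooling), latency and CPU-side peak memory of a single call can be measured in Python with `time.perf_counter` and `tracemalloc`; note that `tracemalloc` does not see GPU memory, for which something like `torch.cuda.max_memory_allocated` would be needed instead:

```python
import time
import tracemalloc

def measure(fn, *args, **kwargs):
    # Wall-clock latency via perf_counter, peak Python heap via tracemalloc.
    tracemalloc.start()
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    latency_s = time.perf_counter() - start
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, latency_s, peak_bytes

# Demo on a cheap stand-in workload.
result, latency_s, peak_bytes = measure(sum, range(100_000))
```

For model inference, the call would typically be wrapped around a single forward pass, averaged over several warmed-up runs.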