wugren closed this issue 2 months ago
I believe this needs to be analyzed case by case. My view is that unless there are very strict accuracy requirements that must be computed exactly, the LLM should be used wherever it can be. Treated as a kind of CPU, the robustness of an LLM is really impressive.
In many applications, deterministic results are required, but an LLM cannot guarantee a deterministic result every time; at least that has been my experience. What we are building is a system. If a user wants to develop an app on top of it but simply cannot find the right prompt, you cannot tell the user that the system is not capable enough and they should stop developing. You should give the user a way to solve the problem.
In version 0.5.2, we are adding more flexibility in how the LLM Process and Agent behaviors are configured. This should give users more room to adapt the system to their specific needs. Thank you for your continued support!
Since there are still some flaws in current LLM capabilities, it is difficult to implement some complex logic through LLM decision-making alone; implementing that logic in code is more stable.
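The hybrid approach described above can be sketched roughly as follows. This is a minimal illustration, not this project's actual design; all names (`handle`, `llm_classify`) are hypothetical. Deterministic logic lives in plain code, so equal inputs always produce equal outputs, and only fuzzy, free-form input falls through to the LLM:

```python
import re

def llm_classify(text: str) -> str:
    """Stand-in for a real LLM call; a production system would invoke a model here."""
    return "unknown: " + text

def handle(request: str) -> str:
    # Deterministic path: exact commands are matched and computed in code,
    # so the same input always yields the same output.
    m = re.fullmatch(r"add (\d+) (\d+)", request)
    if m:
        return str(int(m.group(1)) + int(m.group(2)))
    # Fuzzy path: anything unstructured is routed to the LLM.
    return llm_classify(request)

print(handle("add 2 3"))         # deterministic: always "5"
print(handle("what is 2 + 3?"))  # routed to the LLM stub
```

The point is only the routing: the parts with clear accuracy requirements never depend on model output, while the LLM handles the inputs code cannot anticipate.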