One feature I think this is missing is a "script" or UI that supports testing/comparing models and temperatures. It might be easier to do with a script that just outputs the responses. The use case is testing various prompts and models easily: I would want to send the same prompt to gpt-4, gpt-3.5-turbo, and llama, possibly at various temperatures, to compare the responses.
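Something like this rough sketch is what I have in mind (assuming the `openai` Python client v1+ and an `OPENAI_API_KEY` in the environment; the model list and temperatures are just placeholders, and llama would need its own backend or an OpenAI-compatible endpoint):

```python
# Sketch: send the same prompt to several models at several temperatures and print the responses.
from itertools import product

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODELS = ["gpt-4", "gpt-3.5-turbo"]   # llama would need a separate client or base_url
TEMPERATURES = [0.2, 0.7, 1.0]


def compare(prompt: str) -> None:
    """Run one prompt across every model/temperature combination and print each response."""
    for model, temperature in product(MODELS, TEMPERATURES):
        resp = client.chat.completions.create(
            model=model,
            temperature=temperature,
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {model} @ temperature={temperature} ---")
        print(resp.choices[0].message.content)
        print()


if __name__ == "__main__":
    compare("Explain the difference between a list and a tuple in Python.")
```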