BIDARA is a GPT-4o chatbot that was instructed to help scientists and engineers understand, learn from, and emulate the strategies used by living things to create sustainable designs and technologies using the Biomimicry Institute's step-by-step design process.
OpenAI recommends this model for both text and vision, so it could replace gpt-4-1106-preview and gpt-4-1106-vision-preview, which we currently use for text and vision respectively.
I've been testing gpt-4o a bit. It improves on gpt-4-1106-preview significantly in a few ways, but also takes a small step back in others.
**Good**

- Far better structure in responses
  - Headers, lists, clearly stating which step you're in, etc.
  - Makes responses much more readable/digestible
- Response quality seems comparable, though it "feels" better due to the structure
- Time to response is noticeably faster
  - Though not fast enough to rule out implementing streaming
- Cheaper model
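For reference, the streaming mentioned above could look something like this. This is only a sketch: `collect_stream` is a hypothetical helper name, and the commented usage assumes the `openai` Python SDK's chat-completions streaming interface.

```python
def collect_stream(chunks):
    """Join the non-empty text deltas from a streamed chat completion."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # deltas can be None (e.g. role-only or final chunks)
            parts.append(delta)
    return "".join(parts)

# Hypothetical usage -- requires the openai package, an API key, and network:
#
#   from openai import OpenAI
#   client = OpenAI()
#   stream = client.chat.completions.create(
#       model="gpt-4o",
#       messages=[{"role": "user", "content": "..."}],
#       stream=True,
#   )
#   for chunk in stream:
#       ...  # render chunk.choices[0].delta.content as it arrives
```

In practice we'd render each delta as it arrives rather than collecting them, which is what makes the perceived latency so much lower.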
**Bad**

- With the current prompt, it often does not "Stop often (at a minimum after every step) to ask the user for feedback or clarification."
  - It will go from one step directly to the next without conferring with the user, typically combining "Discover" and "Abstract" into one response.
  - I've yet to determine the cause or a fix, but I'll be testing some changes to the prompt (possibly as simple as repeating the above instruction toward the end).
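The "repeat the instruction toward the end" idea above could be as simple as the sketch below. The function name and the `IMPORTANT:` prefix are my own assumptions, not anything from the BIDARA prompt.

```python
STOP_INSTRUCTION = (
    "Stop often (at a minimum after every step) to ask the user "
    "for feedback or clarification."
)

def reinforce_stop_instruction(system_prompt: str) -> str:
    """Repeat the stop instruction at the end of the system prompt,
    where the model may be more likely to attend to it. Idempotent:
    applying it twice does not duplicate the instruction."""
    prompt = system_prompt.rstrip()
    if prompt.endswith(STOP_INSTRUCTION):
        return prompt
    return prompt + "\n\nIMPORTANT: " + STOP_INSTRUCTION
```

Whether this actually fixes the step-skipping is exactly what I'll be testing.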
Requires v2 of the Assistants API (#98).
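For anyone making raw HTTP calls rather than using the SDK: to my understanding, v2 of the Assistants API is opted into via the `OpenAI-Beta: assistants=v2` request header (recent `openai` SDK versions send it automatically when using `client.beta.*`). A minimal sketch, with a hypothetical helper name:

```python
def assistants_v2_headers(api_key: str) -> dict:
    """Headers for a raw HTTP request against the v2 Assistants API."""
    return {
        "Authorization": f"Bearer {api_key}",
        "OpenAI-Beta": "assistants=v2",  # opt in to Assistants API v2
        "Content-Type": "application/json",
    }
```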