BIDARA is a GPT-4-based chatbot instructed to help scientists and engineers understand, learn from, and emulate the strategies used by living things to create sustainable designs and technologies, following the Biomimicry Institute's step-by-step design process.
The agent output from each step is chained together, so the context window is occasionally exceeded. A possible solution is to safeguard the context window by always passing only the last 2048 tokens (or whatever the size of the model's context window is) to the model.
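A minimal sketch of this safeguard, using whitespace splitting as a stand-in tokenizer (in practice you would encode/decode with the model's real tokenizer, e.g. tiktoken for GPT-4; the function and step names here are hypothetical):

```python
def truncate_to_window(text: str, max_tokens: int = 2048) -> str:
    """Keep only the last `max_tokens` tokens of the chained step output.

    Whitespace splitting is a stand-in for the model's real tokenizer;
    with GPT-4 you would encode/decode with tiktoken instead.
    """
    tokens = text.split()
    if len(tokens) <= max_tokens:
        return text
    return " ".join(tokens[-max_tokens:])

# Chain step outputs together, trimming before each model call so the
# prompt never exceeds the context window.
history = ""
for step_output in ["Biologize: ...", "Discover: ...", "Emulate: ..."]:
    history = truncate_to_window(history + " " + step_output, max_tokens=2048)
```

Trimming from the front keeps the most recent steps intact, which is usually what the next step needs most.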
Output is somewhat lengthy; reducing the number of tokens generated per step may help.
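One way to enforce this is a hard cap on generation per step. A sketch, assuming the OpenAI chat completions API (the `max_tokens` field is that API's generation cap; the helper name and cap value are illustrative):

```python
def build_step_request(system_prompt: str, step_prompt: str,
                       max_tokens: int = 512) -> dict:
    """Request payload for one design step.

    Field names follow the OpenAI chat completions API; `max_tokens`
    caps how many tokens the model may generate for this step.
    """
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": step_prompt},
        ],
        "max_tokens": max_tokens,  # hard cap on tokens generated this step
    }
```

Pairing the cap with a system-prompt instruction to be concise avoids mid-sentence cutoffs.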
Adding more tools to the agent, such as Google Search, Google Scholar paper search, AskNature paper search, or other academic databases.
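A framework-agnostic sketch of how such tools could be registered and dispatched (all tool names and stub bodies here are hypothetical; real implementations would call the respective search APIs, and in LangChain each would be wrapped as a `Tool` with a description):

```python
from typing import Callable, Dict

# Hypothetical stubs -- real versions would call the Google,
# Google Scholar, and AskNature search APIs.
def google_search(query: str) -> str:
    return f"[web results for: {query}]"

def scholar_search(query: str) -> str:
    return f"[papers matching: {query}]"

def asknature_search(query: str) -> str:
    return f"[AskNature strategies for: {query}]"

TOOLS: Dict[str, Callable[[str], str]] = {
    "GoogleSearch": google_search,
    "ScholarSearch": scholar_search,
    "AskNatureSearch": asknature_search,
}

def run_tool(name: str, query: str) -> str:
    """Dispatch an agent tool call by name."""
    return TOOLS[name](query)
```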
Through my testing, I've noticed that the agent tends to perform the Paper Retrieval step toward the end of its reasoning (i.e. the agent goes through Biologize -> Discover -> Emulate -> ... -> Paper Retrieval) rather than using Paper Retrieval to guide the reasoning process (i.e. Paper Retrieval -> Emulate -> ...). A solution to this could be instructing the agent within the system prompt to use the Paper Retrieval tool first. An example of tool-priority prompting within LangChain is outlined here: https://python.langchain.com/docs/modules/agents/tools/custom_tools#defining-the-priorities-among-tools
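Following the approach in the linked LangChain guide, where tool priority is expressed in natural language inside the tool description and the system prompt, a sketch (the description and prompt text below are hypothetical wording, not BIDARA's actual prompts):

```python
# Hypothetical tool description -- the priority instruction lives in the
# text the agent sees, per the linked LangChain custom-tools guide.
PAPER_RETRIEVAL_DESCRIPTION = (
    "Search for relevant biology papers. Use this tool FIRST, before the "
    "Biologize, Discover, and Emulate steps, so the retrieved papers can "
    "guide the rest of the reasoning process."
)

SYSTEM_PROMPT_SUFFIX = (
    "Always begin by calling the Paper Retrieval tool, and ground the "
    "remaining design steps in the papers it returns."
)

def build_system_prompt(base_prompt: str) -> str:
    """Append the tool-priority instruction to the agent's system prompt."""
    return base_prompt.rstrip() + "\n\n" + SYSTEM_PROMPT_SUFFIX
```

Reinforcing the priority in both the tool description and the system prompt makes it more likely the agent retrieves papers before reasoning rather than after.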
Potential Next Steps for Future Work: