Hi,

Thank you for maintaining this valuable repo!

Our recent work ToolEmu uses language models to emulate environments and external tool executions to identify the risks of language agents, which is highly relevant to the topics covered in this repo. We'd appreciate its inclusion!

I've added our work to several relevant sections, including:

- Trustworthy: ToolEmu tests the risks of LLM agents with external tool use

Please feel free to modify or reposition any of the content I've added as you see fit.

Thanks,
Yangjun