@pull-ups If you dive into the code, you can see that each of the agents does quite a few things in one full "cycle": they observe events, react to them, make plans (if they don't already have one), respond to messages, and take an action. Each of these steps requires an API call, which can get expensive after a while...
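To make the cost model concrete, here is a rough sketch of how calls add up per cycle. The function names and agent structure are illustrative, not the repo's actual API; the point is just that each step in the cycle costs one LLM call, so N agents mean roughly N times that many calls per tick:

```python
def run_cycle(agent, events, llm):
    """One full agent cycle; each step costs one LLM call in this sketch."""
    calls = []
    calls.append(llm(f"{agent['name']}: observe {events}"))   # observe events
    calls.append(llm(f"{agent['name']}: react"))              # react to them
    if agent.get("plan") is None:
        agent["plan"] = llm(f"{agent['name']}: make a plan")  # plan only if missing
        calls.append(agent["plan"])
    calls.append(llm(f"{agent['name']}: respond to messages"))
    calls.append(llm(f"{agent['name']}: take an action"))
    return len(calls)

# Stub LLM so the sketch runs without hitting any API.
fake_llm = lambda prompt: f"reply({prompt})"
agent = {"name": "alice", "plan": None}
print(run_cycle(agent, ["it starts raining"], fake_llm))  # → 5 (plan was missing)
print(run_cycle(agent, ["it starts raining"], fake_llm))  # → 4 (plan is cached)
```

So with, say, 10 agents, a single simulation tick can cost on the order of 40-50 API calls, which is why the model choice matters so much for cost.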
To reduce the cost when running, you can modify it to use a local LLM, or alternatively you can switch the DEFAULT_SMART_MODEL parameter here to turbo, so it will use gpt-3.5-turbo for the API calls, which is roughly 10x cheaper.
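The switch described above amounts to something like the following. This is a hedged sketch, not the repo's actual code; `DEFAULT_SMART_MODEL` is the real parameter name mentioned above, but `pick_model` and the setting's location are assumptions for illustration:

```python
# Illustrative sketch of the model switch; the real parameter lives in the
# repo's settings module.
DEFAULT_SMART_MODEL = "gpt-4"  # the default, most capable and most expensive

def pick_model(use_turbo: bool) -> str:
    """Return the model name for API calls.

    gpt-3.5-turbo is roughly 10x cheaper per token than gpt-4, at some
    cost in output quality.
    """
    return "gpt-3.5-turbo" if use_turbo else DEFAULT_SMART_MODEL

print(pick_model(True))   # → gpt-3.5-turbo
print(pick_model(False))  # → gpt-4
```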
@pull-ups You can now pass the turbo flag when running, `poetry run world --turbo`, to use gpt-3.5-turbo.
It's very good to see open-sourced code of the "generative agents" research! I have a question about this.
In this model, how frequently is the ChatGPT API called? Once for each action/conversation generated by each agent? If so, is the number of ChatGPT API calls the same as the number of agents, or can one API call control all agents at once? Thanks.