mzywl opened this issue 4 months ago
Hi, please check this out.
Here are the full parameters of run.py:
```
usage: run.py [-h] [--config CONFIG] [--org ORG] [--task TASK] [--name NAME] [--model MODEL]

argparse

optional arguments:
  -h, --help       show this help message and exit
  --config CONFIG  Name of config, which is used to load configuration under CompanyConfig/; Please see CompanyConfig Section below
  --org ORG        Name of organization, your software will be generated in WareHouse/name_org_timestamp
  --task TASK      Prompt of your idea
  --name NAME      Name of software, your software will be generated in WareHouse/name_org_timestamp
  --model MODEL    GPT Model, choose from {'GPT_3_5_TURBO','GPT_4','GPT_4_32K'}
```
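For reference, a typical invocation combines these flags on one command line. The task description and software name below are placeholder values, and the config/organization names are only the usual defaults, so check the defaults in your own checkout of run.py:

```
python3 run.py --task "Design a basic todo list application" --name "TodoApp" --org "DefaultOrganization" --config "Default" --model "GPT_3_5_TURBO"
```

Per the --name and --org descriptions above, the generated software then appears under WareHouse/ in a folder named after the software name, organization, and timestamp.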
This is an automated vacation reply from QQ Mail. Hello, I am currently on vacation and cannot reply to your email in person. I will get back to you as soon as possible after the vacation ends.
How to call different models at different stages
Currently, we do not support calling different LLM backends in different phases. You can refer to https://github.com/OpenBMB/ChatDev/issues/27 for more discussion.
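If you want to experiment with this in your own fork, one possible direction (purely a sketch, not part of ChatDev's API) is to keep a mapping from phase name to model name and look it up wherever the backend is selected. The names PHASE_MODEL_MAP and model_for_phase below are hypothetical:

```python
# Hypothetical sketch only: ChatDev does not currently expose per-phase model
# selection; PHASE_MODEL_MAP and model_for_phase are illustrative names.
PHASE_MODEL_MAP = {
    "DemandAnalysis": "GPT_3_5_TURBO",
    "Coding": "GPT_4",
    "ArtDesign": "GPT_4",  # e.g. a different backend for the art phase
}

def model_for_phase(phase_name: str, default: str = "GPT_3_5_TURBO") -> str:
    """Return the model name to use for a given phase, falling back to a default."""
    return PHASE_MODEL_MAP.get(phase_name, default)

print(model_for_phase("Coding"))      # GPT_4
print(model_for_phase("CodeReview"))  # GPT_3_5_TURBO (fallback)
```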
How to call other models in the art phase