BAAI-Agents / Cradle

The Cradle framework is a first attempt at General Computer Control (GCC). Cradle supports agents in acing any computer task by enabling strong reasoning abilities, self-improvement, and skill curation, in a standardized general environment with minimal requirements.
https://baai-agents.github.io/Cradle/
MIT License

Question about few shots #9

Closed · qinxiangyujiayou closed this 5 months ago

qinxiangyujiayou commented 6 months ago

Thank you very much for your outstanding work and for sharing the code so promptly and efficiently. I have a small question I would like to ask. In the OpenAI prompt-handling code:

```python
# Locate the paragraph in the prompt template that carries the image-input tag.
for i, paragraph in enumerate(filtered_paragraphs):
    if constants.IMAGES_INPUT_TAG in paragraph:
        image_introduction_paragraph_index = i
        image_introduction_paragraph = paragraph
        break
```

```python
paragraph_input = params.get(constants.IMAGES_INPUT_TAG_NAME, None)
```

It looks as though the few-shots part of the prompt is not processed as images. Is the few-shots part fed in directly as text, or have I misunderstood the code? Since I have not purchased the game, I cannot run the code to verify this for now; I would be very grateful if you could clarify. Thanks again for your excellent work.

DVampire commented 6 months ago

Hi, a message that we send to the GPT-4V API is divided into 4 parts:

  1. system message: Provides a comprehensive introduction for GPT-4V, mainly describing the game GPT-4V is currently playing and its role.
  2. user message part 1: This is the text that precedes the few-shot examples containing images, such as the current task definition and description. Since it cannot serve as an instruction for an image, we place this preceding description in its own user message, following the logical order.
  3. image introduction message: This includes the few-shot examples, the images, and their instructions. Because few-shot examples may contain replies from GPT-4V as assistant messages, we combine the few shots with the new images and their prompts in this part of the message.
  4. user message part 2: This is the specific prompt, for example observable information, historical reflection information, and the constraints and format of the output.

Finally, all the message items are combined into one complete message to call the GPT-4V API and get a response (see the sketch below).
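As a minimal sketch only (this is not Cradle's actual code), the four parts above could be assembled for the OpenAI chat-completions API roughly as follows; the game text, file names, and the `encode_image` helper are illustrative placeholders:

```python
import base64

def encode_image(path: str) -> str:
    # Encode a local screenshot as a base64 data URL for the vision API.
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

messages = [
    # 1. system message: game description and role positioning
    {"role": "system", "content": "You are playing <game>. Your role is ..."},
    # 2. user message part 1: task definition preceding the few-shot examples
    {"role": "user", "content": "Current task: ..."},
    # 3. image introduction: a few-shot example (user image + assistant reply),
    #    followed by the new screenshot and its instruction
    {"role": "user", "content": [
        {"type": "text", "text": "Example: here is a screenshot and ..."},
        {"type": "image_url", "image_url": {"url": encode_image("example.png")}},
    ]},
    {"role": "assistant", "content": "Example answer for the few-shot image."},
    {"role": "user", "content": [
        {"type": "text", "text": "This is the current screenshot."},
        {"type": "image_url", "image_url": {"url": encode_image("current.png")}},
    ]},
    # 4. user message part 2: observations, reflection history, output format
    {"role": "user", "content": "Observations: ...\nOutput format: ..."},
]
```

Interleaving a user image message with an assistant reply is what turns an image into a few-shot example rather than a fresh observation.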

You can refer to the prompt templates: https://github.com/BAAI-Agents/Cradle/tree/main/res/prompts

A minor note: To ensure everyone can participate in the discussion, it would be better if communication could be in English. Thank you very much for your attention and support.

qinxiangyujiayou commented 6 months ago

Thank you very much for your timely reply and patient explanation. Fortunately, your explanation of the prompt handling closely matches my prior understanding. However, one point still puzzles me:

In the section of your code related to the image introduction message:

```python
paragraph_input = params.get(constants.IMAGES_INPUT_TAG_NAME, None)
```

If I understand correctly, it extracts only the content tagged IMAGES_INPUT_TAG_NAME (i.e., 'image_introduction') from params. Yet I have not found the code responsible for extracting the few_shots sections, such as the one in decision_making.json:

"few_shots": [ { "introduction": "Here are some examples of the positions of the bounding box shown in the image.", "path": "", "assistant": "" },…]

DVampire commented 6 months ago

The few_shots field is reserved for future extensions; it is not used in the current version.

In the current version, few-shot examples are included in the image_introduction, so only the image_introduction field is parsed (see the sketch below).
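As an illustration only (this is not the repository's actual parser), entries shaped like the JSON above could be expanded into chat messages along these lines; the field handling is an assumption based on the introduction/path/assistant fields shown in decision_making.json:

```python
import base64

def encode_image(path: str) -> str:
    # Encode a local image as a base64 data URL (illustrative helper).
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

def parse_image_introduction(entries: list[dict]) -> list[dict]:
    messages = []
    for entry in entries:
        content = [{"type": "text", "text": entry["introduction"]}]
        if entry.get("path"):
            # An entry with an image path contributes an image part.
            content.append({"type": "image_url",
                            "image_url": {"url": encode_image(entry["path"])}})
        messages.append({"role": "user", "content": content})
        if entry.get("assistant"):
            # A non-empty assistant field makes the entry a few-shot example
            # by appending the expected model reply.
            messages.append({"role": "assistant", "content": entry["assistant"]})
    return messages
```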

Thank you.