JetQin / nextjs-dash

MIT License

Screenshots #1

JetQin opened this issue 1 year ago

JetQin commented 1 year ago

[screenshots: screen1, screen2]


JetQin commented 3 months ago

1. In prompt engineering, "zero-shot" means ( )
   A. The model processes tasks without any examples.
   B. The model requires multiple examples to complete the task.
   C. Only a small number of examples are learned.
   D. None of the above.
   Answer: A

2. The main difference between "few-shot" learning and "zero-shot" learning is that ( )
   A. "Few-shot" requires more example data.
   B. "Zero-shot" requires more training.
   C. "Few-shot" learning uses a smaller number of examples.
   D. "Few-shot" does not rely on example data.
   Answer: C

3. In the application of language models, "zero-shot" is often used in ( )
   A. Complex multi-task processing.
   B. Simple and general tasks.
   C. Professional tasks in specific fields.
   D. Cannot be applied to actual tasks.
   Answer: B

4. In which of the following situations is "few-shot" more suitable than "zero-shot"? ( )
   A. The task is highly general.
   B. The task requires very precise results.
   C. Data resources are extremely scarce.
   D. The model has undergone a large amount of training.
   Answer: B

5. For new and uncommon tasks, the performance of "zero-shot" is usually ( )
   A. Better than "few-shot".
   B. The same as "few-shot".
   C. Worse than "few-shot".
   D. Depends on the model architecture.
   Answer: C

6. When conducting few-shot learning, the distribution of examples may ( )
   A. Always affect the performance of the model
   B. Never affect the performance of the model
   C. Possibly affect the performance of the model in some way
   D. Only have an impact on specific models
   Answer: C

7. To avoid bias in the model in a sentiment classification task, it is recommended to ( )
   A. Provide more negative examples
   B. Provide more positive examples
   C. Provide a balanced number of positive and negative examples
   D. Only use a small number of examples
   Answer: C

8. When conducting few-shot learning, the order of examples may ( )
   A. Always affect the performance of the model
   B. Never affect the performance of the model
   C. Possibly affect the performance of the model in some way
   D. Only have an impact on complex tasks
   Answer: C

9. To reduce the impact of the example order on the model, it is recommended to ( )
   A. Arrange the examples in a specific order
   B. Arrange the positive examples first and then the negative examples
   C. Arrange the negative examples first and then the positive examples
   D. Randomly arrange the examples
   Answer: D

10. For more difficult tasks that the model is less familiar with, it may ( )
    A. Be more prone to bias
    B. Be less prone to bias
    C. Be the same as simple tasks
    D. Not be affected by the distribution and order of examples
    Answer: A
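Questions 7 and 9 above boil down to two practical tips: balance the classes in your demonstrations and shuffle their order. A minimal Python sketch of that idea (the example data and function name are invented for illustration):

```python
import random

# Illustrative labeled demonstrations for a sentiment-classification prompt.
POSITIVE = [("I love this product!", "positive"), ("Great service.", "positive")]
NEGATIVE = [("Terrible experience.", "negative"), ("Would not buy again.", "negative")]

def build_few_shot_prompt(query: str) -> str:
    """Build a few-shot prompt with class-balanced, randomly ordered examples."""
    examples = POSITIVE + NEGATIVE   # balanced: equal positives and negatives
    random.shuffle(examples)         # random order, so ordering doesn't bias the model
    blocks = [f"Text: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Text: {query}\nSentiment:")
    return "\n\n".join(blocks)

print(build_few_shot_prompt("The checkout flow was smooth."))
```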

JetQin commented 3 months ago

1. When using an LLM, which parameter can be adjusted to control the randomness of the model's output?
   A. Temperature
   B. Top P
   C. Max Length
   D. Stop Sequences

2. For tasks like fact-based QA, which value of temperature is more suitable?
   A. Lower
   B. Higher
   C. It doesn't matter
   D. Depends on the specific situation

3. If you want to generate more diverse responses, which parameter should you increase?
   A. Temperature
   B. Top P
   C. Max Length
   D. Stop Sequences

4. Which of the following is the function of the frequency penalty?
   A. To increase the repetition of words
   B. To reduce the repetition of words
   C. To control the length of the response
   D. To control the structure of the response

5. When using the frequency or presence penalty, what is the general recommendation?
   A. Alter both of them
   B. Alter the frequency penalty but not the presence penalty
   C. Alter the presence penalty but not the frequency penalty
   D. Alter either the frequency or presence penalty but not both

JetQin commented 3 months ago

1. A: Temperature can be adjusted to control the randomness of the model's output.
2. A: For fact-based QA, a lower temperature is more suitable to get more deterministic and consistent responses.
3. A: Increasing the temperature parameter leads to more diverse responses.
4. B: The frequency penalty is used to reduce the repetition of words.
5. D: When using the frequency or presence penalty, it's recommended to alter either one but not both.
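For reference, here is how those knobs appear in a Chat Completions call with the OpenAI Python SDK; the values below are illustrative, not recommendations:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "In what year was the transistor invented?"}],
    temperature=0.2,        # low temperature: more deterministic, good for fact-based QA
    top_p=1.0,              # nucleus sampling; common advice is to tune this OR temperature, not both
    max_tokens=100,         # "Max Length": caps the response length
    stop=["\n\n"],          # "Stop Sequences": generation halts when one is produced
    frequency_penalty=0.5,  # discourages repeated words (so leave presence_penalty at its default)
)
print(response.choices[0].message.content)
```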

JetQin commented 3 months ago

Which of the following is NOT an element of a prompt?
A. Instruction
B. Context
C. Input Data
D. Model Architecture
Answer: D
Explanation: According to the text, the elements of a prompt are Instruction, Context, Input Data, and Output Indicator; Model Architecture is not among them.
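To make the four elements concrete, here is a small sketch that assembles them into one prompt; the wording is invented for illustration:

```python
# The four prompt elements from the question above, assembled into one string.
instruction = "Classify the review below as positive or negative."   # Instruction
context = "You are a sentiment analyst for an e-commerce site."      # Context
input_data = "Review: The battery died after two days."              # Input Data
output_indicator = "Sentiment:"                                      # Output Indicator

prompt = "\n".join([context, instruction, input_data, output_indicator])
print(prompt)
```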

JetQin commented 3 months ago

What does zero-shot prompting mean?
A. The prompt contains many examples to guide the model.
B. The prompt directly instructs the model to perform a task without any additional examples.
C. The model is trained with a small amount of data.
D. The model is not capable of performing the task without examples.
Answer: B
Explanation: According to the text, zero-shot prompting means the prompt sent to the model contains no examples or demonstrations; it directly instructs the model to perform the task without any additional examples to guide it, so the answer is B.
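A zero-shot prompt therefore carries only the instruction and the input, with no demonstrations; a small illustration (content invented):

```python
# Zero-shot: the task is stated directly, with no worked examples.
prompt = (
    "Classify the text into neutral, negative, or positive.\n"
    "Text: I think the vacation was okay.\n"
    "Sentiment:"
)
print(prompt)
```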

JetQin commented 3 months ago

What is few-shot prompting?
A. A technique where the prompt contains many examples to guide the model.
B. A technique where the model is trained with a small amount of data.
C. A technique where the prompt directly instructs the model to perform a task without any examples.
D. A technique where the model is not capable of performing the task even with examples.
Answer: A
Explanation: According to the text, few-shot prompting is a technique that guides the model to learn and perform the task better by providing examples in the prompt, so the answer is A.
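Contrast the zero-shot prompt above with a few-shot one, where demonstrations precede the actual query (content invented for illustration):

```python
# Few-shot: a handful of demonstrations show the model the expected format.
prompt = (
    "Text: What a wonderful day!\nSentiment: positive\n\n"
    "Text: This is the worst movie I have ever seen.\nSentiment: negative\n\n"
    "Text: I think the vacation was okay.\nSentiment:"
)
print(prompt)
```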

JetQin commented 3 months ago

RAG is introduced to address which of the following tasks?
A. Common tasks that don't require additional background knowledge.
B. Simple tasks that can be completed by general-purpose language models.
C. Complex and knowledge-intensive tasks that need access to external knowledge sources.
D. All tasks that can be performed by language models.
Answer: C
Explanation: According to the text, more complex and knowledge-intensive tasks require building a language-model-based system that can access external knowledge sources to complete the task; RAG was introduced to address exactly these tasks, so the answer is C.
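The retrieve-then-generate loop behind RAG fits in a few lines. In this sketch, `search_index` is a hypothetical stand-in for whatever vector store or search API you use:

```python
from openai import OpenAI

client = OpenAI()

def answer_with_rag(question: str, search_index) -> str:
    # 1. Retrieve: fetch relevant passages from the external knowledge source.
    #    `search_index` is hypothetical; swap in your vector store's query method.
    passages = search_index.query(question, top_k=3)

    # 2. Augment: inject the retrieved passages into the prompt as context.
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generate: the model answers grounded in the retrieved context.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    return response.choices[0].message.content
```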

JetQin commented 3 months ago

When using OpenAI chat models like gpt-3.5-turbo or gpt-4, which of the following roles can be used to structure the prompt?
A. Only user
B. user and assistant
C. system, user, and assistant
D. system and user
Answer: C
Explanation: According to the text, when using OpenAI chat models such as gpt-3.5-turbo or gpt-4, the prompt can be structured with three roles: system, user, and assistant, so the answer is C.

Here’s a breakdown of the System, User, and Assistant roles in the context of integrating with OpenAI:

System Role: What it does: The System role is used to provide setup information or context that informs the behavior of the model. This can include instructions or guidelines on how the conversation should proceed.

When to use it: You use this role to set the stage for the interaction. For example, if you want the model to maintain a formal tone throughout the conversation or if you need to specify rules like avoiding certain topics.

Example:

System: The assistant should always maintain a professional tone and avoid discussing personal opinions on politics.

User Role: What it does: This role represents the human user in the conversation. Inputs from the user guide the conversation and prompt responses from the assistant.

When to use it: Whenever the human user is making a statement or asking a question. This is the most frequent role used in a standard interaction.

Example:

User: Can you explain how to integrate OpenAI’s API with my existing Python application?

Assistant Role: What it does: This is the role of the model itself, responding to user inputs based on the context set by the system.

When to use it: This role is automatically assumed by the model when it replies to the user’s queries or follows the system’s instructions.

Example:

Assistant: Sure, to integrate OpenAI’s API with your Python application, you’ll start by installing the OpenAI Python package using pip...
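Putting the three roles together in an actual call with the OpenAI Python SDK (model and message content are illustrative):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # system: setup/context that shapes how the model behaves
        {"role": "system", "content": "Maintain a professional tone and avoid "
                                      "personal opinions on politics."},
        # user: the human's input
        {"role": "user", "content": "Can you explain how to integrate OpenAI's API "
                                    "with my existing Python application?"},
        # assistant messages would carry earlier model replies (conversation history)
    ],
)
print(response.choices[0].message.content)
```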

Why not just type everything in one big prompt?

Clarity and Structure: Using roles helps keep the conversation clear and structured. It delineates who is saying what, which is crucial when designing complex interactions.

Context Management: It allows the model to maintain and separate different types of context (instructions vs. queries).

Control and Flexibility: You have better control over how the conversation evolves.

You’re not training the model directly through these interactions; rather, you’re guiding it to respond appropriately within the context you set.

Hope that clears things up!

JetQin commented 3 months ago

Comparison of GCP, Azure, and Alibaba GenAI certifications.pdf

JetQin commented 2 months ago
[image: comparison]