Remember, in general and especially in an academic setting, to include the sources of your information. (7 points)
(10 points)
(15 points)
Prompt 3 is not a prompt chain; a prompt chain looks something like this:
# Assumes a configured Gemini model object, e.g.:
#   import google.generativeai as genai
#   model = genai.GenerativeModel("gemini-pro")

response = model.generate_content(
    "Can you give me a list of the most popular places in Colombia? "
    "Please include only the list as bullet points, no extra text."
)
print(response.text)

# The first response is fed back into the next prompt:
chain_node_1_response = model.generate_content(
    f"""
Can you add a description to each of the places in this list?
{response.text}
"""
)
print(chain_node_1_response.text)
Here the output of the previous prompt is reused as the input to the next one.
The last prompt is not fine-tuning. Fine-tuning is the process of retraining a model on a smaller, task-specific dataset in the hope of improving its performance; OpenAI describes how it is done here: https://platform.openai.com/docs/guides/fine-tuning
For point 3, 2 out of 4 prompts were correct, so you receive half the credit: 15 points.
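For reference, the OpenAI fine-tuning flow linked above starts from a JSONL file of example conversations. The sketch below only prepares such a file; the examples and the file name are made up for illustration, and the actual upload/job-creation calls (which need an API key) are shown as comments:

```python
import json

# Hypothetical training examples in OpenAI's chat fine-tuning format:
# each line of the JSONL file is one complete conversation.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a travel assistant for Colombia."},
        {"role": "user", "content": "Name a popular place in Colombia."},
        {"role": "assistant", "content": "Cartagena."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a travel assistant for Colombia."},
        {"role": "user", "content": "Name a coffee-growing region."},
        {"role": "assistant", "content": "The Eje Cafetero."},
    ]},
]

# Write one JSON object per line (the JSONL format fine-tuning expects).
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The file is then uploaded and a fine-tuning job created, e.g. (requires
# an API key, so not run here):
#   client = openai.OpenAI()
#   file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=file.id, model="gpt-4o-mini")
```

The key point is that fine-tuning changes the model's weights with new training data, which is very different from simply sending it another prompt.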
(20 points) Giving full credit, but quoting the lab, "document any prompts/outputs": you didn't document the scripts' outputs to show that they worked.
(15 points) Splitting text is not summarizing; you were expected to use an LLM to perform the summarization.
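To illustrate the difference: splitting can be a preprocessing step, but the summary itself should come from the model. A minimal sketch, assuming the same Gemini `model` object as above (the chunk size and prompt wording are arbitrary choices):

```python
def chunk_text(text, max_chars=2000):
    """Split text into chunks of at most max_chars, on word boundaries."""
    words = text.split()
    chunks, current, length = [], [], 0
    for word in words:
        # +1 accounts for the joining space
        if current and length + len(word) + 1 > max_chars:
            chunks.append(" ".join(current))
            current, length = [], 0
        current.append(word)
        length += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks

def summarize(model, text):
    """Ask the LLM to summarize each chunk, then join the partial summaries."""
    partials = []
    for chunk in chunk_text(text):
        response = model.generate_content(
            f"Summarize the following text in two or three sentences:\n{chunk}"
        )
        partials.append(response.text)
    return "\n".join(partials)
```

Splitting only decides what the model sees at once; the `generate_content` call is where the summarization actually happens.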
Total: 7 + 10 + 15 + 20 + 15 = 67