manisnesan opened 1 year ago
Intro to the Kaggle competition
Benchmarking with GPT3.5
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_message},
    ],
)
Using the OpenAI function calling API to enforce structure on answers
# Define the function(s) the model will be able to use (in this case, only one)
functions = [
    {
        "name": "answer_question",
        "description": "Answers the provided question",
        "parameters": {
            "type": "object",
            "properties": {
                "reasoning": {
                    "type": "string",
                    "description": "Reasoning for what the answer could be. Keep it short."
                },
                "answers": {
                    "type": "array",
                    "items": {
                        "type": "string",
                        "enum": ["A", "B", "C", "D", "E"],
                    },
                    "description": "Your top 3 guesses, from most to least likely. e.g. ['A', 'D', 'C']"
                }
            },
            "required": ["reasoning", "answers"],
        },
    }
]
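When the model uses the function, the answer comes back as JSON in the function_call arguments rather than free text. A minimal sketch of extracting it, assuming the legacy openai.ChatCompletion response shape (the sample dict below is illustrative, not real model output):

```python
import json

def parse_answer(response):
    """Extract reasoning and ranked answers from a ChatCompletion response dict."""
    call = response["choices"][0]["message"]["function_call"]
    args = json.loads(call["arguments"])
    return args["reasoning"], args["answers"]

# Abbreviated example of the response shape (hypothetical content):
sample = {
    "choices": [{
        "message": {
            "function_call": {
                "name": "answer_question",
                "arguments": '{"reasoning": "Mass-energy equivalence.", "answers": ["A", "C", "B"]}',
            }
        }
    }]
}
reasoning, answers = parse_answer(sample)
```

This is what makes the structured-output trick useful for the competition: the ranked list of letters maps directly onto the MAP@3 submission format.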
Using Llama2 as a classifier by examining the logits (next token predictions)
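The classifier trick: run one forward pass, read the next-token logits at the answer position, and rank the option letters by their logit. The ranking helper below is pure Python for illustration; in practice the logits and the token ids for " A".." E" would come from a transformers Llama 2 model and tokenizer (assumption):

```python
def rank_options(logits, option_token_ids, letters="ABCDE"):
    """Rank option letters by the model's next-token logit for each letter's token id."""
    scored = [(logits[tok], letters[i]) for i, tok in enumerate(option_token_ids)]
    return [letter for _, letter in sorted(scored, reverse=True)]

# Toy logits over a 6-token vocab; ids 0..4 stand in for the letters A..E.
ranking = rank_options([0.1, 2.0, -1.0, 0.5, 3.0, 0.0], option_token_ids=[0, 1, 2, 3, 4])
print(ranking)  # ['E', 'B', 'D', 'A', 'C']
```

Because only the five letter logits matter, this needs a single forward pass per question instead of sampling a full generation.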
Using perplexity to evaluate question-answer pairs
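The perplexity idea: score each candidate "question + answer" string by how surprised the model is by it, and pick the lowest-perplexity option. A minimal sketch, assuming per-token log-probabilities obtained from a causal LM:

```python
import math

def perplexity(token_logprobs):
    """exp of the mean negative log-probability of the tokens."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Uniform log-prob of -1 per token gives perplexity e.
ppl = perplexity([-1.0, -1.0])
print(ppl)  # ~2.718
```

Lower perplexity means the model finds the question-answer pairing more plausible, so the options can be ranked by ascending perplexity for the MAP@3 submission.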
Differential learning rates and LoRA - notebook by Wayde
RAG with additional dataset from Chris Deotte
? quantized to 8 bits
Transformers - Primer by aman.ai
🔹 GPT: http://gpt.aman.ai
🔹 BERT: http://bert.aman.ai
https://www.kaggle.com/competitions/kaggle-llm-science-exam
Check the solution posts
https://www.kaggle.com/competitions/kaggle-llm-science-exam/discussion/446414
[x] Jeremy Twitter thread
Training set: 200 science multiple-choice questions auto-generated using GPT-3.5
RAG pattern
No retriever: use the LM alone. Pass the question directly to GPT-3.5 using the llm library.
OpenAI got the wrong answer for the following prompt, followed by the multiple-choice questions.
The two-step approach above is a way to get better results.
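The two-step pattern (retrieve relevant context first, then answer with the context prepended to the prompt) can be sketched as follows. The toy 2-dimensional embeddings are illustrative; in practice they would come from a sentence encoder over the Wikipedia passages (assumption):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(question_vec, passages, k=1):
    """Step 1: return the k passages most similar to the question embedding."""
    ranked = sorted(passages, key=lambda p: cosine(question_vec, p["vec"]), reverse=True)
    return [p["text"] for p in ranked[:k]]

passages = [
    {"text": "Photosynthesis converts light to chemical energy.", "vec": [1.0, 0.0]},
    {"text": "Plate tectonics shapes Earth's crust.", "vec": [0.0, 1.0]},
]
context = retrieve([0.9, 0.1], passages, k=1)
# Step 2: prepend the retrieved context before asking the LM.
prompt = f"Context: {context[0]}\n\nQuestion: ..."
```

Grounding the model in retrieved text is what lifted scores over the no-retriever baseline in the competition write-ups.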
Enable the Page Context feature in Bing.
Tricks
Example
Question: {r.prompt} A: {r.A} B: {r.B} C: {r.C} D: {r.D} E: {r.E}
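The template above can be filled from a competition row like this; SimpleNamespace stands in for a pandas row with prompt/A..E columns (assumption):

```python
from types import SimpleNamespace

def format_row(r):
    """Render one row into the single-line multiple-choice prompt."""
    return f"Question: {r.prompt} A: {r.A} B: {r.B} C: {r.C} D: {r.D} E: {r.E}"

r = SimpleNamespace(prompt="What is 2+2?", A="3", B="4", C="5", D="6", E="7")
line = format_row(r)
print(line)  # Question: What is 2+2? A: 3 B: 4 C: 5 D: 6 E: 7
```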