simonw / llm

Access large language models from the command-line
https://llm.datasette.io
Apache License 2.0

Truncated output with Llama model #561

Open · pandeiro opened this issue 2 months ago

pandeiro commented 2 months ago

I'm experiencing what appears to be truncation or an aborted response when using the Meta-Llama-3-8B-Instruct model via the llm-gpt4all plugin.

Here's an example of what I mean:

$ llm 'Write a small JavaScript program that asks trivia questions'

 Here is a simple JavaScript program that asks trivia questions:

// Define the questions and answers
var questions = [
  { question: "What is the capital of France?", answer: "Paris" },
  { question: "Who painted the Mona Lisa?", answer: "Leonardo da Vinci" },
  { question: "What is the largest planet in our solar system?", answer: "Jupiter" }
];

// Define a function to ask a question
function askQuestion(question) {
  console.log(question.question);
  var response = prompt("Enter your answer:");
  if (response.toLowerCase() === question.answer.toLowerCase()) {
    console.log("Correct!");
  } else {
    console.log(`Sorry, that's incorrect. The correct answer is ${question.answer}.`);
  }
}

// Ask each question
questions.forEach(askQuestion);

This program defines an array of questions and answers, then uses a function `askQuestion` to ask each question and check the user's
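
The output ends mid-sentence right there. In case this is just a generation cap rather than a bug: assuming llm-gpt4all exposes a max_tokens option (the options available for each model should be listed by llm models --options), raising it would be one way to check. A sketch of what I mean, with the model ID guessed from the gpt4all model name:

$ llm -m Meta-Llama-3-8B-Instruct -o max_tokens 2048 \
    'Write a small JavaScript program that asks trivia questions'

Comparing against llm logs for the same prompt would also show whether the full response was logged and only the printed output was cut short.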