copilot-extensions / gh-debug-cli

CLI tool that lets you chat with your agent locally for faster feedback and debugging
MIT License

Bad request 400 when calling the LLM through the OpenAI SDK #8

Open ambilykk opened 3 weeks ago

ambilykk commented 3 weeks ago

While calling the LLM through the OpenAI SDK, we encountered a 400 Bad Request error.

Code Used

const capiClient = new OpenAI({
  baseURL: "https://api.githubcopilot.com/",
  apiKey: tokenForUser,
  headers: {
    "Copilot-Integration-Id": "copilot-chat"
  },
});
console.log("capiclient request");
const response = await capiClient.chat.completions.create({
  stream: false,
  model: "gpt-4o",
  messages: [{
    role: "user",
    content: "What is GitHub Copilot"
  }]
});

Error message (screenshot): 400 Bad Request

Work-around

Once we replaced the SDK call with a raw fetch and hardcoded the Copilot-Integration-Id header, it started working.

const copilotResponse = await fetch(
    "https://api.githubcopilot.com/chat/completions",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${tokenForUser}`,
        "Copilot-Integration-Id": "vscode-chat",
      },
      body: JSON.stringify({
        messages: [{
          role: "user",
          content: "What is GitHub Copilot"}],
        max_tokens: 50,
        temperature: 0.5
      }),
    }
  );
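For completeness, the raw fetch response still needs to be parsed; a small helper like the one below (a sketch, assuming the standard OpenAI-compatible chat/completions response shape — `extractReply` is a hypothetical name, not part of any SDK) pulls the assistant text out of the JSON body:

```javascript
// Extract the assistant's reply from a parsed chat/completions JSON body.
// Assumes the OpenAI-compatible shape: choices[0].message.content.
function extractReply(body) {
  return body?.choices?.[0]?.message?.content ?? "";
}

// Usage with the fetch call above (not executed here):
// if (!copilotResponse.ok) throw new Error(`HTTP ${copilotResponse.status}`);
// const data = await copilotResponse.json();
// console.log(extractReply(data));

// Local demonstration with a mocked body:
const mockBody = {
  choices: [{ message: { role: "assistant", content: "GitHub Copilot is an AI pair programmer." } }],
};
console.log(extractReply(mockBody));
```

Checking `copilotResponse.ok` before parsing also surfaces the status code directly, which makes 400-class failures like this one easier to diagnose than the SDK's wrapped error.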