TejasQ / idli

Your AI command-line copilot
GNU General Public License v3.0

Make API requests more transparent #2

Open TejasQ opened 4 months ago

TejasQ commented 4 months ago

Currently, the CLI routes requests through my API, which I use for error alerting, but this is too opaque, as pointed out in this YouTube video comment.

Let's make this more transparent by creating a Netlify function in this repo that proxies the request to OpenAI and handles errors appropriately.
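For comparison, here is a minimal sketch of what such a proxy function could look like. The file path, request-body shape, and the OPENAI_API_KEY variable name are assumptions for illustration, not actual repo code:

```typescript
// netlify/functions/complete.ts -- hypothetical path and names, not repo code
import process from "node:process";

// Shape of the body the CLI is assumed to send.
type RequestBody = { prompt: string };

// Small pure helper so the response plumbing is easy to test in isolation.
export function toResponse(statusCode: number, payload: unknown) {
  return {
    statusCode,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  };
}

export async function handler(event: { body?: string | null }) {
  if (!event.body) {
    return toResponse(400, { error: "Missing request body" });
  }

  let prompt: string;
  try {
    prompt = (JSON.parse(event.body) as RequestBody).prompt;
  } catch {
    return toResponse(400, { error: "Request body must be valid JSON" });
  }

  // The OpenAI key stays server-side, read from an environment variable,
  // so the CLI never has to ship or transmit a user's key.
  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4-turbo",
      messages: [{ role: "system", content: prompt }],
    }),
  });

  if (!upstream.ok) {
    // Surface a generic error instead of leaking upstream details.
    return toResponse(upstream.status, { error: "OpenAI request failed" });
  }

  return toResponse(200, await upstream.json());
}
```

The CLI would then POST a `{ prompt }` body to this function, and any error alerting can hook into the failure branches above.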

SamehElalfi commented 4 months ago

Hi Tejas,

Thank you for bringing up this concern. It's great that you're considering making the process more transparent and improving error handling. Have you considered leveraging OpenAI's official library from NPM directly instead of creating a proxy function?

Using the official library could simplify the implementation and maintenance process, as it's specifically designed to interact with OpenAI's API and likely includes built-in error-handling mechanisms. This approach would also ensure that the integration stays up-to-date with any changes or improvements made to the official library.

If you're interested, here is a short example of how you could use the OpenAI library directly in this project.

First, install OpenAI using PNPM:

pnpm add openai

Then, modify how we get the command:

// util/getCommand.ts

import os from "node:os";
import OpenAI from "openai";
import type { Config } from "../types";

type Options = {
  config: Config;
  prompt: string;
};

export async function getCommand({ config, prompt }: Options) {
  const openAi = new OpenAI({
    apiKey: config.openAiApiKey,
  });

  // Information about the current operating system is included in the
  // prompt to get more accurate results.
  const osInfo = {
    type: os.type(),
    platform: os.platform(),
    release: os.release(),
    arch: os.arch(),
  };

  const content = `Act as a CLI but with AI. I will give you a description of what I want to do and you should give me a single command to run it in the terminal, wrapped in a code block.
  Here is some information about my OS: ${JSON.stringify(osInfo)}

  The task: ${prompt}
  `;

  const response = await openAi.chat.completions.create({
    model: "gpt-4-turbo",
    messages: [{ role: "system", content }],
  });

  // Extract the command from the fenced code block in the model's reply.
  const commandRegex = /```.*\n([\s\S]+?)(\n)?```/;
  const result = response.choices[0].message.content?.match(commandRegex);

  return result?.at(1) ?? "couldn't generate command";
}
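Note that the regex step assumes the model wraps the command in triple backticks. As a self-contained illustration of how that extraction behaves (the helper name is mine, not part of the suggested patch):

```typescript
// Same extraction pattern as in getCommand above: capture everything between
// a pair of triple-backtick fences, tolerating an optional language tag.
const commandRegex = /```.*\n([\s\S]+?)(\n)?```/;

// Hypothetical helper mirroring the tail of getCommand.
export function extractCommand(reply: string): string {
  return reply.match(commandRegex)?.at(1) ?? "couldn't generate command";
}
```

So a reply like "```bash\nls -la\n```" yields "ls -la", while a reply with no fenced block falls through to the fallback string.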

Let me know if you'd like to explore this option further.

dylanjha commented 4 months ago

The first thing I noticed here was that my OpenAI API key would be sent to the server https://tej.as

Currently, the CLI uses my API because I use it for error alerting that proxies the request to OpenAI

Is it really just proxying and handling error alerting? Or is the server doing more to prepare the actual prompt that gets sent to OpenAI?

https://github.com/TejasQ/idli/blob/5d00c05ee3a8204204d48d398e3ee01547c1529a/util/getCommand.ts#L10-L24

If I'm understanding this correctly, the server is parsing the os part of the request body and inserting those details, along with some other language, into the prompt that ultimately gets sent to OpenAI?
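If that reading is right, the server-side step might look something like the following reconstruction. This is purely illustrative; the function name and exact prompt wording are guesses based on the linked lines, not the server's actual code:

```typescript
// Hypothetical reconstruction of the server-side prompt assembly: the CLI
// sends { prompt, os } and the server interpolates both into the final prompt.
type OsInfo = { type: string; platform: string; release: string; arch: string };

export function buildPrompt(prompt: string, os: OsInfo): string {
  return [
    "Act as a CLI but with AI. I will give you a description of what I want",
    "to do and you should give me a single command to run it in the terminal.",
    `Here is some information about my OS: ${JSON.stringify(os)}`,
    "",
    `The task: ${prompt}`,
  ].join("\n");
}
```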