////////////////////////////////////////////////////////////////////////////////
// Stage 1: Starting ...
////////////////////////////////////////////////////////////////////////////////
File: openai.ts
Task: Refactor this, extract common aspect with the other call
////////////////////////////////////////////////////////////////////////////////
// Stage 2: Understanding ...
////////////////////////////////////////////////////////////////////////////////
Classification: LocalChange
////////////////////////////////////////////////////////////////////////////////
// Stage 3: Conceptualising ...
////////////////////////////////////////////////////////////////////////////////
Step 1: Identify the common aspects between both the success and error cases.

The call to AnalyticsManager.instance.reportOpenAICall has almost the same parameters in both the success and error cases.

Step 2: Create a function to report the OpenAI call to Analytics. The function will take two parameters: the result and the error. Only one of them will be defined at any given time.

Here's the refactored function:
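Based on the WITH block in Stage 4 below, the refactored function is presumably:

// Presumed shape, taken from the Stage 4 WITH block below. Exactly one of
// result / error is defined on any given call, and whichever is defined is
// forwarded into the analytics payload.
function reportOpenAICallToAnalytics(result?: string, error?: any) {
  AnalyticsManager.instance.reportOpenAICall(
    {
      model,
      messages: [
        {
          role: "user",
          content: fullPrompt,
        },
      ],
      max_tokens: maxTokens,
      temperature,
    },
    {
      result,
      error,
    }
  );
}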
Step 3: Replace the two calls to AnalyticsManager.instance.reportOpenAICall with calls to our new function, reportOpenAICallToAnalytics.

Here's the final refactored code:

So now, in the gptExecute function, replace the two calls to AnalyticsManager.instance.reportOpenAICall with calls to the reportOpenAICallToAnalytics function we created.

////////////////////////////////////////////////////////////////////////////////
// Stage 4: Preparing Changes ...
////////////////////////////////////////////////////////////////////////////////
I will create a new function called reportOpenAICallToAnalytics that takes two parameters: result and error. This function will be responsible for calling AnalyticsManager.instance.reportOpenAICall with the appropriate parameters. Then, I will replace the two calls to AnalyticsManager.instance.reportOpenAICall in the gptExecute function with calls to the new reportOpenAICallToAnalytics function.

REPLACE
AnalyticsManager.instance.reportOpenAICall(
  {
    model,
    messages: [
      {
        role: "user",
        content: fullPrompt,
      },
    ],
    max_tokens: maxTokens,
    temperature,
  },
  {
    result,
  }
);
WITH
function reportOpenAICallToAnalytics(result?: string, error?: any) {
  AnalyticsManager.instance.reportOpenAICall(
    {
      model,
      messages: [
        {
          role: "user",
          content: fullPrompt,
        },
      ],
      max_tokens: maxTokens,
      temperature,
    },
    {
      result,
      error,
    }
  );
}
END_REPLACE
////////////////////////////////////////////////////////////////////////////////
// Stage 5: Applying Changes ...
////////////////////////////////////////////////////////////////////////////////
Error in applying consolidation: Error: Failed replace

replaceText:
AnalyticsManager.instance.reportOpenAICall(
  {
    model,
    messages: [
      {
        role: "user",
        content: fullPrompt,
      },
    ],
    max_tokens: maxTokens,
    temperature,
  },
  {
    result,
  }
);
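Comparing this replaceText with the originalCode dump below shows why the replace failed: the search text omits the attempt field that the actual calls in gptExecute include, so there is no exact textual match. Side by side (both fragments appear in this log):

// What the replaceText above searches for:
{
  result,
}

// What gptExecute actually contains (see originalCode below):
{
  result,
  attempt,
}

The Stage 6 retry includes attempt in its search text, which is presumably why that attempt matches and the consolidation succeeds.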
originalCode:
import fetch, { Response } from "node-fetch";
import * as vscode from "vscode";
import AsyncLock = require("async-lock");
import { CANCELED_STAGE_NAME } from "./ui/ExecutionInfo";
import { AnalyticsManager } from "./AnalyticsManager";

type AVAILABLE_MODELS = "gpt-4" | "gpt-3.5-turbo";

let openAILock = new AsyncLock();

/* The extractParsedLines function takes a chunk string as input and returns
an array of parsed JSON objects. */
function extractParsedLines(chunk: string) {
  const lines = chunk.split("\n");

  try {
    return lines
      .map((line) => line.replace(/^data: /, "").trim()) // Remove the "data: " prefix
      .filter((line) => line !== "" && line !== "[DONE]") // Remove empty lines and "[DONE]"
      .map((line) => JSON.parse(line));
  } catch (e) {
    console.error(`Error parsing chunk: ${chunk}`);
    console.error(e);
    throw e;
  }
}

/* The queryOpenAI function takes a fullPrompt and other optional parameters to
send a request to OpenAI's API. It returns a response object. */
async function queryOpenAI({
  fullPrompt,
  controller,
  maxTokens = 2000,
  model = "gpt-4",
  temperature = 1,
}: {
  fullPrompt: string;
  maxTokens?: number;
  model?: AVAILABLE_MODELS;
  temperature?: number;
  controller: AbortController;
}) {
  const signal = controller.signal;

  let apiKey = vscode.workspace.getConfiguration("10minions").get("apiKey");

  if (!apiKey) {
    throw new Error("OpenAI API key not found. Please set it in the settings.");
  }

  console.log("Querying OpenAI");
  fullPrompt.split("\n").forEach((line) => console.log(`> ${line}`));

  return await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [
        {
          role: "user",
          content: fullPrompt,
        },
      ],
      max_tokens: maxTokens,
      temperature,
      stream: true,
    }),
    signal,
  });
}

/* The processOpenAIResponseStream function processes the response from the
API and extracts tokens from the response stream. */
async function processOpenAIResponseStream({
  response,
  onChunk,
  isCancelled,
}: {
  response: Response;
  onChunk: (chunk: string) => Promise<void>;
  isCancelled: () => boolean;
}) {
  const stream = response.body!;
  const decoder = new TextDecoder("utf-8");
  let fullContent = "";

  return await new Promise((resolve, reject) => {
    stream.on("data", async (value) => {
      try {
        if (isCancelled()) {
          stream.removeAllListeners();
          reject(CANCELED_STAGE_NAME);
          return;
        }
        const chunk = decoder.decode(value);
      } catch (e) {
        console.error("Error processing response stream: ", e);
        reject(e);
      }
    });

    stream.on("end", () => {
      if (isCancelled()) {
        stream.removeAllListeners();
        reject(CANCELED_STAGE_NAME);
        return;
      }
      resolve(fullContent);
    });

    stream.on("error", (err) => {
      console.error("Error: ", err);
      reject(err);
    });
  });
}

/* The gptExecute function is the main exported function, which combines all the
other functions to send a GPT-4 query and receive and process the response. */
export async function gptExecute({
  fullPrompt,
  onChunk = async (chunk: string) => {},
  isCancelled = () => false,
  maxTokens = 2000,
  model = "gpt-4",
  temperature,
  controller = new AbortController(),
}: {
  fullPrompt: string;
  onChunk?: (chunk: string) => Promise<void>;
  isCancelled?: () => boolean;
  maxTokens?: number;
  model?: AVAILABLE_MODELS;
  temperature?: number;
  controller?: AbortController;
}) {
  // Step 1: Add a loop that iterates up to 3 times.
  for (let attempt = 1; attempt <= 3; attempt++) {
    try {
      const response = await queryOpenAI({ fullPrompt, maxTokens, model, temperature, controller });
      const result = await processOpenAIResponseStream({ response, onChunk, isCancelled });

      AnalyticsManager.instance.reportOpenAICall(
        {
          model,
          messages: [
            {
              role: "user",
              content: fullPrompt,
            },
          ],
          max_tokens: maxTokens,
          temperature,
        },
        {
          result,
          attempt,
        }
      );

      // Step 3: On successful run, break the loop early and return the result.
      return result;
    } catch (error) {
      // Step 2: Add error handling for exceptions.
      // Step 4: Log the error and retry the process for up to 2 more times.
      console.error(`Error on attempt ${attempt}: ${error}`);

      AnalyticsManager.instance.reportOpenAICall(
        {
          model,
          messages: [
            {
              role: "user",
              content: fullPrompt,
            },
          ],
          max_tokens: maxTokens,
          temperature,
        },
        {
          error,
          attempt,
        }
      );

      // Step 5: On the 3rd error, give up and re-throw the error.
      if (attempt === 3) {
        throw error;
      }
    }
  }

  throw new Error("Assertion: Should never get here");
}

////////////////////////////////////////////////////////////////////////////////
// Stage 6: Preparing Changes (retry) ...
////////////////////////////////////////////////////////////////////////////////
I will create a new function called reportOpenAICallToAnalytics that takes two parameters: result and error. This function will be responsible for calling AnalyticsManager.instance.reportOpenAICall with the appropriate parameters. Then, I will replace the two calls to AnalyticsManager.instance.reportOpenAICall in the gptExecute function with calls to the new reportOpenAICallToAnalytics function.

REPLACE
AnalyticsManager.instance.reportOpenAICall(
  {
    model,
    messages: [
      {
        role: "user",
        content: fullPrompt,
      },
    ],
    max_tokens: maxTokens,
    temperature,
  },
  {
    result,
    attempt,
  }
);
WITH
reportOpenAICallToAnalytics(result);

REPLACE
AnalyticsManager.instance.reportOpenAICall(
  {
    model,
    messages: [
      {
        role: "user",
        content: fullPrompt,
      },
    ],
    max_tokens: maxTokens,
    temperature,
  },
  {
    error,
    attempt,
  }
);
WITH
reportOpenAICallToAnalytics(undefined, error);

ADD
function reportOpenAICallToAnalytics(result?: string, error?: any) {
  AnalyticsManager.instance.reportOpenAICall(
    {
      model,
      messages: [
        {
          role: "user",
          content: fullPrompt,
        },
      ],
      max_tokens: maxTokens,
      temperature,
    },
    {
      result,
      error,
    }
  );
}
END_REPLACE
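Assuming the two replacements and the ADD block apply cleanly, the retry loop in gptExecute ends up looking roughly like this (a sketch, not verbatim tool output; note that the attempt field is no longer part of the analytics payload, and the added function has to sit inside gptExecute so it can close over model, fullPrompt, maxTokens, and temperature):

// Inside gptExecute, after applying the Stage 6 replacements.
for (let attempt = 1; attempt <= 3; attempt++) {
  try {
    const response = await queryOpenAI({ fullPrompt, maxTokens, model, temperature, controller });
    const result = await processOpenAIResponseStream({ response, onChunk, isCancelled });
    reportOpenAICallToAnalytics(result);
    // Step 3: On successful run, break the loop early and return the result.
    return result;
  } catch (error) {
    // Step 2: Add error handling for exceptions.
    // Step 4: Log the error and retry the process for up to 2 more times.
    console.error(`Error on attempt ${attempt}: ${error}`);
    reportOpenAICallToAnalytics(undefined, error);
    // Step 5: On the 3rd error, give up and re-throw the error.
    if (attempt === 3) {
      throw error;
    }
  }
}

throw new Error("Assertion: Should never get here");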
////////////////////////////////////////////////////////////////////////////////
// Stage 7: Applying Changes (retry) ...
////////////////////////////////////////////////////////////////////////////////
CONSOLIDATION SUCCESSFULLY APPLIED
////////////////////////////////////////////////////////////////////////////////
// Stage 8: Applying changes as comment (fall back) ...
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
// Stage 9: Finishing ...
////////////////////////////////////////////////////////////////////////////////
Finished (Execution Time: 00:01:53)