Open — djaffer opened this issue 1 month ago
To resolve the 400 error about unmarshalling a JSON array into the Go struct field `CompletionRequest.prompt` of type string when using the OpenAI module in LangChain.js, ensure that the `prompts` parameter passed to the `_generate` method is an array of strings. The error occurs because the `CompletionRequest.prompt` field in the Go struct expects a string but is receiving an array.
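To make the mismatch concrete, here is a sketch of the two request-body shapes (illustrative values only, not taken from the actual request):

```typescript
// Illustrative completions request bodies. Per the error message, the
// Go-based backend can unmarshal `prompt` only when it is a string.
const accepted = { model: "gpt-3.5-turbo-instruct", prompt: "Hello" };

// The OpenAI LLM class batches prompts, so `prompt` can arrive as an
// array — this is the shape that triggers the 400 unmarshalling error.
const rejected = { model: "gpt-3.5-turbo-instruct", prompt: ["Hello"] };

console.log(typeof accepted.prompt);         // "string"
console.log(Array.isArray(rejected.prompt)); // true
```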
Here is the relevant part of the code where the `prompts` parameter is used:
```typescript
async _generate(
  prompts: string[],
  options: this["ParsedCallOptions"],
  runManager?: CallbackManagerForLLMRun
): Promise<LLMResult> {
  const requestStartTime = Date.now();
  const generations = await super._generate(prompts, options, runManager);
  for (let i = 0; i < generations.generations.length; i += 1) {
    const requestEndTime = Date.now();
    const parsedResp = {
      text: generations.generations[i][0].text,
      llm_output: generations.llmOutput,
    };
    const promptLayerRespBody = await promptLayerTrackRequest(
      this.caller,
      "langchain.PromptLayerOpenAI",
      { ...this._identifyingParams(), prompt: prompts[i] } as any,
      this.plTags,
      parsedResp,
      requestStartTime,
      requestEndTime,
      this.promptLayerApiKey
    );
    let promptLayerRequestId;
    if (this.returnPromptLayerId === true) {
      if (promptLayerRespBody && promptLayerRespBody.success === true) {
        promptLayerRequestId = promptLayerRespBody.request_id;
      }
      generations.generations[i][0].generationInfo = {
        ...generations.generations[i][0].generationInfo,
        promptLayerRequestId,
      };
    }
  }
  return generations;
}
```
Ensure that the `prompts` parameter is correctly formatted as an array of strings when calling the `_generate` method. For example:
```typescript
const prompts = ["Your first prompt", "Your second prompt"];
const options = {}; // Your options here
const runManager = undefined; // Or your run manager instance
const result = await promptLayerOpenAIInstance._generate(prompts, options, runManager);
```
By ensuring that `prompts` is an array of strings, you can avoid the 400 error about unmarshalling a JSON array into the Go struct field `CompletionRequest.prompt` of type string [1].
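If the backend in this issue only unmarshals `prompt` as a string, one possible workaround (a sketch, not part of LangChain.js) is to fan a batch of prompts out into one request body per prompt, so that each body carries a plain string:

```typescript
// Hypothetical helper: turn a batch of prompts into one completions
// request body each, so `prompt` is always a plain string.
type CompletionBody = { model: string; prompt: string };

function toSingleRequests(model: string, prompts: string[]): CompletionBody[] {
  return prompts.map((prompt) => ({ model, prompt }));
}

const bodies = toSingleRequests("gpt-3.5-turbo-instruct", [
  "Your first prompt",
  "Your second prompt",
]);
// Each body can then be sent as its own POST to the completions endpoint.
```

The trade-off is one HTTP request per prompt instead of a single batched call, which is slower but sidesteps the string-only constraint.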
To continue talking to Dosu, mention @dosu.
### Checked other resources

### Example Code
Tested with the OpenAI library directly, and it works perfectly fine.
### Error Message and Stack Trace (if applicable)
```
[llm/error] [1:llm:OpenAI] [270ms] LLM run errored with error: "400 json: cannot unmarshal array into Go struct field CompletionRequest.prompt of type string

Error: 400 json: cannot unmarshal array into Go struct field CompletionRequest.prompt of type string
    at APIError.generate (/home/langchain/node_modules/openai/error.js:45:20)
    at OpenAI.makeStatusError (/home/langchain/node_modules/openai/core.js:275:33)
    at OpenAI.makeRequest (/home/langchain/node_modules/openai/core.js:318:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /home/langchain/node_modules/@langchain/openai/dist/llms.cjs:517:29
    at async RetryOperation._fn (/home/langchain/node_modules/p-retry/index.js:50:12)"
node:internal/process/promises:289
      triggerUncaughtException(err, true /* fromPromise */);
      ^

BadRequestError: 400 json: cannot unmarshal array into Go struct field CompletionRequest.prompt of type string
    at APIError.generate (/home/langchain/node_modules/openai/error.js:45:20)
    at OpenAI.makeStatusError (/home/langchain/node_modules/openai/core.js:275:33)
    at OpenAI.makeRequest (/home/langchain/node_modules/openai/core.js:318:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /home/langchain/node_modules/@langchain/openai/dist/llms.cjs:517:29
    at async RetryOperation._fn (/home/langchain/node_modules/p-retry/index.js:50:12) {
  status: 400,
  headers: { },
  request_id: undefined,
  error: {
    message: 'json: cannot unmarshal array into Go struct field CompletionRequest.prompt of type string',
    type: 'invalid_request_error',
    param: null,
    code: null
  },
  code: null,
  param: null,
  type: 'invalid_request_error',
  attemptNumber: 1,
  retriesLeft: 6
}
```
### Description

The error occurs in the response when using the OpenAI class, but not when using ChatOpenAI.
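One plausible reason ChatOpenAI is unaffected (an assumption, not confirmed in this issue): the chat completions endpoint takes a `messages` array and has no `prompt` field at all, so the string-only constraint on `CompletionRequest.prompt` is never exercised. A sketch of the two body shapes:

```typescript
// Completions endpoint body (used by the OpenAI LLM class) — the
// `prompt` field is what triggers the unmarshalling error when it
// is sent as an array:
const completionsBody = { model: "gpt-3.5-turbo-instruct", prompt: ["Hi"] };

// Chat completions endpoint body (used by ChatOpenAI) — there is no
// `prompt` field, only `messages`:
const chatBody = {
  model: "gpt-4",
  messages: [{ role: "user", content: "Hi" }],
};

console.log(Object.keys(chatBody)); // ["model", "messages"]
```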
### System Info