iProgramme opened 11 months ago
Got the same error:
const { GoogleGenerativeAI } = require('@google/generative-ai')
const genAI = new GoogleGenerativeAI('XXX')
async function run () {
const model = genAI.getGenerativeModel({ model: 'gemini-pro'})
const prompt = 'Write a story about an AI.'
const result = await model.generateContent(prompt)
const response = await result.response
const text = response.text()
console.log(text)
}
run()
output:
node:internal/process/promises:289
          triggerUncaughtException(err, true /* fromPromise */);
          ^

TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11730:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async makeRequest (/Users/janber/Documents/demo/test/node_modules/@google/generative-ai/dist/index.js:195:20)
    at async generateContent (/Users/janber/Documents/demo/test/node_modules/@google/generative-ai/dist/index.js:525:22)
    at async handle (/Users/janber/Documents/demo/test/ai/test.js:10:18)
Node.js v20.10.0
Same error here. How do I fix it?
Hello, how can I use HttpsProxyAgent with the GoogleGenerativeAI module? Thanks.
const { GoogleGenerativeAI } = require("@google/generative-ai"); // Access your API key as an environment variable (see "Set up your API key" above) const genAI = new GoogleGenerativeAI('xxxxxxxxxxxxxxxx'); async function run() { // For text-only input, use the gemini-pro model const model = genAI.getGenerativeModel({ model: "gemini-pro" }); const prompt = "Write a story about a magic backpack." const chat = await model.startChat(''); const result = await chat.sendMessageStream(prompt); let text = '' for await (const chunk of result.stream) { text += chunk.text(); } console.log(text); } run();
Try replacing:
const chat = await model.startChat("");
with:
const chat = model.startChat({});
I've copy-pasted the entire sample code as provided in the original post, inserted my API key, and ran it with @google/generative-ai@0.1.3 and Node 18.18.0, and was unable to reproduce it.
This seems odd, because the fetch call is wrapped in a try/catch where any error should be caught and wrapped in a GoogleGenerativeAIError (line 91) with some additional text. In these error logs, however, fetch appears to be throwing an uncaught exception that bypasses this try/catch, as I don't see any of the wrapper text that the catch block should add.
https://github.com/google/generative-ai-js/blob/main/packages/main/src/requests/request.ts#L68
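To illustrate, here is a paraphrased sketch of the wrapping described above (not the exact SDK source; the real code lives in packages/main/src/requests/request.ts and throws GoogleGenerativeAIError):

// Paraphrased sketch of the request wrapper described above, for illustration only.
async function makeRequest(url, body) {
  let response;
  try {
    response = await fetch(url.toString(), {
      method: "POST",
      headers: { "Content-Type": "application/json", "x-goog-api-key": url.apiKey },
      body,
    });
    if (!response.ok) {
      const message = await response.text();
      throw new Error(`[${response.status} ${response.statusText}] ${message}`);
    }
  } catch (e) {
    // Every failure, including a failed fetch, is expected to be re-thrown with this
    // "Error fetching from ..." prefix, which is absent from the logs in this issue.
    throw new Error(`Error fetching from ${url.toString()}: ${e.message}`);
  }
  return response;
}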
According to https://github.com/nodejs/undici/issues/1248, there should be a cause property on the error, if you have a way to catch it and inspect it.
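For example, a minimal sketch of catching the error yourself and printing that cause (assumes an API key in a GEMINI_API_KEY environment variable; the cause property is what undici attaches per the issue above):

const { GoogleGenerativeAI } = require("@google/generative-ai");

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-pro" });

async function runWithDiagnostics() {
  try {
    const result = await model.generateContent("Write a story about an AI.");
    console.log(result.response.text());
  } catch (err) {
    console.error(err.message);
    // On "fetch failed" errors, undici usually attaches the underlying network error here,
    // e.g. ECONNREFUSED, ETIMEDOUT, or a certificate/proxy problem.
    console.error("cause:", err.cause);
  }
}

runWithDiagnostics();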
For anyone who wants to make small modifications to the node_modules code that they think will fix it, there's always https://www.npmjs.com/package/patch-package.
@objectIsNotDefined Please revoke your API key. I edited the comment, but since it's leaked you need to invalidate it.
@iProgramme @lzghades @objectIsNotDefined from which region are you trying to access the Gemini API? Available regions: https://ai.google.dev/available_regions
Not important, but if you are running an async function from the main module, you should handle the promise rejection, e.g. like this:
run().catch((e) => {
console.error(e);
process.exit(1)
})
@xuzhiyang
I tried it; the only option was to modify index.js inside the SDK. First install undici (npm i undici), then change the code:

const { ProxyAgent } = require('undici');

async function makeRequest(url, body) {
  let response;
  try {
    const proxyHost = 'xxx';
    const proxyPort = xxx;
    const proxyUrl = `http://${proxyHost}:${proxyPort}`;
    const client = new ProxyAgent(proxyUrl);
    response = await fetch(url.toString(), {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "x-goog-api-client": getClientHeaders(),
        "x-goog-api-key": url.apiKey,
      },
      body,
      dispatcher: client
    });
    // ... the rest of makeRequest stays unchanged
@xuzhiyang @lzghades There are a lot of programs that do not use the system proxy (such as Node's fetch). In such cases, you need to use TUN mode or deploy a proxy on your router. It is also common to patch node_modules with tools like patch-package, yarn patch, or pnpm patch.
const { GoogleGenerativeAI } = require("@google/generative-ai"); // Access your API key as an environment variable (see "Set up your API key" above) const genAI = new GoogleGenerativeAI('xxxxxxxxxxxxxxxx'); async function run() { // For text-only input, use the gemini-pro model const model = genAI.getGenerativeModel({ model: "gemini-pro" }); const prompt = "Write a story about a magic backpack." const chat = await model.startChat(''); const result = await chat.sendMessageStream(prompt); let text = '' for await (const chunk of result.stream) { text += chunk.text(); } console.log(text); } run();
Try replacing:
const chat = await model.startChat("");
with:
const chat = model.startChat({});
Tried the suggested startChat({}) change, but I still got the same error.
I'm using it from China, and I still got the same error when using a VPN to proxy through Taiwan or Japan.
The VPN software I am using is Clash Verge, and I have selected a Japanese node. I have tried enabling TUN mode in the terminal once, and also attempted to enable TUN mode directly in Clash Verge (reference: a Clash Verge tutorial). However, I still encounter the same error.
What do you mean by "enabling TUN mode in the terminal once"? I feel you did not enable it at all. Anyway, I am confident that is the solution to your problem, and you should search for more tutorials about it, which is completely off-topic here. @iProgramme
So, I reported that error from China, since the Chinese region does not support accessing Google. Would using a VPN with TUN mode enabled actually make this work?
Let me repeat it once more: issues about the Great Firewall are not appropriate to raise in any repo, because the maintainers can do nothing about them and cannot solve them. What you need to do is use a proper proxy tool and proxy mode. As I said above, many programs and applications do not go through the system proxy, so an ordinary proxy has no effect on them, and that is exactly the situation here. TUN mode and router-level proxying are common lower-level transparent proxy approaches that can handle such cases. How to configure them, what environment issues you might run into, how to debug them, and so on go far beyond what I can explain in a few sentences, so I suggest you search and learn about it elsewhere.
OP, is your problem still not solved? Here is a fix.

For the Google API:

const { setGlobalDispatcher, ProxyAgent } = require("undici");
const dispatcher = new ProxyAgent({ uri: new URL(process.env.https_proxy).toString() });
// enable the proxy for all global fetch calls
setGlobalDispatcher(dispatcher);

And, by the way, the OpenAI equivalent:

const { HttpsProxyAgent } = require('https-proxy-agent');
const openai = new OpenAI({
  apiKey: 'XXXXXXXXX',
  httpAgent: new HttpsProxyAgent(process.env.https_proxy) // set the proxy
});

PS: remember to set the https_proxy variable in your shell:

export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890
It worked for me. My solution was to replace process.env.https_proxy directly with my own proxy address, http://127.0.0.1:7890. So, if this runs inside a project, you can also set the address in an environment variable at runtime to solve the problem.

The complete code is as follows:
const { GoogleGenerativeAI } = require("@google/generative-ai");
const { setGlobalDispatcher, ProxyAgent } = require("undici");
const dispatcher = new ProxyAgent({ uri: new URL('http://127.0.0.1:7890').toString() });
setGlobalDispatcher(dispatcher);
// Access your API key as an environment variable (see "Set up your API key" above)
const genAI = new GoogleGenerativeAI('xxxxxxxxxxxxx');
async function run() {
// For text-only input, use the gemini-pro model
const model = genAI.getGenerativeModel({ model: "gemini-pro"});
const prompt = "Write a story about a magic backpack."
const result = await model.generateContent(prompt);
const response = await result.response;
const text = response.text();
console.log(text);
}
run();
Reference: https://www.hehehai.cn/posts/set-nextjs-fetch-proxy-agent
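A small variant of the snippet above, in case you prefer reading the proxy address from an environment variable at runtime rather than hard-coding it (just a sketch, using the same undici ProxyAgent/setGlobalDispatcher approach):

const { setGlobalDispatcher, ProxyAgent } = require("undici");

// Only install the proxy dispatcher when a proxy is actually configured,
// so the same code also runs in environments without a proxy.
const proxyUrl = process.env.https_proxy || process.env.HTTPS_PROXY;
if (proxyUrl) {
  setGlobalDispatcher(new ProxyAgent(proxyUrl));
}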
@google/generative-ai@0.2.1 supports passing requestOptions to set request parameters. Although that update was mainly about supporting timeout, you can still pass other parameters:

import { GoogleGenerativeAI } from '@google/generative-ai'
import { ProxyAgent } from 'undici'

const genAI = new GoogleGenerativeAI(ENV.GEMINI_AI_API_KEY || '')
const geminiStream = await genAI
  .getGenerativeModel({ model: 'gemini-pro' }, {
    agent: ENV.LOCAL_FETCH_PROXY ? new ProxyAgent(ENV.LOCAL_FETCH_PROXY) : undefined,
  } as any)
  .generateContentStream(...)
In fact, the fetch options are currently not initialized with any parameter other than timeout: https://github.com/google/generative-ai-js/blob/c230732baa03ea19d146b598154ebbc5c87e1708/packages/main/src/requests/request.ts#L110-L119
I got a similar error too; is there any solution? (Still an error when using gemini-1.5-pro.) The error appears when I send PDF and video files (images work fine; audio isn't tested).
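For context, a minimal sketch of sending a PDF inline with gemini-1.5-pro (the file path and prompt are made up for illustration). Inline files are base64-encoded into the request body, so the request is far larger than a text-only prompt, which can make network or proxy limits show up as a bare "fetch failed":

const fs = require("fs");
const { GoogleGenerativeAI } = require("@google/generative-ai");

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-pro" });

async function describePdf() {
  // Inline data is base64-encoded, so the request body grows ~33% beyond the file size.
  const pdfBase64 = fs.readFileSync("./sample.pdf").toString("base64");
  const result = await model.generateContent([
    { text: "Summarize this document." },
    { inlineData: { data: pdfBase64, mimeType: "application/pdf" } },
  ]);
  console.log(result.response.text());
}

describePdf().catch((e) => {
  console.error(e.message, e.cause); // cause often holds the underlying network error
});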
I was facing the same issue. I switched from Node 18 to Node 20, and it started to work.
This repo supports a proxy feature: https://github.com/wujohns/google-generative-ai
My solution, which doesn't break the code structure, is to run a small local proxy server:
const http = require('http')
const httpProxy = require('http-proxy')
const { HttpsProxyAgent } = require('https-proxy-agent')

const proxy = httpProxy.createProxyServer()
const agent = new HttpsProxyAgent('http://127.0.0.1:7890')

// Register the error handler once, not per request
proxy.on('error', function (err) {
  console.log(err)
})

const server = http.createServer(function (req, res) {
  // req.headers.host = 'api.openai.com'
  req.headers.host = 'generativelanguage.googleapis.com'
  proxy.web(req, res, {
    target: 'https://generativelanguage.googleapis.com', // google
    // target: 'https://api.openai.com/v1', // openai
    agent,
  })
})

server.listen(8080, '0.0.0.0')
Then use it in the code:
const { GoogleGenerativeAI } = require("@google/generative-ai");
// Access your API key as an environment variable (see "Set up your API key" above)
const genAI = new GoogleGenerativeAI('xxxxxxxxxxxxxxxx');
async function run() {
// For text-only input, use the gemini-pro model
const model = genAI.getGenerativeModel({ model: "gemini-pro" }, { baseUrl:'http://127.0.0.1:8080' }); // setup baseUrl
const prompt = "Write a story about a magic backpack."
const chat = model.startChat({});
const result = await chat.sendMessageStream(prompt);
let text = ''
for await (const chunk of result.stream) {
text += chunk.text();
}
console.log(text);
}
run();