Closed kaboomskizzle closed 1 year ago
Did you create an index on app.pinecone.io?
Is server/.env.development populated with your Pinecone environment, API key, and index?
PINECONE_ENVIRONMENT="asia-southeast1-gcp-free"
PINECONE_API_KEY="YOUR_API_KEY"
PINECONE_INDEX="docs"
Are the rest of the vector database env vars commented out?
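A quick sanity check that these values actually reached the server process can help here. A minimal sketch (the key names are taken from the example above; `missingKeys` is just an illustrative helper, not part of AnythingLLM):

```javascript
// Sketch: report which of the expected Pinecone keys are missing or empty
// in a given environment object (e.g. process.env after dotenv has loaded).
function missingKeys(env, required) {
  return required.filter((key) => !env[key] || env[key].trim() === "");
}

const required = ["PINECONE_ENVIRONMENT", "PINECONE_API_KEY", "PINECONE_INDEX"];
const missing = missingKeys(process.env, required);
if (missing.length > 0) {
  console.error(`Missing env vars: ${missing.join(", ")}`);
}
```

Dropping something like this near server startup makes a half-populated .env file fail loudly instead of surfacing later as a 500.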
server/.env.development was not populated, just the .env. I have now added all the details to .env.development.
Yes, the rest of the vector database env vars are left commented out, as per the default.
GET http://localhost:3001/system/system-vectors
Status: 500 Internal Server Error
Version: HTTP/1.1
Transferred: 335 B (21 B size)
Referrer Policy: strict-origin-when-cross-origin
Did you restart the server after updating this information? Also make sure that only one VECTOR_DB key in server/.env.development is present and not commented out!
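One way to verify the "only one active VECTOR_DB key" condition is to count the uncommented assignments in the file. A minimal sketch (`activeVectorDbLines` is an illustrative helper, not part of the project):

```javascript
// Sketch: count uncommented VECTOR_DB assignments in an env file's text.
// Exactly one should remain active; commented lines start with '#'.
function activeVectorDbLines(envText) {
  return envText
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.startsWith("VECTOR_DB=")).length;
}

// Usage sketch (path taken from this thread):
// const fs = require("fs");
// const count = activeVectorDbLines(fs.readFileSync("server/.env.development", "utf8"));
// if (count !== 1) console.error(`Expected 1 active VECTOR_DB key, found ${count}`);
```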
Yes, I restarted the server.
Here is what I have in the .env.development file:
```
SERVER_PORT=3001
OPEN_AI_KEY=sk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
OPEN_MODEL_PREF='gpt-3.5-turbo'
CACHE_VECTORS="true"
VECTOR_DB="pinecone"
PINECONE_ENVIRONMENT=us-west4-gcp-free
PINECONE_API_KEY=xyzpdqXXXXXXXXXXXXXXXXXX
PINECONE_INDEX=se-internal-knowledge
```
The server console logged this after I tried sending a message:
yarn dev:server
yarn run v1.22.19
$ cd server && yarn dev
$ NODE_ENV=development nodemon --ignore documents --ignore vector-cache --trace-warnings index.js
[nodemon] 2.0.22
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node --trace-warnings index.js`
Example app listening on port 3001
SELECT * FROM workspaces
SELECT * FROM workspaces
SELECT * FROM workspaces WHERE slug = 'kaboomski-studios'
SELECT * FROM workspace_documents WHERE workspaceId = 1
SELECT * FROM workspaces WHERE slug = 'kaboomski-studios'
SELECT * FROM workspace_documents WHERE workspaceId = 1
SELECT * FROM workspace_chats WHERE workspaceId = 1 AND include = true ORDER BY id ASC
SELECT * FROM workspaces WHERE slug = 'kaboomski-studios'
SELECT * FROM workspace_documents WHERE workspaceId = 1
Request failed with status code 429
Error: Request failed with status code 429
at createError (/home/$Username/Documents/Apps/AnythingLLM/anything-llm-master/server/node_modules/axios/lib/core/createError.js:16:15)
at settle (/home/$Username/Documents/Apps/AnythingLLM/anything-llm-master/server/node_modules/axios/lib/core/settle.js:17:12)
at IncomingMessage.handleStreamEnd (/home/$Username/Documents/Apps/AnythingLLM/anything-llm-master/server/node_modules/axios/lib/adapters/http.js:322:11)
at IncomingMessage.emit (node:events:523:35)
at endReadableNT (node:internal/streams/readable:1367:12)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
config: {
transitional: {
silentJSONParsing: true,
forcedJSONParsing: true,
clarifyTimeoutError: false
},
adapter: [Function: httpAdapter],
transformRequest: [ [Function: transformRequest] ],
transformResponse: [ [Function: transformResponse] ],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: {
Accept: 'application/json, text/plain, */*',
'Content-Type': 'application/json',
'User-Agent': 'OpenAI/NodeJS/3.2.1',
Authorization: 'Bearer XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX',
'Content-Length': 21
},
method: 'post',
data: '{"input":"kaboomski"}',
url: 'https://api.openai.com/v1/moderations'
},
request: <ref 1> ClientRequest {
_events: [Object: null prototype] {
abort: [Function (anonymous)],
aborted: [Function (anonymous)],
connect: [Function (anonymous)],
error: [Function (anonymous)],
socket: [Function (anonymous)],
timeout: [Function (anonymous)],
finish: [Function: requestOnFinish]
},
_eventsCount: 7,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: true,
_last: false,
chunkedEncoding: false,
shouldKeepAlive: true,
maxRequestsOnConnectionReached: false,
_defaultKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
strictContentLength: false,
_contentLength: 21,
_hasBody: true,
_trailer: '',
finished: true,
_headerSent: true,
_closed: true,
socket: TLSSocket {
_tlsOptions: [Object],
_secureEstablished: true,
_securePending: false,
_newSessionPending: false,
_controlReleased: true,
secureConnecting: false,
_SNICallback: null,
servername: 'api.openai.com',
alpnProtocol: false,
authorized: true,
authorizationError: null,
encrypted: true,
_events: [Object: null prototype],
_eventsCount: 9,
connecting: false,
_hadError: false,
_parent: null,
_host: 'api.openai.com',
_closeAfterHandlingError: false,
_readableState: [ReadableState],
_maxListeners: undefined,
_writableState: [WritableState],
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: undefined,
_server: null,
ssl: [TLSWrap],
_requestCert: true,
_rejectUnauthorized: true,
timeout: 5000,
parser: null,
_httpMessage: null,
autoSelectFamilyAttemptedAddresses: [Array],
[Symbol(verified)]: true,
[Symbol(pendingSession)]: null,
[Symbol(async_id_symbol)]: -1,
[Symbol(kHandle)]: [TLSWrap],
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: Timeout {
_idleTimeout: 5000,
_idlePrev: [TimersList],
_idleNext: [Timeout],
_idleStart: 48441,
_onTimeout: [Function: bound ],
_timerArgs: undefined,
_repeat: null,
_destroyed: false,
[Symbol(refed)]: false,
[Symbol(kHasPrimitive)]: false,
[Symbol(asyncId)]: 370,
[Symbol(triggerId)]: 368
},
[Symbol(kBuffer)]: null,
[Symbol(kBufferCb)]: null,
[Symbol(kBufferGen)]: null,
[Symbol(kCapture)]: false,
[Symbol(kSetNoDelay)]: false,
[Symbol(kSetKeepAlive)]: true,
[Symbol(kSetKeepAliveInitialDelay)]: 1,
[Symbol(kBytesRead)]: 0,
[Symbol(kBytesWritten)]: 0,
[Symbol(connect-options)]: [Object]
},
_header: 'POST /v1/moderations HTTP/1.1\r\n' +
'Accept: application/json, text/plain, */*\r\n' +
'Content-Type: application/json\r\n' +
'User-Agent: OpenAI/NodeJS/3.2.1\r\n' +
'Authorization: Bearer XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\r\n' +
'Content-Length: 21\r\n' +
'Host: api.openai.com\r\n' +
'Connection: keep-alive\r\n' +
'\r\n',
_keepAliveTimeout: 0,
_onPendingData: [Function: nop],
agent: Agent {
_events: [Object: null prototype],
_eventsCount: 2,
_maxListeners: undefined,
defaultPort: 443,
protocol: 'https:',
options: [Object: null prototype],
requests: [Object: null prototype] {},
sockets: [Object: null prototype] {},
freeSockets: [Object: null prototype],
keepAliveMsecs: 1000,
keepAlive: true,
maxSockets: Infinity,
maxFreeSockets: 256,
scheduling: 'lifo',
maxTotalSockets: Infinity,
totalSocketCount: 1,
maxCachedSessions: 100,
_sessionCache: [Object],
[Symbol(kCapture)]: false
},
socketPath: undefined,
method: 'POST',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
joinDuplicateHeaders: undefined,
path: '/v1/moderations',
_ended: true,
res: IncomingMessage {
_readableState: [ReadableState],
_events: [Object: null prototype],
_eventsCount: 4,
_maxListeners: undefined,
socket: null,
httpVersionMajor: 1,
httpVersionMinor: 1,
httpVersion: '1.1',
complete: true,
rawHeaders: [Array],
rawTrailers: [],
joinDuplicateHeaders: undefined,
aborted: false,
upgrade: false,
url: '',
method: null,
statusCode: 429,
statusMessage: 'Too Many Requests',
client: [TLSSocket],
_consuming: false,
_dumped: false,
req: [Circular *1],
responseUrl: 'https://api.openai.com/v1/moderations',
redirects: [],
[Symbol(kCapture)]: false,
[Symbol(kHeaders)]: [Object],
[Symbol(kHeadersCount)]: 26,
[Symbol(kTrailers)]: null,
[Symbol(kTrailersCount)]: 0
},
aborted: false,
timeoutCb: null,
upgradeOrConnect: false,
parser: null,
maxHeadersCount: null,
reusedSocket: false,
host: 'api.openai.com',
protocol: 'https:',
_redirectable: Writable {
_writableState: [WritableState],
_events: [Object: null prototype],
_eventsCount: 3,
_maxListeners: undefined,
_options: [Object],
_ended: true,
_ending: true,
_redirectCount: 0,
_redirects: [],
_requestBodyLength: 21,
_requestBodyBuffers: [],
_onNativeResponse: [Function (anonymous)],
_currentRequest: [Circular *1],
_currentUrl: 'https://api.openai.com/v1/moderations',
[Symbol(kCapture)]: false
},
[Symbol(kCapture)]: false,
[Symbol(kBytesWritten)]: 0,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype] {
accept: [Array],
'content-type': [Array],
'user-agent': [Array],
authorization: [Array],
'content-length': [Array],
host: [Array]
},
[Symbol(errored)]: null,
[Symbol(kHighWaterMark)]: 16384,
[Symbol(kRejectNonStandardBodyWrites)]: false,
[Symbol(kUniqueHeaders)]: null
}, response: { status: 429, statusText: 'Too Many Requests', headers: { date: 'Mon, 12 Jun 2023 06:50:46 GMT', 'content-type': 'application/json', 'content-length': '184', connection: 'keep-alive', 'openai-version': '2020-10-01', 'openai-organization': 'user-5yfvaypb8qudlcc5jrh5xkyv', 'x-request-id': '5350903f79f4d432f5cc25aa9ed55dc6', 'openai-processing-ms': '186', 'strict-transport-security': 'max-age=15724800; includeSubDomains', 'cf-cache-status': 'DYNAMIC', server: 'cloudflare', 'cf-ray': '7d6029543869290b-DEN', 'alt-svc': 'h3=":443"; ma=86400' }, config: { transitional: [Object], adapter: [Function: httpAdapter], transformRequest: [Array], transformResponse: [Array], timeout: 0, xsrfCookieName: 'XSRF-TOKEN', xsrfHeaderName: 'X-XSRF-TOKEN', maxContentLength: -1, maxBodyLength: -1, validateStatus: [Function: validateStatus], headers: [Object], method: 'post', data: '{"input":"kaboomski"}', url: 'https://api.openai.com/v1/moderations' }, request: <ref 1> ClientRequest { _events: [Object: null prototype], _eventsCount: 7, _maxListeners: undefined, outputData: [], outputSize: 0, writable: true, destroyed: true, _last: false, chunkedEncoding: false, shouldKeepAlive: true, maxRequestsOnConnectionReached: false, _defaultKeepAlive: true, useChunkedEncodingByDefault: true, sendDate: false, _removedConnection: false, _removedContLen: false, _removedTE: false, strictContentLength: false, _contentLength: 21, _hasBody: true, _trailer: '', finished: true, _headerSent: true, _closed: true, socket: [TLSSocket], _header: 'POST /v1/moderations HTTP/1.1\r\n' + 'Accept: application/json, text/plain, */*\r\n' + 'Content-Type: application/json\r\n' + 'User-Agent: OpenAI/NodeJS/3.2.1\r\n' + 'Authorization: Bearer XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\r\n' + 'Content-Length: 21\r\n' + 'Host: api.openai.com\r\n' + 'Connection: keep-alive\r\n' + '\r\n', _keepAliveTimeout: 0, _onPendingData: [Function: nop], agent: [Agent], socketPath: undefined, method: 'POST',
maxHeaderSize: undefined, insecureHTTPParser: undefined, joinDuplicateHeaders: undefined, path: '/v1/moderations', _ended: true, res: [IncomingMessage], aborted: false, timeoutCb: null, upgradeOrConnect: false, parser: null, maxHeadersCount: null, reusedSocket: false, host: 'api.openai.com', protocol: 'https:', _redirectable: [Writable],
[Symbol(kBytesWritten)]: 0,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype],
[Symbol(errored)]: null,
[Symbol(kHighWaterMark)]: 16384,
[Symbol(kRejectNonStandardBodyWrites)]: false,
[Symbol(kUniqueHeaders)]: null
},
data: { error: [Object] }
},
isAxiosError: true,
toJSON: [Function: toJSON]
}
I asked Bard:
The error message "Request failed with status code 429" means that the request to the OpenAI API was rejected because you have exceeded the API's rate limit. The OpenAI API has a rate limit of 10 requests per second per user. This means that you can only make 10 requests per second to the OpenAI API. If you try to make more than 10 requests per second, your requests will be rejected with a status code of 429.
There are a few things you can do to avoid this error:
- Slow down your requests. If you are making a lot of requests to the OpenAI API, try slowing them down. You can do this by using a library like axios or request that allows you to set a timeout on your requests.
- Use a caching library. A caching library can help you reduce the number of requests you make to the OpenAI API, because it can store the results of previous requests and return them if the same request is made again.
- Use a proxy. A proxy can handle the requests on your behalf and only send them to the OpenAI API when necessary.

If you are still having trouble with this error, you can contact OpenAI support for help.
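For anyone landing here with a genuine rate-limit 429 (rather than a billing problem, as it turned out below), the "slow down your requests" advice can be sketched as a small retry wrapper with exponential backoff. This is illustrative, not part of AnythingLLM; `withRetry` and its options are made-up names, and the error shape assumed is axios's (`err.response.status`):

```javascript
// Sketch: retry an async request with exponential backoff whenever it
// rejects with an axios-style error whose HTTP status is 429.
async function withRetry(requestFn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await requestFn();
    } catch (err) {
      const status = err.response && err.response.status;
      if (status !== 429 || attempt >= retries) throw err; // give up
      const delay = baseDelayMs * 2 ** attempt; // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage sketch: wrap any axios call, e.g.
// const res = await withRetry(() => axios.post(url, body, { headers }));
```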
Figured I had made a noob mistake myself with Axios or something, since I'm a bit rusty with JS coding... Thanks for helping me solve it! Kaboom.ski
Sorry for my silly fail here, guys! Maybe it will help someone else in the future if they make the same mistake.
The problem was completely my fault; nothing was wrong with the code in this case. I had used an API key from an unpaid OpenAI account, which apparently caused OpenAI to reject the requests made via Axios with a 429.
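Worth noting for future readers: OpenAI returns 429 for both true rate limiting and exhausted quota (the unpaid-account case above), and the error body distinguishes them. A sketch of how one might tell them apart; the `insufficient_quota` code is taken from OpenAI's documented error codes, and `describe429` is an illustrative helper:

```javascript
// Sketch: classify an OpenAI 429 by the `code` field in the error body.
// `insufficient_quota` means billing/quota, not request rate.
function describe429(errorBody) {
  const code = errorBody && errorBody.error && errorBody.error.code;
  if (code === "insufficient_quota") {
    return "Account has no available quota: check billing, not request rate.";
  }
  return "Rate limited: slow down or retry with backoff.";
}

// Usage sketch with an axios error:
// if (err.response && err.response.status === 429) {
//   console.error(describe429(err.response.data));
// }
```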
@kaboomskizzle you leaked a key up there, you may want to nuke that key now.
Thanks!
Not sure exactly how to resolve this. Tried a few things. Even the style doesn't seem to load properly, but I was able to create a workspace.
Console error on server:
yarn dev:server
yarn run v1.22.19
$ cd server && yarn dev
$ NODE_ENV=development nodemon --ignore documents --ignore vector-cache --trace-warnings index.js
[nodemon] 2.0.22
[nodemon] to restart at any time, enter `rs`
[nodemon] watching path(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node --trace-warnings index.js`
Example app listening on port 3001
SELECT * FROM workspaces
Failed getting project name. TypeError: fetch failed
[PineconeError: Failed getting project name. TypeError: fetch failed]
Failed getting project name. TypeError: fetch failed
[PineconeError: Failed getting project name. TypeError: fetch failed]
SELECT * FROM workspaces
Browser Error:
GET http://localhost:3001/system/system-vectors
Status: 500 Internal Server Error
Version: HTTP/1.1
Transferred: 335 B (21 B size)
Referrer Policy: strict-origin-when-cross-origin
Tried a few workarounds, but this seems to be where I keep getting stuck, on Linux Mint.
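Since "TypeError: fetch failed" here happens before any Pinecone response arrives, it usually points at DNS, TLS, or proxy trouble reaching the Pinecone host rather than bad credentials. A minimal probe of the environment's controller host might look like this; the `controller.<environment>.pinecone.io/databases` endpoint shape is an assumption based on Pinecone's 2023-era REST API, and `pineconeControllerUrl` is an illustrative helper:

```javascript
// Sketch: build the URL Pinecone's environment-scoped controller served
// for listing indexes (endpoint shape is an assumption, see lead-in).
function pineconeControllerUrl(environment) {
  return `https://controller.${environment}.pinecone.io/databases`;
}

// Usage sketch: a plain fetch to rule out DNS/TLS problems that surface
// as "TypeError: fetch failed" before any Pinecone error is returned.
// fetch(pineconeControllerUrl(process.env.PINECONE_ENVIRONMENT), {
//   headers: { "Api-Key": process.env.PINECONE_API_KEY },
// }).then((res) => console.log(res.status));
```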
Thanks for all your hard work! Kaboom.ski