run-llama / chat-llamaindex

https://chat.llamaindex.ai
MIT License

[Bug] May I know why there is a BadRequestError: 400 when I run "npm run generate"? #92

Open EuniceFoo533 opened 5 months ago

EuniceFoo533 commented 5 months ago

I get this output when I run `npm run generate`:

```
BadRequestError: 400 This model's maximum context length is 8192 tokens, however you requested 23869 tokens (23869 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
    at APIError.generate (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/openai@4.37.1_encoding@0.1.13/node_modules/openai/error.mjs:41:20)
    at OpenAI.makeStatusError (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/openai@4.37.1_encoding@0.1.13/node_modules/openai/core.mjs:256:25)
    at OpenAI.makeRequest (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/openai@4.37.1_encoding@0.1.13/node_modules/openai/core.mjs:299:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async OpenAIEmbedding.getOpenAIEmbedding (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/llamaindex@0.1.18_@google+generative-ai@0.1.3_encoding@0.1.13_typescript@5.1.6/node_modules/llamaindex/dist/embeddings/OpenAIEmbedding.js:82:26)
    at async OpenAIEmbedding.getTextEmbeddings (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/llamaindex@0.1.18_@google+generative-ai@0.1.3_encoding@0.1.13_typescript@5.1.6/node_modules/llamaindex/dist/embeddings/OpenAIEmbedding.js:93:16)
    at async OpenAIEmbedding.getTextEmbeddingsBatch (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/llamaindex@0.1.18_@google+generative-ai@0.1.3_encoding@0.1.13_typescript@5.1.6/node_modules/llamaindex/dist/embeddings/types.js:32:36)
    at async VectorStoreIndex.getNodeEmbeddingResults (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/llamaindex@0.1.18_@google+generative-ai@0.1.3_encoding@0.1.13_typescript@5.1.6/node_modules/llamaindex/dist/indices/vectorStore/index.js:89:28)
    at async VectorStoreIndex.insertNodes (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/llamaindex@0.1.18_@google+generative-ai@0.1.3_encoding@0.1.13_typescript@5.1.6/node_modules/llamaindex/dist/indices/vectorStore/index.js:189:34)
    at async VectorStoreIndex.buildIndexFromNodes (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/llamaindex@0.1.18_@google+generative-ai@0.1.3_encoding@0.1.13_typescript@5.1.6/node_modules/llamaindex/dist/indices/vectorStore/index.js:109:9)
    at async VectorStoreIndex.init (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/llamaindex@0.1.18_@google+generative-ai@0.1.3_encoding@0.1.13_typescript@5.1.6/node_modules/llamaindex/dist/indices/vectorStore/index.js:55:13)
    at async VectorStoreIndex.fromDocuments (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/llamaindex@0.1.18_@google+generative-ai@0.1.3_encoding@0.1.13_typescript@5.1.6/node_modules/llamaindex/dist/indices/vectorStore/index.js:132:16)
    at async file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:37:5
    at async getRuntime (file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:22:3)
    at async generateDatasource (file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:30:14)
    at async file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:86:3
```
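Note that the failing call in this trace is the embedding request (`OpenAIEmbedding.getTextEmbeddingsBatch`), not a chat completion: a single node of 23869 tokens was sent to an embedding model whose context limit is 8192 tokens. That usually means one document in the datasource was not split into small enough chunks before embedding. Below is a minimal sketch of one possible fix, assuming the `serviceContextFromDefaults` API of LlamaIndexTS 0.1.x; the directory path and chunk values are illustrative, not taken from this repo's actual `scripts/generate.mjs`:

```js
// Sketch: force smaller chunks so no single node exceeds the
// embedding model's 8192-token context window.
import {
  SimpleDirectoryReader,
  VectorStoreIndex,
  serviceContextFromDefaults,
} from "llamaindex";

// Assumption: chunkSize/chunkOverlap are in tokens; 512/20 are common
// defaults, comfortably under the 8192-token embedding limit.
const serviceContext = serviceContextFromDefaults({
  chunkSize: 512,
  chunkOverlap: 20,
});

// Hypothetical datasource folder; use the bot's actual datasource path.
const documents = await new SimpleDirectoryReader().loadData({
  directoryPath: "./datasources/documents",
});

// Pass the service context so the splitter and embedder respect the limits.
await VectorStoreIndex.fromDocuments(documents, { serviceContext });
```

If the oversized node comes from a file the sentence splitter cannot break up (for example minified JSON or a long CSV row), reducing the chunk size alone may not help; converting or pre-splitting that file would then be needed.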

EuniceFoo533 commented 5 months ago

[screenshot attachment; no text content]