Closed: AndreMaz closed this issue 2 months ago.
@AndreMaz some context to help debug this issue: `withLlamaIndex` adds some parameters needed to bundle transformers.js. You can retrieve them from the source: https://github.com/run-llama/LlamaIndexTS/blob/feat/build-wasm-with-extism/packages/llamaindex/src/next.ts. The ones for transformers.js are taken from https://huggingface.co/docs/transformers.js/en/tutorials/next
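For reference, the webpack part of that tutorial boils down to aliasing away Node-only optional dependencies, roughly like the sketch below. This is only an illustration of the kind of parameters involved; the exact ones `withLlamaIndex` applies are in the linked next.ts source.

```js
// next.config.mjs sketch based on the transformers.js Next.js tutorial;
// the exact parameters withLlamaIndex sets are in the linked next.ts source.
/** @type {import('next').NextConfig} */
const nextConfig = {
  webpack: (config) => {
    // Stub out Node-only optional dependencies so webpack can bundle transformers.js
    config.resolve.alias = {
      ...config.resolve.alias,
      sharp$: false,
      "onnxruntime-node$": false,
    };
    return config;
  },
};

export default nextConfig;
```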
Some ideas:
Hi @marcusschiesser, thank you for the feedback.
- Have you tried using npm? I remember there was an issue with pnpm; that's why we added
No, I haven't, because I'm relying on pnpm's features. But yeah, those lines allow me to execute `pnpm build` (under the hood it's `next build`) and `pnpm run dev` (i.e., `next dev`).
However, the configs present in `withLlamaIndex()` are not enough to run https://github.com/run-llama/LlamaIndexTS/tree/main/packages/llamaindex/e2e/examples/nextjs-node-runtime with @next/bundle-analyzer.
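For context, stacking the analyzer on top of `withLlamaIndex` normally looks roughly like this. This is a minimal sketch: the `ANALYZE` env var name and the `llamaindex/next` import path are assumptions for illustration (my actual setup wraps the plugin in `@web/chatbot/next`, as shown further down).

```js
// next.config.mjs sketch of stacking the two plugins; the ANALYZE env var
// name and the llamaindex/next import path are assumptions for illustration.
import bundleAnalyzer from "@next/bundle-analyzer";
import withLlamaIndex from "llamaindex/next";

const withBundleAnalyzer = bundleAnalyzer({
  enabled: process.env.ANALYZE === "true",
});

/** @type {import('next').NextConfig} */
const nextConfig = {};

export default withBundleAnalyzer(withLlamaIndex(nextConfig));
```

Running `ANALYZE=true next build` should then emit the analyzer report; in my setup the build fails before getting that far.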
Not entirely related to the issue above, but still: do you have any ideas or pointers about how to solve the `Missing tiktoken_bg.wasm` issue when the app is deployed to Vercel?
```
Error: Missing tiktoken_bg.wasm
at 50981 (/var/task/apps/web/.next/server/chunks/5348.js:40:137192)
at t (/var/task/apps/web/.next/server/webpack-runtime.js:1:143)
at 472 (/var/task/apps/web/.next/server/chunks/5348.js:42:9680)
at t (/var/task/apps/web/.next/server/webpack-runtime.js:1:143)
at 55963 (/var/task/apps/web/.next/server/chunks/5348.js:40:147021)
at t (/var/task/apps/web/.next/server/webpack-runtime.js:1:143)
at 55900 (/var/task/apps/web/.next/server/chunks/5348.js:46:112192)
at t (/var/task/apps/web/.next/server/webpack-runtime.js:1:143)
at 85879 (/var/task/apps/web/.next/server/app/api/chat/route.js:1:4919)
at t (/var/task/apps/web/.next/server/webpack-runtime.js:1:143) {
page: '/api/chat'
}
```
I don't use `tiktoken` for anything, so I'm assuming it must be a `llamaindex` dependency.
I've tried this:
But so far, no luck.
> Not entirely related to the issue above, but still: do you have any ideas or pointers about how to solve the `Missing tiktoken_bg.wasm` issue when the app is deployed to Vercel?
Are you using the edge runtime?
@AndreMaz `Missing tiktoken_bg.wasm` means that the WASM file from tiktoken is missing; that's actually handled by this line: https://github.com/run-llama/LlamaIndexTS/blob/b622f498456f5c2d0497156724daf97cfdb4c767/packages/llamaindex/src/next.ts#L24
Are you using `withLlamaIndex`?
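Roughly speaking, that line traces tiktoken's WASM file into the server output. The sketch below only illustrates the idea; the route key and glob are illustrative, and the exact code is in the linked next.ts.

```js
// Rough sketch of what withLlamaIndex does for tiktoken (based on the linked
// next.ts line; the route key and glob here are illustrative, not verbatim).
export default function withLlamaIndex(config) {
  config.experimental = config.experimental ?? {};
  config.experimental.outputFileTracingIncludes =
    config.experimental.outputFileTracingIncludes ?? {};
  // Trace tiktoken's WASM into the serverless function output so the file
  // is available on disk at runtime.
  config.experimental.outputFileTracingIncludes["/*"] = [
    "./node_modules/tiktoken/*.wasm",
  ];
  return config;
}
```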
@himself65 I'm using the Node.js runtime, which according to the docs:
> The Node.js runtime takes an entrypoint of a Node.js function, builds its dependencies (if any) and bundles them into a Serverless Function.
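For completeness, this is the standard way to pin a route to the Node.js runtime in the App Router (a sketch; the file path is taken from the `/api/chat` route in the stack trace above, and the handler body is just a placeholder):

```js
// app/api/chat/route.js (path taken from the stack trace; handler is a placeholder)
// Standard Next.js route segment config: pin this route to the Node.js
// runtime instead of the Edge runtime.
export const runtime = "nodejs";

export async function POST(request) {
  // ...chat handling with llamaindex would go here
  return Response.json({ ok: true });
}
```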
@marcusschiesser yep, I do. Here's what `next.config.mjs` looks like after applying all the possible configs:
```js
import path from "path";
import { fileURLToPath } from "url";
import _jiti from "jiti";
import { withLlamaIndex } from "@web/chatbot/next";
const jiti = _jiti(fileURLToPath(import.meta.url));
// Import env files to validate at build time. Use jiti so we can load .ts files in here.
jiti("./src/env");
const isStaticExport = "false";
// Get __dirname equivalent for ES modules
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
/**
* @type {import("next").NextConfig}
*/
const nextConfig = {
basePath: process.env.NEXT_PUBLIC_BASE_PATH,
serverRuntimeConfig: {
PROJECT_ROOT: __dirname,
},
env: {
BUILD_STATIC_EXPORT: isStaticExport,
},
// Trailing slashes must be disabled for Next Auth callback endpoint to work
// https://stackoverflow.com/a/78348528
trailingSlash: false,
modularizeImports: {
"@mui/icons-material": {
transform: "@mui/icons-material/{{member}}",
},
"@mui/material": {
transform: "@mui/material/{{member}}",
},
"@mui/lab": {
transform: "@mui/lab/{{member}}",
},
},
webpack(config) {
config.module.rules.push({
test: /\.svg$/,
use: ["@svgr/webpack"],
});
// To allow chatbot to work
// Extracted from: https://github.com/neondatabase/examples/blob/main/ai/llamaindex/rag-nextjs/next.config.mjs
config.resolve.alias = {
...config.resolve.alias,
sharp$: false,
"onnxruntime-node$": false,
};
// From: https://github.com/dqbd/tiktoken?tab=readme-ov-file#nextjs
config.experiments = {
asyncWebAssembly: true,
layers: true,
};
// turn off static file serving of WASM files
// we need to let Webpack handle WASM import
// From: https://github.com/dqbd/tiktoken?tab=readme-ov-file#create-react-app
config.module.rules
.find((i) => "oneOf" in i)
.oneOf.find((i) => i.type === "asset/resource")
?.exclude.push(/\.wasm$/);
return config;
},
...(isStaticExport === "true" && {
output: "export",
}),
experimental: {
outputFileTracingIncludes: {
"/*": ["./cache/**/*"],
"/api/**/*": [
"node_modules/tiktoken/tiktoken_bg.wasm",
"node_modules/tiktoken/lite/tiktoken_bg.wasm",
"node_modules/tiktoken/tiktoken_bg.wasm?module",
"node_modules/tiktoken/lite/tiktoken_bg.wasm?module",
// From: https://github.com/run-llama/LlamaIndexTS/blob/b622f498456f5c2d0497156724daf97cfdb4c767/packages/llamaindex/src/next.ts#L24
"./node_modules/tiktoken/*.wasm",
],
},
},
/** Enables hot reloading for local packages without a build step */
transpilePackages: [
"@web/api",
"@web/auth",
"@web/db",
"@web/ui",
"@web/validators",
"@web/services",
"@web/utils",
"@web/logger",
"@web/certs",
"@web/chatbot",
],
/** We already do linting and typechecking as separate tasks in CI */
eslint: { ignoreDuringBuilds: true },
typescript: { ignoreBuildErrors: true },
};
// Add llamaindex defaults to nextConfig
const withLlamaIndexConfig = withLlamaIndex(nextConfig);
export default withLlamaIndexConfig;
```
Note: I've moved everything related to llamaindex into a separate package, `@web/chatbot`. That's why `withLlamaIndex` is imported from `@web/chatbot/next`.
Also, here's what my `package.json` at `@web/chatbot` looks like:
```json
{
"name": "@web/chatbot",
"private": true,
"version": "0.1.0",
"type": "module",
"exports": {
".": "./src/index.ts",
"./next": "./src/with-lama-index.mjs"
},
"license": "MIT",
"scripts": {
"clean": "rm -rf .turbo node_modules",
"format": "prettier --check . --ignore-path ../../.gitignore --ignore-path ../../.prettierignore",
"lint": "eslint .",
"typecheck": "tsc --emitDeclarationOnly"
},
"devDependencies": {
"@web/eslint-config": "workspace:*",
"@web/prettier-config": "workspace:*",
"@web/tsconfig": "workspace:*",
"@web/utils": "workspace:*",
"eslint": "catalog:",
"prettier": "catalog:",
"typescript": "catalog:"
},
"prettier": "@web/prettier-config",
"dependencies": {
"@web/logger": "workspace:*",
"@t3-oss/env-nextjs": "catalog:",
"js-tiktoken": "^1.0.14",
"llamaindex": "catalog:",
"pg": "^8.13.0",
"tiktoken": "^1.0.16"
}
}
```
How about adding `onnxruntime-node` to `serverExternalPackages`?
@himself65 thank you! Adding `serverComponentsExternalPackages: ["tiktoken", "onnxruntime-node"]` did the trick.
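For anyone landing here later: on Next.js 14 that option sits under `experimental` (Next.js 15 renamed it to the top-level `serverExternalPackages`), so the relevant part of the config ends up roughly like this sketch, not the full config above:

```js
// next.config.mjs sketch: where the option goes on Next.js 14.
// On Next.js 15 the equivalent is the top-level `serverExternalPackages`.
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    // Keep these packages out of the webpack bundle and resolve them from
    // node_modules at runtime, so tiktoken_bg.wasm is found on disk.
    serverComponentsExternalPackages: ["tiktoken", "onnxruntime-node"],
  },
};

export default nextConfig;
```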
Cool, let me add that to our Next.js config.
Hi, I'm currently trying to debug the issues that I've been seeing lately (more context: https://github.com/run-llama/LlamaIndexTS/issues/1179).
I want to check what's being bundled into my app and whether, for some weird reason, both ESM and CJS versions are being included. To do that, I tried to install @next/bundle-analyzer, but I'm getting an error.
I also tried to run the nextjs-node-runtime example from this repo but got exactly the same error.
Here's the updated `next.config.mjs`, and here are the error logs:
Any idea how to solve this?