Closed: AhmedShehata98 closed this issue 1 year ago.
Maybe related to #51605
Please verify that your issue can be recreated with next@canary.

Why was this issue marked with the please verify canary label?

We noticed the provided reproduction was using an older version of Next.js, instead of canary.

The canary version of Next.js ships daily and includes all features and fixes that have not been released to the stable version yet. You can think of canary as a public beta. Some issues may already be fixed in the canary version, so please verify that your issue reproduces by running npm install next@canary and testing it in your project, using your reproduction steps.

If the issue does not reproduce with the canary version, then it has already been fixed and this issue can be closed.

How can I quickly verify if my issue has been fixed in canary?

The safest way is to install next@canary in your project and test it, but you can also search through closed Next.js issues for duplicates or check the Next.js releases. You can also use the GitHub templates (preferred) for App Router and Pages Router, or the CodeSandbox: App Router or CodeSandbox: Pages Router templates to create a reproduction with canary from scratch.

Why was my issue not fixed in an older version of Next.js, only in canary now?

Next.js does not backport bug fixes to older versions of Next.js. Instead, we are trying to introduce only a minimal amount of breaking changes between major releases.

An issue with the please verify canary label that receives no meaningful activity (e.g. new comments that acknowledge verification against canary) will be automatically closed and locked after 30 days.

If your issue has not been resolved in that time and it has been closed/locked, please open a new issue, with the required reproduction, using next@canary.

Anyone experiencing the same issue is welcome to provide a minimal reproduction following the above steps. Furthermore, you can upvote the issue using the :+1: reaction on the topmost comment (please do not comment "I have the same issue" without reproduction steps). Then, we can sort issues by votes to prioritize.

We look into every Next.js issue and constantly monitor open issues for new comments. However, sometimes we might miss one or two due to the popularity/high traffic of the repository. We apologize, and kindly ask you to refrain from tagging core maintainers, as that will usually not result in increased priority.

Upvoting issues to show your interest will help us prioritize and address them as quickly as possible. That said, every issue is important to us, and if an issue gets closed by accident, we encourage you to open a new one linking to the old issue and we will look into it.
This is not a new issue. It has been happening since roughly 13.2.x, but it seems to have gotten worse since 13.4.19. For me on Windows, I'm now getting one crash per day in my development environment. In the past the Next.js server would restart itself, which was nice, but now I have to restart it manually. It also seems to be less of a problem on Mac. If you search, there might be some tips in similar topics. In general, development on Windows with Next 13 is far from optimal.
I have the same problem; in my case it happens when I'm sending files (5-20 MB) to Supabase Storage.
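// Assumed context: storageClient is a Supabase Storage client, `file` is a browser File (the 5-20 MB video) and fileExt its extension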
const uploadResult = await storageClient
.from(STORAGE_BUCKET)
.upload(filePath, file, {
contentType: `video/${fileExt}`,
upsert: true
});
TypeError: fetch failed
at Object.fetch (node:internal/deps/undici/undici:11522:11) {
cause: Error: write EPIPE
at WriteWrap.onWriteComplete [as oncomplete] (node:internal/stream_base_commons:94:16)
at WriteWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
errno: -32,
code: 'EPIPE',
syscall: 'write'
}
}
Confirming that this has been driving me nuts lately.
Really frustrating building in Next.js lately.
@balazsorban44 I'm using canary btw
"next": "^13.4.20-canary.15",
The problem seems to have gotten worse when Next.js got rid of node-fetch (compiled) and switched to using undici exclusively; that happened in v13.4.x.
What may also be useful is a benchmark for the performance of Next.js API routes. Is there any information about, for example, the request rate the API routes can sustain? In the end, an API is an API.
Anyone saying "same issue", or not posting helpful comments to debug, please note that adding a reproduction link helps us verify/investigate more than anything. Stating your Next.js version in itself or an out-of-context code snippet is not very helpful. :pray: Please comment your reproduction repository instead, we would like to investigate!
I added the please verify canary label because the linked reproduction is on 13.3.1, which is closing in on being a half-year-old version. We've introduced many improvements since then. The reproduction is also pretty complex, and it is not clear from the reproduction steps how to recreate the issue. @AhmedShehata98 do you have some specific steps you could share?
@cosieLq undici is the de facto fetch implementation that powers Node.js 18+. In those cases, we don't polyfill at all and just rely on the underlying runtime, so this would indicate an upstream bug. Do you have a codebase we could look at to verify what the issue is that you are seeing "getting worse"?
For anyone, upgrading Node.js might be a good start to verify if it's related or not. :+1:
Adding a link to a Git project which can replicate the same issue (Git Project). I tested it using Node 16 and 18. It may require some test Supabase credentials to work, but it does the job of replicating the issue. Error:
TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11576:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
And another Medium link where a similar issue has been explained and a fix "tried": https://medium.com/@kaloyan_17221/fix-vercel-next-js-fetch-failed-from-undici-polyfill-8c66346c9c2f
@balazsorban44 I don't have a codebase to prove that it is indeed "getting worse". It's just my guess.
I'm working on a project with Next.js v13.2.4. We were trying to upgrade it to v13.4.x, and then in the production environment the "TypeError: fetch failed" showed up so often (~1 per minute) that we had to roll it back.
After a bit of searching, I found these issues: #51605 #53353, thinking that they are somehow related to our problem.
In our case, the error codes differ ("ECONNREFUSED" and "ECONNRESET"). They occur after calling our Next.js API routes.
I'm not sure if it's an (upstream) bug or not. It has something to do with load: if I call an API route at a high rate (with curl, for example), the "TypeError: fetch failed" logs also appear with Next.js v13.2.4, but in production we haven't seen it nearly as often as with v13.4.x.
That's why I'm also curious about the performance of Next.js API routes. Are there any guidelines on this? Because of these errors, we aren't able to upgrade Next.js and leverage the app dir.
A reproduction that works consistently for me:
1. Clone the repo on the feature/steer-integration branch
2. pnpm i
3. export NEXT_PUBLIC_POOLS_API_V0_BASE_URL=https://pools-git-feature-steer-integration.sushi.com
4. pnpm exec turbo run dev --filter=evm
5. Go to http://localhost:3000/pool/137:0x21988c9cfd08db3b5793c2c6782271dc94749251/smart
6. Open the file sushiswap/apps/evm/ui/pool/Steer/SteerLiquidityDistributionWidget/SteerLiquidityInRangeChip.tsx
7. Change Range on line 47 to Rangea and back to Range a couple of times; it happens faster one time and slower the other
Might have something to do with a client component under a server component? Wild guess.
Noticed this line in the log a couple seconds before it started erroring out (both times I tried this repro):
evm:dev: - warn The server is running out of memory, restarting to free up memory.
I've got 16GB of memory.
Finally, the error itself:
evm:dev: TypeError: fetch failed
evm:dev: at Object.fetch (node:internal/deps/undici/undici:11576:11)
evm:dev: at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
evm:dev: at async invokeRequest (/home/lufy/sushi/sushiswap/node_modules/.pnpm/next@13.4.19_biqbaboplfbrettd7655fr4n2y/node_modules/next/dist/server/lib/server-ipc/invoke-request.js:17:12)
evm:dev: at async invokeRender (/home/lufy/sushi/sushiswap/node_modules/.pnpm/next@13.4.19_biqbaboplfbrettd7655fr4n2y/node_modules/next/dist/server/lib/router-server.js:254:29)
evm:dev: at async handleRequest (/home/lufy/sushi/sushiswap/node_modules/.pnpm/next@13.4.19_biqbaboplfbrettd7655fr4n2y/node_modules/next/dist/server/lib/router-server.js:447:24)
evm:dev: at async requestHandler (/home/lufy/sushi/sushiswap/node_modules/.pnpm/next@13.4.19_biqbaboplfbrettd7655fr4n2y/node_modules/next/dist/server/lib/router-server.js:464:13)
evm:dev: at async Server.<anonymous> (/home/lufy/sushi/sushiswap/node_modules/.pnpm/next@13.4.19_biqbaboplfbrettd7655fr4n2y/node_modules/next/dist/server/lib/start-server.js:117:13) {
evm:dev: cause: Error: connect ECONNREFUSED 127.0.0.1:41375
evm:dev: at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16) {
evm:dev: errno: -111,
evm:dev: code: 'ECONNREFUSED',
evm:dev: syscall: 'connect',
evm:dev: address: '127.0.0.1',
evm:dev: port: 41375
evm:dev: }
evm:dev: }
Yesterday I tried the latest canary version with Node 18 and it worked.
For some reason my repository randomly stalled out and started having this issue in dev. I haven't been able to get it to work since.
@balazsorban44 I wish I could provide more details; I basically found it difficult to work in dev mode. I had upgraded the Next.js version from 13.4.10 to 13.4.19, would consistently get the above error, and my dev environment became unbearably slow.
I downgraded to 13.4.13 and it's been stable since.
I might test out the canary version once I roll my current implementation out.
Hopefully someone else will have provided more context before then.
Hi,
Just wanted to chime in to say we're experiencing the same symptoms in our application. We recently upgraded Next.js from 13.4.7 to 13.4.19. Node version 18.7.1.
We see a steady increase in memory usage, culminating in a "The server is running out of memory, restarting to free up memory." log message. After that the app starts returning an internal server error and logging the same "TypeError: fetch failed" stack trace as above over and over. It requires a restart to get the app running again.
I'm going to try rolling back next.js and let it run for a day or two.
This problem happens with next start in production (Ubuntu 20.20, Node.js v18.17.1).
It happens randomly when our server is busy; once it happens, the Next.js server can't continue to work, so we have to restart it.
Fastify servers work fine; it only happens with Next.js servers.
P.S. Downgrading to "next": "13.4.12" works fine for me.
Updated from 13.4.10 to 13.4.19 and now get this error
This issue seems pretty similar:
This issue keeps happening with the latest ^13.4.19 version on my M1 machine in development mode; it happens randomly.
For all the people who have been reporting that it happens with Next.js v13.4.19, please upgrade to the latest canary on the Releases page (currently 13.4.20-canary.23
) and try again - this will be helpful to see if the problem still occurs
(I am also experiencing this issue on the latest canary personally, but would be good to get some other data points)
I recently switched to version 13.4.20-canary.23 and encountered an unexpected type error that read "X TypeError [ERR_INVALID_STATE]: Invalid state: ReadableStream is already closed". I'm unsure what caused this error, as there is no backend portion to my Next.js project.
cross-posting in case it helps:
We also encountered similar 'out of memory', 'fetch' errors and eventual crashes (13.4.19).
Disabling image optimization solved this for us and server memory in production has become more stable.
Just add images: { unoptimized: true } to your Next.js configuration (see the sketch below).
Eventually we do want to benefit from image optimization, but at this point the memory footprint is simply too high.
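For anyone wanting to try the same thing, a minimal sketch of that setting in next.config.js (assuming a CommonJS config file; adjust if you use next.config.mjs):

// next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  images: {
    // Bypass the built-in image optimizer; next/image will serve the source files as-is.
    unoptimized: true,
  },
};

module.exports = nextConfig;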
Hello @karlhorky, I have been facing the same issue with Next.js v13.4.19 (Node.js v18.16.0). It is happening frequently; the server restarts within 5 minutes of starting. I am new to Next.js; how do I upgrade to the canary release?
See here for updating to the canary release. Or you can directly replace the version of next, e.g. "next": "13.4.20-canary.24", in your package.json file and then rerun npm i to update.
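For clarity, a minimal sketch of what the pinned canary version looks like in package.json (only the relevant part is shown; the react/react-dom versions are just the ones mentioned elsewhere in this thread):

{
  "dependencies": {
    "next": "13.4.20-canary.24",
    "react": "18.2.0",
    "react-dom": "18.2.0"
  }
}

After changing the version, rerun your package manager's install command so the lockfile picks up the pinned canary release.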
Okay, Thank you @Elvincth
For anyone who is getting this issue, the workaround is to use next@13.4.12. Another workaround, and the reason why this is happening: the fetch URL protocol gets converted to https in production with localhost. You can reproduce this in a dev environment using cloudflared: run pnpm dev in one tab, tunnel it with cloudflared, and log req.nextUrl.origin somewhere in your middleware.ts file. It will log https://localhost:..., which is the cause of this error if you're using req.nextUrl anywhere in a server-side fetch.
So to solve this, either stay on next@13.4.12 or use http://${req.nextUrl.host} as a temporary workaround (see the sketch below).
Edit: With Vercel hosting, it's working fine though.
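A rough TypeScript sketch of the http://${req.nextUrl.host} workaround, assuming an App Router route handler; the /api/data path and the internal /api/health call are made up purely for illustration:

// app/api/data/route.ts (illustrative file path)
import { NextRequest, NextResponse } from 'next/server';

export async function GET(req: NextRequest) {
  // req.nextUrl.origin can report https://localhost:... behind a tunnel/proxy,
  // which makes the server-side fetch fail; rebuild the origin with plain http instead.
  const origin = `http://${req.nextUrl.host}`;
  const res = await fetch(`${origin}/api/health`); // hypothetical internal endpoint
  return NextResponse.json(await res.json());
}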
I recently switched to version 13.4.20-canary.23 and encountered an unexpected type error that read "X TypeError [ERR_INVALID_STATE]: Invalid state: ReadableStream is already closed". I'm unsure of what caused this error, as there is no backend portion to my next.js project.
Same here. I'm having an issue with high memory consumption in a K8s environment using the app router: regardless of the amount of memory I set on a pod, it always hits 95~97% memory usage. I tried disabling image optimization, removing sharp, and upgrading Node as recommended, but only after upgrading Next.js to 13.4.20-canary.23 did the memory usage improve to 80~85%. However, it crashed several fetch requests and produced unexpected errors like this one, so I rolled back to 13.4.19 and no more errors appeared.
"next": "13.4.20-canary.24"
Updated to this version; I haven't experienced another crash yet, and dev reload is slightly faster.
Here are my deps:
"dependencies": {
  "@akebifiky/remark-simple-plantuml": "^1.0.2",
  "@auth/firebase-adapter": "^1.0.0",
  "@auth/prisma-adapter": "^1.0.2",
  "@reduxjs/toolkit": "^1.9.5",
  "@types/node": "20.5.7",
  "@types/react": "18.2.21",
  "@types/react-dom": "18.2.7",
  "autoprefixer": "10.4.15",
  "eslint": "8.48.0",
  "eslint-config-next": "13.4.19",
  "firebase": "^10.3.1",
  "firebase-admin": "^11.10.1",
  "mermaid": "^10.4.0",
  "mermaid.cli": "^0.3.6",
  "next": "13.4.20-canary.24",
  "next-auth": "^4.23.1",
  "postcss": "8.4.29",
  "react": "18.2.0",
  "react-dom": "18.2.0",
  "react-icons": "^4.10.1",
  "react-markdown": "^8.0.7",
  "react-redux": "^8.1.2",
  "rehype-raw": "^7.0.0",
  "remark-gfm": "^3.0.1",
  "remark-mermaid": "^0.2.0",
  "remark-mermaid-plugin": "^1.0.2",
  "zod": "^3.22.2"
},
"devDependencies": {
  "@tailwindcss/typography": "^0.5.10",
  "@typescript-eslint/parser": "^6.5.0",
  "daisyui": "^3.6.4",
  "eslint-config-prettier": "^9.0.0",
  "prettier": "3.0.3",
  "prettier-plugin-tailwindcss": "^0.5.4",
  "tailwind-scrollbar": "^3.0.5",
  "tailwindcss": "^3.3.3",
  "typescript": "^5.2.2"
},
That said, I'm still experiencing the port getting stuck, even after I close the server manually; in that case I need to kill the process manually if I want to reuse port 3000.
Seems identical to https://github.com/vercel/next.js/issues/49578. @b1rdex provided a reproduction here.
Another new memory error while using Next.js version 13.4.20-canary.26:
<--- Last few GCs --->
[2248:0x138040000] 9308182 ms: Scavenge 3768.4 (3885.6) -> 3767.5 (3885.6) MB, 4.5 / 0.0 ms (average mu = 0.315, current mu = 0.313) allocation failure;
[2248:0x138040000] 9308191 ms: Scavenge 3768.7 (3885.6) -> 3768.0 (3885.6) MB, 5.6 / 0.0 ms (average mu = 0.315, current mu = 0.313) allocation failure;
[2248:0x138040000] 9309636 ms: Mark-sweep 3768.2 (3885.6) -> 3764.7 (3889.0) MB, 1444.8 / 0.0 ms (average mu = 0.191, current mu = 0.042) allocation failure; GC in old space requested
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x1023a8a88 node::Abort() [/Users//.nvm/versions/node/v18.16.1/bin/node]
2: 0x1023a8c78 node::ModifyCodeGenerationFromStrings(v8::Local<v8::Context>, v8::Local<v8::Value>, bool) [/Users//.nvm/versions/node/v18.16.1/bin/node]
3: 0x1024fe548 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users//.nvm/versions/node/v18.16.1/bin/node]
4: 0x1026a8e00 v8::internal::EmbedderStackStateScope::EmbedderStackStateScope(v8::internal::Heap*, v8::internal::EmbedderStackStateScope::Origin, cppgc::EmbedderStackState) [/Users//.nvm/versions/node/v18.16.1/bin/node]
5: 0x1026ac9ec v8::internal::Heap::CollectSharedGarbage(v8::internal::GarbageCollectionReason) [/Users//.nvm/versions/node/v18.16.1/bin/node]
6: 0x1026a9a00 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::internal::GarbageCollectionReason, char const*, v8::GCCallbackFlags) [/Users//.nvm/versions/node/v18.16.1/bin/node]
7: 0x1026a6d00 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users//.nvm/versions/node/v18.16.1/bin/node]
8: 0x10269b83c v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/Users//.nvm/versions/node/v18.16.1/bin/node]
9: 0x10268134c v8::internal::Factory::CodeBuilder::AllocateCode(bool) [/Users//.nvm/versions/node/v18.16.1/bin/node]
10: 0x102680c6c v8::internal::Factory::CodeBuilder::BuildInternal(bool) [/Users//.nvm/versions/node/v18.16.1/bin/node]
11: 0x10338455c v8::internal::compiler::CodeGenerator::FinalizeCode() [/Users//.nvm/versions/node/v18.16.1/bin/node]
12: 0x10356bc20 void v8::internal::compiler::PipelineImpl::Run<v8::internal::compiler::FinalizeCodePhase>() [/Users//.nvm/versions/node/v18.16.1/bin/node]
13: 0x1035629b0 v8::internal::compiler::PipelineImpl::FinalizeCode(bool) [/Users//.nvm/versions/node/v18.16.1/bin/node]
14: 0x103562820 v8::internal::compiler::PipelineCompilationJob::FinalizeJobImpl(v8::internal::Isolate*) [/Users//.nvm/versions/node/v18.16.1/bin/node]
15: 0x1025b3f5c v8::internal::Compiler::FinalizeTurbofanCompilationJob(v8::internal::TurbofanCompilationJob*, v8::internal::Isolate*) [/Users//.nvm/versions/node/v18.16.1/bin/node]
16: 0x1025d5524 v8::internal::OptimizingCompileDispatcher::InstallOptimizedFunctions() [/Users//.nvm/versions/node/v18.16.1/bin/node]
17: 0x102653910 v8::internal::StackGuard::HandleInterrupts() [/Users//.nvm/versions/node/v18.16.1/bin/node]
18: 0x102a21db4 v8::internal::Runtime_StackGuard(int, unsigned long*, v8::internal::Isolate*) [/Users//.nvm/versions/node/v18.16.1/bin/node]
19: 0x102d7104c Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_NoBuiltinExit [/Users//.nvm/versions/node/v18.16.1/bin/node]
20: 0x102d481c4 Builtins_MapPrototypeSet [/Users//.nvm/versions/node/v18.16.1/bin/node]
21: 0x10f11f360
22: 0x10f11b620
23: 0x108852684
24: 0x10f11f48c
25: 0x10f21efe4
26: 0x10f11f48c
27: 0x10f11b620
28: 0x109770370
29: 0x10f11f48c
30: 0x10f21e318
31: 0x10977acfc
32: 0x10976d054
33: 0x107db0ebc
34: 0x10f5968b8
35: 0x10f59690c
36: 0x1095540b4
37: 0x10976d054
38: 0x107db0ebc
39: 0x1097779b4
40: 0x102d2def4 Builtins_AsyncFunctionAwaitResolveClosure [/Users//.nvm/versions/node/v18.16.1/bin/node]
41: 0x102dbc738 Builtins_PromiseFulfillReactionJob [/Users//.nvm/versions/node/v18.16.1/bin/node]
42: 0x102d1fc4c Builtins_RunMicrotasks [/Users//.nvm/versions/node/v18.16.1/bin/node]
43: 0x102cfa3a4 Builtins_JSRunMicrotasksEntry [/Users//.nvm/versions/node/v18.16.1/bin/node]
44: 0x10262ac4c v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/Users//.nvm/versions/node/v18.16.1/bin/node]
45: 0x10262b13c v8::internal::(anonymous namespace)::InvokeWithTryCatch(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/Users//.nvm/versions/node/v18.16.1/bin/node]
46: 0x10262b318 v8::internal::Execution::TryRunMicrotasks(v8::internal::Isolate*, v8::internal::MicrotaskQueue*, v8::internal::MaybeHandle<v8::internal::Object>*) [/Users//.nvm/versions/node/v18.16.1/bin/node]
47: 0x102651a80 v8::internal::MicrotaskQueue::RunMicrotasks(v8::internal::Isolate*) [/Users//.nvm/versions/node/v18.16.1/bin/node]
48: 0x10265221c v8::internal::MicrotaskQueue::PerformCheckpoint(v8::Isolate*) [/Users//.nvm/versions/node/v18.16.1/bin/node]
49: 0x1022f0c4c node::InternalCallbackScope::Close() [/Users//.nvm/versions/node/v18.16.1/bin/node]
50: 0x1022f0fd0 node::InternalMakeCallback(node::Environment*, v8::Local<v8::Object>, v8::Local<v8::Object>, v8::Local<v8::Function>, int, v8::Local<v8::Value>*, node::async_context) [/Users//.nvm/versions/node/v18.16.1/bin/node]
51: 0x102305c58 node::AsyncWrap::MakeCallback(v8::Local<v8::Function>, int, v8::Local<v8::Value>*) [/Users//.nvm/versions/node/v18.16.1/bin/node]
52: 0x1023ad8c8 node::fs::FSReqCallback::Reject(v8::Local<v8::Value>) [/Users//.nvm/versions/node/v18.16.1/bin/node]
53: 0x1023ae154 node::fs::FSReqAfterScope::Reject(uv_fs_s*) [/Users//.nvm/versions/node/v18.16.1/bin/node]
54: 0x1023ae388 node::fs::AfterNoArgs(uv_fs_s*) [/Users//.nvm/versions/node/v18.16.1/bin/node]
55: 0x1023a49f8 node::MakeLibuvRequestCallback<uv_fs_s, void (*)(uv_fs_s*)>::Wrapper(uv_fs_s*) [/Users//.nvm/versions/node/v18.16.1/bin/node]
56: 0x102cd72a0 uv__work_done [/Users//.nvm/versions/node/v18.16.1/bin/node]
57: 0x102cdaa5c uv__async_io [/Users//.nvm/versions/node/v18.16.1/bin/node]
58: 0x102ced010 uv__io_poll [/Users//.nvm/versions/node/v18.16.1/bin/node]
59: 0x102cdaf2c uv_run [/Users//.nvm/versions/node/v18.16.1/bin/node]
60: 0x1022f16e0 node::SpinEventLoop(node::Environment*) [/Users//.nvm/versions/node/v18.16.1/bin/node]
61: 0x1023e5dd4 node::NodeMainInstance::Run() [/Users//.nvm/versions/node/v18.16.1/bin/node]
62: 0x102375ab0 node::LoadSnapshotDataAndRun(node::SnapshotData const**, node::InitializationResult const*) [/Users//.nvm/versions/node/v18.16.1/bin/node]
63: 0x102375d68 node::Start(int, char**) [/Users//.nvm/versions/node/v18.16.1/bin/node]
64: 0x192413f28 start [/usr/lib/dyld]
@Elvincth that seems unrelated to this fetch
error that this issue is about, but in your stack trace it's showing Node.js v18.16.1 - there's a memory leak before Node.js v18.17.1, so you should upgrade :)
Note we saw this on 13.4.19 (when we upgraded from 13.3.4). Downgrading to 13.4.13 has the same issue for us actually. Currently trying 13.4.10.
Seems like it takes a few hours before it begins to spam that fetch error in the logs and return internal service...
I have a similar issue. I'm building a front-end project in Next.js and the backend is in Django. When I send a request from a client component I get fetch failed, and when I move the fetch to a server action or route handler it works fine.
Next version: 13.4.19
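As a rough illustration of that pattern (moving the request server-side), a hypothetical App Router route handler that proxies the Django backend; the /api/items path and the DJANGO_API_URL variable are assumptions for this sketch, not from the original comment:

// app/api/items/route.ts (hypothetical proxy for the Django backend)
import { NextResponse } from 'next/server';

export async function GET() {
  // The client component calls /api/items; the Next.js server talks to Django.
  const backend = process.env.DJANGO_API_URL ?? 'http://localhost:8000';
  const res = await fetch(`${backend}/api/items/`, { cache: 'no-store' });
  if (!res.ok) {
    return NextResponse.json({ error: 'Upstream request failed' }, { status: res.status });
  }
  return NextResponse.json(await res.json());
}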
@k2xl That is a similar experience to ours. The thing that makes it extremely difficult to diagnose is that I can spam requests against my local instance (running prod settings) and it never leaks an ounce of memory, and I also don't see these fetch failed errors; they only appear after the server has been running in production under significant load. They appear to be coming from some sort of internal subsystem, but the stack trace is utterly useless. I've analyzed my application thoroughly and the fetch that is erroring is simply not a fetch in our code; it's coming from within Next.js. If we knew where, it would be easier to try to track down the memory leak.
I have solved this problem for myself.
The problem happens when I send a File to Supabase Storage.
To solve it, I send an ArrayBuffer instead:
let array_buffer = await image.arrayBuffer();
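Combining that with the upload snippet from earlier in the thread, a minimal sketch; STORAGE_BUCKET, filePath, and the image File object are assumed to match the earlier example:

// image is a File, e.g. from an <input type="file">; convert it before uploading
const arrayBuffer = await image.arrayBuffer();

const { data, error } = await storageClient
  .from(STORAGE_BUCKET)
  .upload(filePath, arrayBuffer, {
    contentType: image.type, // e.g. `video/${fileExt}` as in the earlier snippet
    upsert: true,
  });

if (error) {
  console.error('Upload failed:', error.message);
}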
I updated next.js to 13.4.20-canary.33
, and it looks like the problem was fixed.
@meotimdihia This issue is really weird. We do not have this problem after downgrading to Next.js 13.4.12. Some people recommend downgrading to 13.4.7.
The cause seems to be the undici issue linked below: https://github.com/nodejs/undici/issues/1602. It doesn't seem to have been resolved yet, but I'm glad to hear that the problem didn't recur in Next 13.4.20.
I upgraded to next@13.4.20-canary.35
and we have also not experienced the fetch
errors / crashing since then... 👀 🤔
I will keep an eye on this, but maybe this has been fixed
Edit: 2 weeks later, and haven't seen any fetch
-related crashes
Original Post: Can confirm we are having the same issue, ever since upgrading above 13.4.12
Dev server crashes probably every 10-15 saves.
I sent a reproduction to balazsorban44 a couple of weeks ago, for this and another recurring error (Invalid state: ReadableStream is already closed), and he was not able to reproduce the errors on his end.
After some digging on my end, I found that both issues only seem to occur when using a Chromium-based browser to access the application. While not a practical solution, using Firefox seems to avoid both of these errors for us.
Still seeing some random TypeError: fetch failed
in v13.5.1
(on Vercel builds only).
Same problem with v13.5.1, going back to 13.4.12 :(
Updated to v13.5.1 a couple of hours ago, and npm run build fails consistently with issues. For context, I use the pages folder, not the app folder.
Rolling back to 13.4.12
Error: NextRouter was not mounted. https://nextjs.org/docs/messages/next-router-not-mounted
    at p (/app/.next/server/chunks/4914.js:1:14391)
    at f (/app/.next/server/chunks/7648.js:1:487)
    at renderWithHooks (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:5658:16)
    at renderIndeterminateComponent (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:5731:15)
    at renderElement (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:5946:7)
    at renderNodeDestructiveImpl (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:6104:11)
    at renderNodeDestructive (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:6076:14)
    at renderIndeterminateComponent (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:5785:7)
    at renderElement (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:5946:7)
    at renderNodeDestructiveImpl (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:6104:11)
Error occurred prerendering page "/superadmin". Read more: https://nextjs.org/docs/messages/prerender-error
Error: NextRouter was not mounted. https://nextjs.org/docs/messages/next-router-not-mounted
    at p (/app/.next/server/chunks/4914.js:1:14391)
    at f (/app/.next/server/chunks/7648.js:1:487)
    at renderWithHooks (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:5658:16)
    at renderIndeterminateComponent (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:5731:15)
    at renderElement (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:5946:7)
    at renderNodeDestructiveImpl (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:6104:11)
    at renderNodeDestructive (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:6076:14)
    at renderIndeterminateComponent (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:5785:7)
    at renderElement (/app/node_modules/react-dom/cjs/react-dom-server.browser.development.js:5946:7)
✓ Generating static pages (51/51)
@harrisyn I think your problem is not related to this thread.
Maybe not directly; however, I only updated to v13.5.1 instead of canary to get rid of this fetch failed issue.
@harrisyn better to either create a new issue or check for existing issues, the one below sounds similar to yours:
I'm getting this same error after upgrading from 13.5.2 to 13.5.3. The following error occurs:
TypeError: fetch failed
at Object.fetch (node:internal/deps/undici/undici:11576:11)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
cause: Error: self-signed certificate
at TLSSocket.onConnectSecure (node:_tls_wrap:1600:34)
at TLSSocket.emit (node:events:517:28)
at TLSSocket._finishInit (node:_tls_wrap:1017:8)
at ssl.onhandshakedone (node:_tls_wrap:803:12)
at TLSWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
code: 'DEPTH_ZERO_SELF_SIGNED_CERT'
}
}
Our solution proxies calls to an external https API server (that's run locally in the dev environment).
We were getting a similar error in the past, and our investigation pointed us toward extending Node's trusted certificates by setting the NODE_EXTRA_CA_CERTS environment variable to the path of our self-signed certificate. After setting this variable the error stopped happening, but now it has returned.
Note: if we downgrade to Next 13.5.2
we don't get any errors. The only difference in package.json
between the working and the failing solution is Next's version.
Thanks in advance for any guidance on this.
EDIT: I tested this with next@canary
and the error continues to happen. The latest version with which it does not happen is 13.5.2
Also getting this. First on 13.4.19
, upgraded to 13.5.3
, but still getting it.
The following two errors alternate in my console:
TypeError: fetch failed
at Object.fetch (node:internal/deps/undici/undici:11576:11)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async imageOptimizer (/Users/.../node_modules/next/dist/server/image-optimizer.js:521:29)
at async cacheEntry.imageResponseCache.get.incrementalCache (/Users/.../node_modules/next/dist/server/next-server.js:519:61)
at async /Users/.../node_modules/next/dist/server/response-cache/index.js:102:36 {
cause: Error: connect ENETUNREACH 64:ff9b::34da:19d3:443
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16)
at TCPConnectWrap.callbackTrampoline (node:internal/async_hooks:130:17) {
errno: -51,
code: 'ENETUNREACH',
syscall: 'connect',
address: '64:ff9b::34da:19d3',
port: 443
}
}
TypeError: fetch failed
at Object.fetch (node:internal/deps/undici/undici:11576:11)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async imageOptimizer (/Users/.../node_modules/next/dist/server/image-optimizer.js:521:29)
at async cacheEntry.imageResponseCache.get.incrementalCache (/Users/.../node_modules/next/dist/server/next-server.js:519:61)
at async /Users/.../node_modules/next/dist/server/response-cache/index.js:102:36 {
cause: ConnectTimeoutError: Connect Timeout Error
at onConnectTimeout (/Users/.../node_modules/next/dist/compiled/undici/index.js:1:92227)
at /Users/.../node_modules/next/dist/compiled/undici/index.js:1:91719
at Immediate._onImmediate (/Users/.../node_modules/next/dist/compiled/undici/index.js:1:92109)
at process.processImmediate (node:internal/timers:476:21)
at process.callbackTrampoline (node:internal/async_hooks:130:17) {
code: 'UND_ERR_CONNECT_TIMEOUT'
}
}
Only on next/image; no errors when just using img tags (obviously, as those don't use imageOptimizer).
I temporarily resolved it by just skipping the default image loader, but that is not a long-term solution. I want the default loader for its optimisations haha
// next.config.js
...
images: {
loader: 'custom',
loaderFile: './bin/imageLoader.js',
},
...
// imageLoader.js
'use client';
export default function imageLoader({ src, width, quality }) {
if (src.includes('?')) {
return `${src}&w=${width}&q=${quality || 75}`;
}
return `${src}?w=${width}&q=${quality || 75}`;
}
@GoudekettingRM Downgrade to 13.5.2 and it should be fine. 13.5.3 introduced a new bug with fetch, I think.
@meotimdihia
Thanks for the quick reply, same issues on 13.5.2 unfortunately...
Still the same on 13.5.3.
Reverting to 13.4.x works around it, but that version had a severe bug pertaining to locales that was fixed only a few days ago, iirc.
So it's a matter of choosing the lesser evil, at least for the moment 😟
Link to the code that reproduces this issue or a replay of the bug
https://github.com/AhmedShehata98/shoperz
To Reproduce
TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11576:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async invokeRequest (D:\my-work\Progo-soft\libya-zon\node_modules\next\dist\server\lib\server-ipc\invoke-request.js:17:12)
    at async invokeRender (D:\my-work\Progo-soft\libya-zon\node_modules\next\dist\server\lib\router-server.js:254:29)
    at async handleRequest (D:\my-work\Progo-soft\libya-zon\node_modules\next\dist\server\lib\router-server.js:447:24)
    at async requestHandler (D:\my-work\Progo-soft\libya-zon\node_modules\next\dist\server\lib\router-server.js:464:13)
    at async Server.<anonymous> (D:\my-work\Progo-soft\libya-zon\node_modules\next\dist\server\lib\start-server.js:117:13) {
cause: Error: connect ECONNREFUSED ::1:58392
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16) {
errno: -4078,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '::1',
port: 58392
}
}
Current vs. Expected behavior
I followed the server start instructions with "npm run dev". I expected it to work fine without errors, but got this error:
TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11576:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async invokeRequest (D:\my-work\Progo-soft\libya-zon\node_modules\next\dist\server\lib\server-ipc\invoke-request.js:17:12)
    at async invokeRender (D:\my-work\Progo-soft\libya-zon\node_modules\next\dist\server\lib\router-server.js:254:29)
    at async handleRequest (D:\my-work\Progo-soft\libya-zon\node_modules\next\dist\server\lib\router-server.js:447:24)
    at async requestHandler (D:\my-work\Progo-soft\libya-zon\node_modules\next\dist\server\lib\router-server.js:464:13)
    at async Server.<anonymous> (D:\my-work\Progo-soft\libya-zon\node_modules\next\dist\server\lib\start-server.js:117:13) {
cause: Error: connect ECONNREFUSED ::1:58392
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1495:16) {
errno: -4078,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '::1',
port: 58392
}
}
Verify canary release
Provide environment information
Which area(s) are affected? (Select all that apply)
Not sure
Additional context
No response