Reduces the client-side bundle size, which improves page load times.
A Suspense boundary could render a skeleton / loader fallback while the page streams in, but I left that out for now.
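For reference, a minimal sketch of what that Suspense fallback could look like. The file path, `StackDetails`, and `StackSkeleton` are hypothetical names, not part of this change:

```tsx
// app/stacks/[slug]/page.tsx — hypothetical sketch, not implemented here.
// The heavy component stays a server component; Suspense shows a
// lightweight skeleton while it renders and streams in.
import { Suspense } from "react";
import StackDetails from "./StackDetails";   // assumed server component
import StackSkeleton from "./StackSkeleton"; // assumed placeholder UI

export default function Page({ params }: { params: { slug: string } }) {
  return (
    <Suspense fallback={<StackSkeleton />}>
      <StackDetails slug={params.slug} />
    </Suspense>
  );
}
```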
Before:
Route (app)                                      Size     First Load JS
┌ ○ /_not-found                                  880 B          85.2 kB
├ λ /api/chat-with-gemini                        0 B                0 B
├ ℇ /api/chat-with-openai-streaming              0 B                0 B
├ ℇ /api/chat-with-openai-streaming-helicone     0 B                0 B
├ λ /api/elevenlabs-tts                          0 B                0 B
├ ℇ /api/get-image-description-openai            0 B                0 B
├ λ /api/modify-frontend-component               0 B                0 B
├ λ /api/stable-video-diffusion                  0 B                0 B
├ ℇ /api/suggest-frontend-component              0 B                0 B
├ ℇ /api/use-openai-assistant                    0 B                0 B
├ ○ /stacks                                      18.3 kB          117 kB
└ λ /stacks/[slug]                               227 kB           326 kB
+ First Load JS shared by all                    84.3 kB
  ├ chunks/472-a626cc720ae1c4fa.js               28.8 kB
  ├ chunks/fd9d1056-968b66a8cb9f62ca.js          53.3 kB
  ├ chunks/main-app-31cfe2391186e1e0.js          224 B
  └ chunks/webpack-e0b6e128c20d1cab.js           1.99 kB
After:
Route (app)                                      Size     First Load JS
┌ ○ /_not-found                                  880 B          85.4 kB
├ λ /api/chat-with-gemini                        0 B                0 B
├ ℇ /api/chat-with-openai-streaming              0 B                0 B
├ ℇ /api/chat-with-openai-streaming-helicone     0 B                0 B
├ λ /api/elevenlabs-tts                          0 B                0 B
├ ℇ /api/get-image-description-openai            0 B                0 B
├ λ /api/modify-frontend-component               0 B                0 B
├ λ /api/stable-video-diffusion                  0 B                0 B
├ ℇ /api/suggest-frontend-component              0 B                0 B
├ ℇ /api/use-openai-assistant                    0 B                0 B
├ ○ /stacks                                      20 kB            113 kB
└ λ /stacks/[slug]                               3.95 kB         96.7 kB
+ First Load JS shared by all                    84.5 kB
  ├ chunks/472-39a181a2056d0aca.js               28.8 kB
  ├ chunks/fd9d1056-4ffd62f8769f0100.js          53.3 kB
  ├ chunks/main-app-31cfe2391186e1e0.js          224 B
  └ chunks/webpack-751e473e5f3d35d2.js           2.14 kB