addyosmani / chatty

ChattyUI - your private AI chat for running LLMs in the browser
https://chattyui.com
MIT License
443 stars 32 forks

fix: large initial bundle size #37

Open jakobhoeg opened 1 month ago

jakobhoeg commented 1 month ago

The way we currently instantiate the web-llm-helper class is suboptimal. It causes a large initial bundle size when the app is first rendered, as seen in the image below.

(Screenshot from next/bundle-analyzer)

For now, this is fine since we're focusing on the desktop version for the current release. Ideally, though, the class would be instantiated only when it's actually needed, not at initial render, so the heavy module stays out of the initial bundle.
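One way to sketch the "only when needed" idea is a lazy-singleton loader around a dynamic `import()`, which lets the bundler split the heavy module into its own chunk. This is only an illustration, assuming module and class names (`web-llm-helper`, `WebLLMHelper`) based on the issue text, not the app's actual layout:

```typescript
// Generic lazy-singleton: the loader runs only on the first call,
// and every later call reuses the same in-flight/resolved promise.
function lazy<T>(load: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | null = null;
  return () => (cached ??= load());
}

// Hypothetical wiring (module path and class name are assumptions):
// const getHelper = lazy(async () => {
//   const { WebLLMHelper } = await import("@/lib/web-llm-helper");
//   return new WebLLMHelper();
// });
//
// ...later, e.g. in the prompt submit handler, instead of at initial render:
// const helper = await getHelper();
```

Because the `import()` only appears inside the loader, the chunk is fetched when the user first triggers it (for example on prompt submit), rather than being part of the first-render bundle.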

7H3XXX commented 3 weeks ago

@jakobhoeg Can you give more context, especially on the "handled only when needed" part? I went through the code and thought that since this is the model instance, isn't it normal to call it on the initial render? Or do you mean we should only instantiate the class once a user enters a prompt and the model has been downloaded successfully?