filamentphp / filament

A collection of beautiful full-stack components for Laravel. The perfect starting point for your next app. Using Livewire, Alpine.js and Tailwind CSS.
https://filamentphp.com
MIT License
15.94k stars · 2.54k forks

Large tables performance #9304

Open kayvanaarssen opened 8 months ago

kayvanaarssen commented 8 months ago

Package

filament/filament

Package Version

V3+

Laravel Version

N/A

Livewire Version

No response

PHP Version

PHP 8.2+

Problem description

When using larger tables with a page size of 25+, for example 50 or All (yes, All), the loading times are huge. The issue is also reproducible in the demo: https://demo.filamentphp.com/shop/orders

Expected behavior

With other products we have used, built on Vue.js, and with Laravel Backpack, loading tables with these kinds of datasets was really quick. It would be awesome to see some major improvements here.

The issue is most noticeable with 50+ items in a CRUD, and of course All will be very heavy. But with big datasets, a page size of 75 / 100 / 200 is sometimes a real use case.

Steps to reproduce

If you select All at the bottom here, 1000 records will be loaded. You have to wait a bit, and if you refresh after that's done, loading the table takes about 14-18 seconds. It does not matter how powerful the server is, and your browser tab may even crash...

Reproduction repository

https://github.com/filamentphp/filament

Relevant log output

No response


github-actions[bot] commented 8 months ago

Hey @kayvanaarssen! We're sorry to hear that you've hit this issue. 💛

However, it looks like you forgot to fill in the reproduction repository URL. Can you edit your original post and then we'll look at your issue?

We need a public GitHub repository which contains a Laravel app with the minimal amount of Filament code to reproduce the problem. Please do not link to your actual project, what we need instead is a minimal reproduction in a fresh project without any unnecessary code. This means it doesn't matter if your real project is private / confidential, since we want a link to a separate, isolated reproduction. That would allow us to download it and review your bug much easier, so it can be fixed quicker. Please make sure to include a database seeder with everything we need to set the app up quickly.

kayvanaarssen commented 8 months ago

It does not matter what project we link; it's already present in the base of Filament: https://github.com/filamentphp/filament

NazarAli commented 8 months ago

I have the same problem 😞

kayvanaarssen commented 8 months ago

@zepfietje Can you maybe reopen this issue, since it was closed right after creation? Thanks, and I will schedule a meeting with you for next week :-D

zepfietje commented 8 months ago

Yeah fair, this is reproducible in the demo indeed. Performance issues in large tables are known and likely caused by Livewire. We've already tried to optimize as much as possible though.

Will reopen this issue, but we may close it again if there's nothing we can do. Looping in @danharrin and @pxlrbt, since we've discussed this issue before.

If you have any ideas about approaches to improve performance, let us know, @kayvanaarssen!

kayvanaarssen commented 8 months ago

Thanks. We've already talked about this with our devs, but no idea / solution yet. Hope someone will be able to fix this / improve the speed.

binaryfire commented 8 months ago

I’m currently restricting the max results per page to 50 because of this. There are certain types of apps where viewing 100-200 records at a time would be desirable though. For example execution and monitoring logs in devops platforms.

zepfietje commented 8 months ago

I agree for certain apps it is important to view more than 100 records at once. Running into this myself for Picci too.

kayvanaarssen commented 8 months ago

But you think it has to do with Livewire, right? And not Filament itself? Maybe cross-post in the Livewire repo as well?

zepfietje commented 8 months ago

Not sure it really classifies as a Livewire bug though. Posting in the Livewire repo only really adds value when we create a failing-test PR for bugs, to be honest.

Maybe this is something @danharrin could discuss with Caleb though?

kayvanaarssen commented 8 months ago

Hope someone will find the issue / fix it. Coming from Backpack, loading 500+ records was never an issue 😉 Glad it's not only on our side, but also in the base.

kayvanaarssen commented 8 months ago

Makes sense; with e-commerce you will have many tables with orders/products etc., and filters will also be essential. Although filter speed is okay in smaller tables, I don't know how that will hold up with 200 or even 500 rows per page and, let's say, 30k records.

asefsoft commented 8 months ago

Alright, I've done some debugging to identify the root cause. I'll share my findings to see whether they can contribute to a performance improvement. I cloned the demo project, seeded it with data, and loaded the Order page with 150 records per page.

Upon investigation, it became apparent that the issue arises when Livewire is rendering the Order Page component within the update function.

It takes more than 10 seconds to fully render the component and generates approximately 3MB of data!

Then I hooked into the composing event and saw that rendering the Order page renders more than 5000 views! I saw component names like these:

filament-tables::columns.text-column
filament-tables::components.cell
filament-tables::components.actions
filament-tables::components.actions.cell
filament-tables::components.cell
filament-tables::components.row
filament-tables::components.selection.checkbox
filament-tables::components.selection.cell
filament-tables::components.cell
filament-tables::components.columns.column

There are over 4000 filament-tables* rendering events, indicating a rendering process for each cell, row, and column.

You can review the entire rendering log that I have attached here: compose_events-2023-11-10.log
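For anyone who wants to reproduce this count, hooking the composing event can be sketched roughly like this in a service provider's `boot()` method (the counter array and log call here are illustrative, not part of Filament or the original debugging session):

```php
use Illuminate\Support\Facades\Event;
use Illuminate\Support\Facades\Log;

// Count every view composed during a request and log the totals,
// sorted so the most-rendered views (e.g. filament-tables cells) come first.
$counts = [];

Event::listen('composing:*', function (string $event) use (&$counts) {
    // Wildcard listeners receive the full event name as the first argument,
    // e.g. "composing: filament-tables::components.cell".
    $view = substr($event, strlen('composing: '));
    $counts[$view] = ($counts[$view] ?? 0) + 1;
});

app()->terminating(function () use (&$counts) {
    arsort($counts);
    Log::debug('Views rendered this request', $counts);
});
```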

I believe there might be some room for optimization in rendering tables, so I've decided to share this information with you for further investigation and potential improvement.

In conclusion, there is a significant number of rendering processes, and the HTML output appears to be very large, contributing to an extended rendering time for pages with 150 records.

kayvanaarssen commented 8 months ago

@zepfietje Is this something you guys can fix in Filament, or is this more related to Livewire? 4000+ Filament events is a lot, as we also saw while debugging.

pxlrbt commented 8 months ago

There are over 4000 filament-tables* rendering events, indicating a rendering process for each cell, row, and column.

I don't think that should be an issue. The views should be cached anyway.

It takes more than 10 seconds to fully render the component and generates approximately 3MB of data!

3MB sounds like a lot of data and a huge table! I just compared this to the Filament Demo's "Brands" resource. With 400 (!) entries, it still sends only 127kB of data and takes 2.3s to load for me.

It's hard to judge your "benchmark" without knowing your setup. If your table really generates 3MB of data, there is not much Filament can do.

Do you have Debugbar enabled?

pxlrbt commented 8 months ago

Just tested this again with the OrderResource. Demo with up-to-date Filament. 400 Orders load in 3.6s and produce 195kB in data (Macbook Air M1).

kayvanaarssen commented 8 months ago

Just tested this again with the OrderResource. Demo with up-to-date Filament. 400 Orders load in 3.6s and produce 195kB in data (Macbook Air M1).

That's still a long wait time for 400 rows, right? I mean, I get that Filament has some overhead and can't be compared directly to "Backpack", which we used before, but over there 500+ rows loaded in <1s.

asefsoft commented 8 months ago

Hello Dennis. I really don't know much about the internals of Filament; I just wanted to show that there are 4000 events, which might or might not be a problem. Of course it needs more investigation.

About taking 10 sec: it takes 10 sec on my system with debug enabled, and I think it takes about half of that without it. I think anything above 2 sec, or even 1 sec, for 150 records is high.

And about size: on your demo site it produces 1.15MB for 50 records: image

The demo site also produces 20MB in 9 sec for 1000 records:

image

asefsoft commented 8 months ago

The strange thing is that the 20MB is compressed down to 162KB :))) You can see it in the Transferred column.

danharrin commented 8 months ago

It's compressed because the HTML to render table cells is repetitive

If we were to reduce the number of views, then we would be introducing repetition within our codebase, since we wouldn't be able to use Blade components

asefsoft commented 8 months ago

It's compressed because the HTML to render table cells is repetitive

Yes, but the strange part for me was the compression ratio, which is over 99%; it became 125 times smaller (wow). It seems everything in it is repetitive, if my calculation is not wrong!

danharrin commented 8 months ago

Yes, Tailwind is quite repetitive 😉
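The ratio is easy to reproduce in isolation, since gzip thrives on repetition. A standalone sketch (the row markup below is invented for illustration, not taken from Filament's actual output):

```php
<?php
// Simulate a table response: the same Tailwind-heavy cell repeated
// 10 columns x 1000 rows, then gzip it the way the web server would.
$cell = '<td class="fi-ta-cell p-0 first-of-type:ps-1 last-of-type:pe-1'
      . ' sm:first-of-type:ps-3 sm:last-of-type:pe-3">value</td>';
$html = str_repeat('<tr>' . str_repeat($cell, 10) . '</tr>', 1000);

$gzipped = gzencode($html, 9);

printf(
    "raw: %d bytes, gzipped: %d bytes, %.0fx smaller\n",
    strlen($html),
    strlen($gzipped),
    strlen($html) / strlen($gzipped)
);
```

Highly repetitive markup routinely compresses by two orders of magnitude, which is consistent with a 20MB payload transferring as 162KB.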

asefsoft commented 8 months ago

Yes, that's amazing, and its impact is huge. Okay, I want to delve deeper into the extensive response to see what I can find. Regardless of whether there is an easy optimization, I'll document it here. Perhaps I or someone else can come up with ideas and create a pull request, or share insights with our respected developers, Dan and Dennis.

I am currently examining a 150-record response weighing 3MB. After decoding it, I noticed some code snippets like this:

<!--[if ENDBLOCK]><![endif]--> 
<!--[if BLOCK]><![endif]-->

These relate to Livewire morphing. Could we consider making an exception for tables and disabling it for them? You won't believe there were more than 12,000 of each of them! Removing them reduced the response size by about 22%.

Afterward, I discovered some repeated class lists in the output HTML that could perhaps be compacted for tables, or maybe not:

155 inputs with these classes: fi-checkbox-input rounded border-none bg-white shadow-sm ring-1 transition duration-75 checked:ring-0 focus:ring-2 focus:ring-offset-0 disabled:pointer-events-none disabled:bg-gray-50 disabled:text-gray-50 disabled:checked:bg-current disabled:checked:text-gray-400 dark:bg-white/5 dark:disabled:bg-transparent dark:disabled:checked:bg-gray-600 text-primary-600 ring-gray-950/10 focus:ring-primary-600 checked:focus:ring-primary-500/50 dark:text-primary-500 dark:ring-white/20 dark:checked:bg-primary-500 dark:focus:ring-primary-500 dark:checked:focus:ring-primary-400/50 dark:disabled:ring-white/10

850 rows with these classes: fi-ta-text-item inline-flex items-center gap-1.5 text-sm text-gray-950 dark:text-white

1000+: flex w-full disabled:pointer-events-none justify-start text-start

1300+: fi-ta-cell p-0 first-of-type:ps-1 last-of-type:pe-1 sm:first-of-type:ps-3 sm:last-of-type:pe-3


150:

x-bind:class="{
    'hidden': false && isGroupCollapsed(''),
    'bg-gray-50 dark:bg-white/5': isRecordSelected('300'),
    '[&>*:first-child]:relative [&>*:first-child]:before:absolute [&>*:first-child]:before:start-0 [&>*:first-child]:before:inset-y-0 [&>*:first-child]:before:w-0.5 [&>*:first-child]:before:bg-primary-600 [&>*:first-child]:dark:before:bg-primary-500': isRecordSelected('300'),

and there is more ...

pxlrbt commented 7 months ago

I don't think we should focus on the response size, since it's compressed anyway. It would be more interesting to see if we could improve rendering time by reducing classes etc. I don't think we can turn off Livewire internals for a specific part of the app.

iprastha commented 7 months ago

We have the same issue: tables with 500+ rows always time out in nginx (504 gateway timeout), while other pages with less data are quick.

rickycheers commented 7 months ago

TL;DR: For anyone else running into this issue in dev mode: if you're using Laravel Debugbar, try disabling it (use the env variable DEBUGBAR_ENABLED=false).

For what it's worth, I was having the same performance issue while attempting to render a table with barely 25 items per page, but with more than 20 columns in it.

I noticed that the page performed slightly better the fewer columns I added to the table. Then I looked at the browser's profiler, only to realize that the problem wasn't Filament or Livewire per se: it was in fact caused by Laravel Debugbar.

As someone pointed out in earlier comments, many views get rendered on the back-end when building the table. All of those rendering events are captured by the debug bar, which causes the performance issue when it builds its UI.

kayvanaarssen commented 7 months ago

It also happens in production and on the Filament demo site, so the debug bar is off there.

rickycheers commented 7 months ago

@kayvanaarssen sorry, my reply wasn't directed specifically at you; it's aimed at anyone else stumbling upon this issue, just to rule out that possibility. I'll update it to make that clearer.

kayvanaarssen commented 7 months ago

Understood; I thought you were referring to the issue only being present when the debug bar is on 😅

andrewdwallo commented 1 month ago

@danharrin @zepfietje I suggest deferring the loading of additional rows in a table until the user scrolls to the bottom of the viewport. This approach would enhance performance and user experience by loading more records only when necessary. I'm not sure if this has been considered as a solution, but it could be beneficial. Once the user reaches the bottom, more records can be loaded dynamically. However, I'm uncertain how this would impact existing pagination options. Perhaps this could be introduced as a separate feature for tables.

andrewdwallo commented 1 month ago

A similar solution instead of relying on the scroll position.

Screenshot 2024-05-22 at 1 50 49 PM

zepfietje commented 1 month ago

Would that really make a difference? Because I think DOM diffing is the main cause of performance issues?

danharrin commented 1 month ago

The real solution for rendering tables faster is probably using Alpine.js to render cells in a JS loop from JSON data instead of with Blade, but it's a huge breaking change and would also limit what you could do in a cell, as you wouldn't have full access to Blade.

andrewdwallo commented 1 month ago

Would that really make a difference? Because I think DOM diffing is the main cause of performance issues?

I understand the concern about DOM diffing potentially being the main cause of performance issues. However, I'd like to clarify the benefits of deferring the loading of rows until the user scrolls to the bottom (infinite scrolling or lazy loading):

  1. Reduced Initial Load Time: By loading only a small subset of rows initially, we significantly reduce the amount of data rendered at once. This should improve the initial load time and make the page more responsive.
  2. Incremental Data Loading: As users scroll, additional rows are loaded in smaller batches. This can help manage memory usage and render time more effectively, reducing the risk of browser crashes with large datasets.
  3. Improved User Experience: Users cannot currently view hundreds of records at once in the current viewport (only around 10-18 at once at 100% zoom on a 1920x1080 resolution), so there is no reason to load 25/50/100+ or all records at once. Loading data incrementally as they scroll can provide a smoother and faster experience. This ensures that the visible records load quickly without overwhelming the browser.

Regarding the impact on existing pagination options, infinite scrolling can be implemented alongside traditional pagination seamlessly. For example, when users choose to view 25/50/100+ records per page or all records, my suggestion can take action. This allows both methods to coexist effectively, enhancing performance while maintaining user preference for pagination.

I understand that integrating this feature might require significant changes and careful consideration of its impact on other functionalities. It may also be beneficial to explore combining this approach with Alpine.js or other frontend optimizations as mentioned.

Would you be open to exploring this further as a potential enhancement? I’m happy to collaborate on this and help test any solutions we come up with.
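For what it's worth, Filament v3 already ships one related mitigation: a table can defer its initial query until after the page shell has rendered. A sketch against a hypothetical resource (this doesn't reduce the rendering cost of large page sizes; it only unblocks the first paint):

```php
use Filament\Tables\Table;

public static function table(Table $table): Table
{
    return $table
        // Render the page immediately; fetch and render rows in a
        // follow-up Livewire request instead of blocking the response.
        ->deferLoading()
        ->columns([
            // ...
        ]);
}
```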

andrewdwallo commented 1 month ago

Based on the performance testing I conducted, I'd like to address some additional points and suggestions:

Metrics Summary

10 records per page

Load timings (ms)

| Event | Start | Duration | End |
| --- | --- | --- | --- |
| Redirect | 0 | 0 | 0 |
| DNS | 0 | 0 | 0 |
| Connect | 0 | 0 | 0 |
| Request | 3 | 367 | 370 |
| Response | 370 | 1 | 371 |
| DOM | 371 | 135 | 506 |
| Parse | 371 | 37 | 408 |
| Execute Scripts | 408 | 13 | 421 |
| Content loaded | 421 | 20 | 441 |
| Sub Resources | 441 | 65 | 506 |
| Load event | 506 | 1 | 507 |
| Total | | | 507 |

25 records per page

Load timings (ms)

| Event | Start | Duration | End |
| --- | --- | --- | --- |
| Redirect | 0 | 0 | 0 |
| DNS | 1 | 0 | 1 |
| Connect | 1 | 0 | 1 |
| Request | 8 | 563 | 571 |
| Response | 571 | 0 | 571 |
| DOM | 571 | 162 | 733 |
| Parse | 571 | 67 | 638 |
| Execute Scripts | 638 | 1 | 639 |
| Content loaded | 639 | 15 | 654 |
| Sub Resources | 654 | 79 | 733 |
| Load event | 733 | 1 | 734 |
| Total | | | 734 |

50 records per page

Load timings (ms)

| Event | Start | Duration | End |
| --- | --- | --- | --- |
| Redirect | 0 | 0 | 0 |
| DNS | 1 | 0 | 1 |
| Connect | 1 | 0 | 1 |
| Request | 3 | 930 | 933 |
| Response | 933 | 0 | 933 |
| DOM | 933 | 192 | 1125 |
| Parse | 933 | 61 | 994 |
| Execute Scripts | 994 | 0 | 994 |
| Content loaded | 994 | 16 | 1010 |
| Sub Resources | 1010 | 115 | 1125 |
| Load event | 1125 | 1 | 1126 |
| Total | | | 1126 |

100 records per page

Load timings (ms)

| Event | Start | Duration | End |
| --- | --- | --- | --- |
| Redirect | 0 | 0 | 0 |
| DNS | 0 | 0 | 0 |
| Connect | 0 | 0 | 0 |
| Request | 3 | 1258 | 1261 |
| Response | 1261 | 0 | 1261 |
| DOM | 1261 | 252 | 1513 |
| Parse | 1261 | 93 | 1354 |
| Execute Scripts | 1354 | 0 | 1354 |
| Content loaded | 1354 | 19 | 1373 |
| Sub Resources | 1373 | 140 | 1513 |
| Load event | 1513 | 1 | 1514 |
| Total | | | 1514 |

Key Findings

  1. Request Duration:
  2. DOM Processing:
  3. Sub Resource Loading:

Relevance to DOM Diffing and Alpine.js Suggestion

While the primary bottleneck appears to be server-side processing, the increase in DOM processing time suggests that the efficiency of DOM updates (including DOM diffing) is also a factor.

Regarding DOM Diffing:

Considering Alpine.js:

Recommendations

  1. Optimize Server-Side Processing:

    • Review and optimize database queries and server-side logic to handle large datasets more efficiently.
  2. Implement Lazy Loading/Infinite Scrolling:

    • Loading data incrementally can significantly reduce the initial load time and improve overall performance by distributing the load over time as the user scrolls. This can be implemented alongside traditional pagination, activating when users choose to view larger numbers of records per page.
  3. Efficient Client-Side Rendering:

    • Explore ways to reduce the complexity of the DOM generated for large datasets, such as using virtual DOM techniques or frontend frameworks like Alpine.js.
  4. Leverage Caching:

    • Implement caching strategies both on the server-side and client-side to reduce redundant processing and data retrieval.

Conclusion

The suggestion to use Alpine.js to render cells from JSON data is a valid long-term solution to improve rendering performance. However, as Dan mentioned, it would require careful consideration of the trade-offs, including the significant refactor required and the loss of Blade's flexibility. Combining this approach with server-side optimizations and lazy loading could provide a balanced solution to improve performance.

wychoong commented 1 month ago

Scroll loading and Alpine.js seem to increase the complexity. Why not consider the new wire:replace? https://github.com/livewire/livewire/pull/8403
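For context, wire:replace (Livewire v3.5+) is applied to an element in a Blade view and tells Livewire to swap that subtree wholesale on updates rather than morphing (DOM-diffing) it, roughly like this:

```blade
{{-- Skip DOM diffing for this subtree; replace it outright on updates. --}}
<div wire:replace>
    <table>
        {{-- large, frequently re-rendered table body --}}
    </table>
</div>
```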

mentamarindos commented 1 month ago

As @wychoong mentions, the wire:replace directive added by @samlev in Livewire v3.5.0 (https://github.com/livewire/livewire/pull/8403) may solve this issue.

kl4ver commented 1 week ago

As @wychoong mentions, the wire:replace directive added by @samlev in livewire/livewire#8403 (Livewire v3.5.0) may solve this issue.

Are there any plans to add this? I see it has the low-priority label, but I think this is a big issue for a lot of users.

wychoong commented 1 week ago

As @wychoong mentions, the wire:replace directive added by @samlev in livewire/livewire#8403 (Livewire v3.5.0) may solve this issue.

Are there any plans to add this? I see it has the low-priority label, but I think this is a big issue for a lot of users.

It's an open-source project and Filament is provided for free. For "a lot of users" and a "big issue", there seems to be not enough interest to provide a reproduction repo with a huge-data, slow-performance table and a benchmarking setup, let alone to fund the issue or submit a PR.

Regarding my suggestion of wire:replace: one should understand that it's a relatively new API in Livewire, and it might be opt-in in v3 or only available in v4.

kl4ver commented 1 week ago

As @wychoong mentions, the wire:replace directive added by @samlev in livewire/livewire#8403 (Livewire v3.5.0) may solve this issue.

Are there any plans to add this? I see it has the low-priority label, but I think this is a big issue for a lot of users.

It's an open-source project and Filament is provided for free. For "a lot of users" and a "big issue", there seems to be not enough interest to provide a reproduction repo with a huge-data, slow-performance table and a benchmarking setup, let alone to fund the issue or submit a PR.

Regarding my suggestion of wire:replace: one should understand that it's a relatively new API in Livewire, and it might be opt-in in v3 or only available in v4.

Thanks for the explanation; I thought it would be a small change. Regarding providing a demo: it's also a problem on https://demo.filamentphp.com/blog/categories. If you set the pagination to All (only 400 records), the page takes 4 seconds to load.

danharrin commented 1 week ago

The amount of HTML that needs to be loaded for those 400 records is substantial enough to slow the page down. In most projects, I would remove the "All" option and limit it to 50 per page maximum.
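In a Filament v3 resource, capping the page size looks roughly like this (the option values here are per-project choices, not a recommendation from this thread):

```php
use Filament\Tables\Table;

public static function table(Table $table): Table
{
    return $table
        // Drop the "All" option and cap the page size at 50.
        ->paginated([10, 25, 50])
        ->defaultPaginationPageOption(25);
}
```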

mihob commented 1 week ago

To be honest, it is quite slow even with 50 entries. 400-500ms is quite a lot for a table with only a few entries.

The problem does not only apply to tables; it more or less runs through the whole of Filament.

For example, if I have a form with a repeater field, the request time also increases unreasonably with each entry.

kayvanaarssen commented 1 week ago

To be honest, it is quite slow even with 50 entries. 400-500ms is quite a lot for a table with only a few entries.

The problem does not only apply to tables; it more or less runs through the whole of Filament.

For example, if I have a form with a repeater field, the request time also increases unreasonably with each entry.

Totally agree on this. We experience the same symptoms.

danharrin commented 1 week ago

The more items you add to a repeater field, the more HTML is required to render that repeater

mihob commented 1 week ago

I am aware of that. But with just a few entries, the difference quickly reaches several hundred ms.

If you rendered this with a few plain Blade components, it would be much, much faster.

Don't get me wrong, I really like Filament, but I think performance is a problem that can't just be dismissed.

You very quickly reach request times of over 1s, and you almost can't expect customers to accept that.

danharrin commented 1 week ago

We're working on optimizations involving HTML size and partial rendering for version 4, but there are some limitations that cannot be overcome without compromising in other areas.

shivaRamdeen commented 3 days ago

The HTML is over 14MB. Is this expected? This is loading 200 rows in a table with most columns hidden. image