What will this achieve?
Virtual scrolling would never run out of browser resources, even when used with extremely large datasets.
How will success be measured?
When using an extremely large dataset (100k, 1M rows, or more), you should be able to keep scrolling without overtaxing the browser's resources.
Additional information
Today, virtual scrolling works via 'bucket growth': the fetch callback is invoked whenever the grid needs to grow the number of available rows, and that row buffer grows continually. With an extremely large dataset, this is not sustainable. Supporting a sliding-window approach would instead have the DataGrid ask the application for a specific window of data based on the user's scrolling direction; the application would obtain that window and provide it to the DataGrid, so rows that scroll out of range can be released.
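As a rough illustration of the sliding-window idea, here is a minimal sketch of what the application-side provider could look like. The names (`WindowRequest`, `SlidingWindowProvider`, `fetchWindow`) are hypothetical and not part of any existing DataGrid API; the point is that memory stays proportional to the window size rather than to the total rows ever scrolled past.

```typescript
// Hypothetical shape of the request the grid would issue as the user scrolls.
interface WindowRequest {
  startRow: number; // first row index the grid needs
  count: number;    // rows in the visible window (plus any overscan)
}

// Application-side provider: serves only the requested window, so previously
// fetched windows can be discarded instead of accumulating in a growing bucket.
class SlidingWindowProvider<T> {
  constructor(
    private totalRows: number,
    private loadRows: (start: number, count: number) => T[],
  ) {}

  fetchWindow(req: WindowRequest): T[] {
    // Clamp the window so it never runs past either end of the dataset.
    const start = Math.max(0, Math.min(req.startRow, this.totalRows - req.count));
    const count = Math.min(req.count, this.totalRows - start);
    return this.loadRows(start, count);
  }
}
```

The grid would call `fetchWindow` on each scroll step (in either direction), and the application would back `loadRows` with a paged server query or an on-disk cursor, keeping browser memory bounded regardless of dataset size.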