Could you tell me the scenario? Why do you need such a huge number of tables: 10,000 × 1,000,000 = 10,000,000,000? How would you write SQL to access that many tables? Maybe you can review and redesign the data model to use fewer tables.
I would like each user to have their own table.
What's the best way to do this?
Give each user a unique ID, then use that ID to fetch the user's data from a shared table that has an extra user_ID column. If one table ends up with too many rows, you can spread the load across several user-data tables, e.g. with user_ID % 10 or some other way of distributing different users to different data tables. For memory usage, you can build a prototype and measure the memory footprint.
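A minimal sketch of that layout, assuming ten shard tables named user_data_0 … user_data_9 and an illustrative payload column (all names here are hypothetical, not from this project):

```sql
-- One shared schema per shard; user_ID is part of the key instead of
-- every user getting a private table. The shard is chosen as user_ID % 10.
CREATE TABLE user_data_0 (
    user_ID BIGINT       NOT NULL,
    item_ID BIGINT       NOT NULL,
    payload VARCHAR(255),            -- the ~255-byte row body
    PRIMARY KEY (user_ID, item_ID)
);
-- ... same definition for user_data_1 through user_data_9 ...

-- Reading one user's data: the application computes user_ID % 10 and
-- queries only that shard, e.g. for user_ID = 42 (42 % 10 = 2):
SELECT payload
FROM user_data_2
WHERE user_ID = 42;
```

The application picks the shard table from user_ID % 10, so no single table has to hold every user's rows and a query for one user only touches that user's shard.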
Given:
1. 1 million databases, each with 10,000 tables, each table with 100,000 rows of 255 bytes.
2. 10,000 databases, each with 1 million tables, each table with 100,000 rows of 255 bytes.
In scenario 1, switching between databases should be very slow; in scenario 2, switching between tables should be very slow.
Either way, I'm assuming only one table is touched per access, so 100,000 rows × 255 bytes = 25.5 MB of memory is used for each access.
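Spelling out that per-access estimate (assuming 255 bytes is the entire per-row footprint, with no index or storage overhead counted):

$$100{,}000 \text{ rows} \times 255 \text{ bytes/row} = 25{,}500{,}000 \text{ bytes} \approx 25.5 \text{ MB}$$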
But which is better for a high user-load scenario, and why?