Our database is too large. This issue tracks four tasks to address that:
1- Sub-sample the .sql files.
2- Load the sub-sampled files into a SQL engine to create the tables.
3- Export two CSV files for each table: a bigger one to train the model, and a smaller one to put on GitHub for execution and testing.
4- Upload the smaller CSV files to GitHub.
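Step 3 could be sketched roughly as follows. This is a hypothetical illustration, not our actual script: the database path, table discovery query (SQLite here), output filenames, and the 5% test fraction are all placeholder assumptions.

```python
import csv
import sqlite3

TEST_FRACTION = 0.05  # assumed share of rows kept for the small GitHub copy

def export_table(conn, table):
    """Write <table>_train.csv (bigger) and <table>_test.csv (smaller)."""
    cur = conn.execute(f"SELECT * FROM {table}")
    header = [d[0] for d in cur.description]
    rows = cur.fetchall()
    cutoff = max(1, int(len(rows) * TEST_FRACTION))
    with open(f"{table}_train.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(header)
        w.writerows(rows[cutoff:])  # bigger file: used to train the model
    with open(f"{table}_test.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(header)
        w.writerows(rows[:cutoff])  # smaller file: committed to GitHub

if __name__ == "__main__":
    conn = sqlite3.connect("subsampled.db")  # placeholder path
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for t in tables:
        export_table(conn, t)
```

If rows should be sampled randomly rather than split by position, an `ORDER BY RANDOM()` in the query (or a shuffle before slicing) would do it; the split above is kept deterministic for simplicity.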