Closed — mehmetaergun closed this issue 7 years ago
This has been implemented in the latest release: https://github.com/dataproofer/Dataproofer/releases/tag/v1.5.0. Please let us know if there are any additional issues in this area; I'd be happy to continue improving our internal chunking system.
Summary
It would be nice if Dataproofer came with basic built-in SQL capabilities (e.g. SQLite) or other tooling (e.g. `split` in bash) that would let it process large datasets in chunks of 10,000 rows or less, as suggested by the warning displayed when a large dataset is loaded (which was a great idea, by the way, thanks :) ), without the user having to pre-process the dataset manually.
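For reference, here is the kind of manual pre-processing users currently have to do, sketched with `split` as suggested above. This is a hypothetical workaround, not part of Dataproofer; the filename `data.csv` is just an example, and the snippet generates a small sample file so it can run on its own. It splits the rows into 10,000-row chunks and re-attaches the header to each chunk so every piece is a valid CSV:

```shell
# Generate a sample CSV (header + 25,000 rows) standing in for a large dataset.
{ echo "id,value"; seq 1 25000 | awk '{print $1",v"$1}'; } > data.csv

# Save the header, then split the remaining rows into 10,000-line chunks.
head -n 1 data.csv > header.csv
tail -n +2 data.csv | split -l 10000 - chunk_

# Prepend the header to each chunk so each part is a self-contained CSV.
for f in chunk_*; do
  cat header.csv "$f" > "part_${f#chunk_}.csv"
  rm "$f"
done
rm header.csv

ls part_*.csv  # each part can now be loaded into Dataproofer separately
```

Automating something like this inside Dataproofer (or doing the equivalent with a SQLite staging table) is what this request is asking for.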
Thanks :)