Version 2 Release to Master
See new documentation here: https://github.com/KingdomFirst/Bulldozer/wiki/Bulldozer-V2-CSV
Description
What does the change add or fix?
This PR constitutes a major rewrite of much of the Bulldozer application. The overarching philosophies behind this rewrite are:
Move from a row-by-row to a bulk processing approach.
Restructure the csv files to be more compatible with RockRMS Slingshot extraction packages.
Bulk Processing: Historically, Bulldozer has functioned by processing data in real time as it iterates through every row of every csv file. This has worked reliably, but at a performance cost. With this rewrite, Bulldozer uses a newer csv processing library to import entire csv files directly into lists of C# classes, making the data much more efficient to work with. From there, new "import" classes act as an interface layer that quickly translates the csv data into objects ready to interact directly with Rock entities. This allows Bulldozer to take advantage of Rock's built-in bulk processing framework, much like the core Bulk Import block does, and results in a significant performance improvement for migrations.
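To illustrate the pattern (this is a simplified sketch, not the actual Bulldozer code), the flow looks roughly like the following. The CsvHelper library, the PersonCsv class, and the final bulk call are stand-ins used only for this example; the real csv library, import classes, and Rock calls Bulldozer uses may differ:

```csharp
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;
using CsvHelper; // assumed csv library for this sketch only

// Hypothetical class whose properties mirror the columns of one csv file.
public class PersonCsv
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}

public static class BulkCsvSketch
{
    public static void ImportPeople( string csvPath )
    {
        // 1. Read the entire csv file into a List of typed objects in one pass,
        //    instead of handling it row by row.
        List<PersonCsv> rows;
        using ( var reader = new StreamReader( csvPath ) )
        using ( var csv = new CsvReader( reader, CultureInfo.InvariantCulture ) )
        {
            rows = csv.GetRecords<PersonCsv>().ToList();
        }

        // 2. Translate the csv rows into "import" shaped objects that line up
        //    with the Rock entity they will become.
        var peopleImports = rows.Select( r => new
        {
            ForeignId = r.Id,
            r.FirstName,
            r.LastName,
            r.Email
        } ).ToList();

        // 3. Hand the whole list to Rock's bulk processing in a single call
        //    (for example a RockContext.BulkInsert style method), rather than
        //    issuing a save for every individual row.
    }
}
```

The key point is that the database work happens once per file (or per batch) rather than once per row, which is where the performance gain comes from.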
CSV File Restructure: Bulldozer has always worked from its own data structure based on csv columns. With this rewrite, we decided to align our csv structures with those RockRMS has already defined for the Bulk Import tool. The idea is to make Bulldozer a little more user friendly when the csv data comes from a standardized Slingshot package exported from another system. This requires more csv files than before, but we feel that trade-off is worth the benefit of being able to process Slingshot packages. That said, as of this update, no Slingshot-generated csv files have been directly tested, so we cannot yet guarantee they will work without some manipulation. That is the goal, though.
Work in Progress: As you may imagine, this is a large undertaking, and it will take several iterations to touch everything we want to. Though most of the major data elements have been enhanced, there are still many that have not; they continue to function as they always have. We will continue to enhance the remaining pieces we are targeting as time and resources allow.
New data elements:
Businesses
Business Contacts
Fundraising Groups (including financial transaction connections)
Optimized data elements:
Non-Optimized data elements: