Closed Isarien closed 10 months ago
Hi @Isarien, you can indeed use the data files directly. That way you can do what you need in BigQuery (I'm not familiar with it myself).
Then you can save the dataset in your solution. That is also what the notebook does, so you don't have to rely on the BC documentation.
Hope this makes it clear for you.
@Bertverbeek4PS Thank you for your answer. It's clear.
Hello Everyone,
First, thank you for the work done on this project. It is very useful.
Now, my question:
My company has multiple Business Central servers managed by a software integrator. To feed our data lake hosted in BigQuery, they advised us to use the ADLSE plugin. We use the delta files directly as input to BigQuery (aggregated into a raw table), and with DBT we generate snapshots of the tables.
I understand that deleted rows correspond to rows with a null "systemCreatedAt" value. However, for data updates, I need clarification. Should I rely solely on the fields systemId, Company, and systemModifiedAt (keeping only the latest updated row, as written in the Spark code here), or should I consider the keys defined in the source code of the Business Central tables (especially the keys with the Clustered attribute)?
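To make sure we're talking about the same thing, here is a rough sketch of the dedup-then-filter logic I mean (the first option), in plain Python rather than Spark. The field names follow the ones above; the sample values and the function name are made up for illustration:

```python
from datetime import datetime

# Hypothetical sample of rows aggregated from the ADLSE delta files.
# The last row is a deletion marker (null systemCreatedAt).
rows = [
    {"systemId": "A", "Company": "CRONUS", "systemModifiedAt": datetime(2024, 1, 1),
     "systemCreatedAt": datetime(2023, 1, 1), "Amount": 10},
    {"systemId": "A", "Company": "CRONUS", "systemModifiedAt": datetime(2024, 2, 1),
     "systemCreatedAt": datetime(2023, 1, 1), "Amount": 20},
    {"systemId": "B", "Company": "CRONUS", "systemModifiedAt": datetime(2024, 3, 1),
     "systemCreatedAt": None, "Amount": 5},
]

def latest_state(rows):
    """Per (systemId, Company), keep only the row with the newest
    systemModifiedAt, then drop deletion markers (null systemCreatedAt)."""
    latest = {}
    for r in rows:
        key = (r["systemId"], r["Company"])
        if key not in latest or r["systemModifiedAt"] > latest[key]["systemModifiedAt"]:
            latest[key] = r
    return [r for r in latest.values() if r["systemCreatedAt"] is not None]

result = latest_state(rows)
# Only record A survives, with its latest state (Amount = 20).
```

My question is whether this key (systemId + Company) is always sufficient, or whether the table's own primary/clustered keys matter for merging updates.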
I ask this question because I found this information in the BC documentation: