airbytehq / airbyte

The leading data integration platform for ETL / ELT data pipelines from APIs, databases & files to data warehouses, data lakes & data lakehouses. Both self-hosted and Cloud-hosted.
https://airbyte.com

🐛 Source Amazon S3: solve possible case of files being missed during incremental syncs #5365

Closed: Phlair closed this issue 2 years ago

Phlair commented 3 years ago

Current Behavior

Based on Amazon S3's Consistency Model, we are protected against files changing as they are being synced: we should always get either the old or the new version of a file (never an error or corrupt data). This could mean records appearing more than once, but we won't miss data.

However, that consistency model, coupled with the way S3 sets its last modified property (well explained in this stackoverflow post), makes it entirely possible that a file is not available during a sync but, once available, carries a last modified timestamp earlier than the value in our state. On subsequent syncs the file is available, but we skip it due to the incremental filtering.

An example of how this could happen:
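For instance, a plausible timeline (illustrative only, reconstructed from the description above rather than copied from the original example):

  1. A sync runs at 12:00 and the state cursor is advanced to 12:00.
  2. A large upload of a file began at 11:55 and is still in progress, so the file is not listed during that sync.
  3. The upload completes at 12:05; the file becomes visible with a last modified timestamp of roughly 11:55.
  4. The next sync filters on last modified > 12:00 and skips the file permanently.

A minimal Python sketch of that failure mode (function and variable names are hypothetical, not the connector's actual code):

```python
from datetime import datetime, timezone

# Hypothetical, simplified incremental filter: keep only files strictly newer than the cursor.
def files_to_sync(listed_files, state_cursor):
    return [f for f in listed_files if f["last_modified"] > state_cursor]

# Cursor saved by the sync that ran at 12:00 UTC.
state_cursor = datetime(2021, 8, 10, 12, 0, tzinfo=timezone.utc)

# This file's upload started at 11:55 but only finished (became listable) after 12:00,
# so S3 reports a last modified earlier than the cursor.
late_file = {
    "key": "data/big_file.csv",
    "last_modified": datetime(2021, 8, 10, 11, 55, tzinfo=timezone.utc),
}

# On the next sync the file is finally listed, but the naive filter silently drops it.
print(files_to_sync([late_file], state_cursor))  # -> []
```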

Ideal Behavior

Incremental syncs should never miss files. Any relevant file that is newly available should be synced.

Solution Ideas

Note: for retaining information in state (directly or via a bloom filter), we could put a hard limit of X days of files to care about, to avoid an ever-increasing state size.
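A minimal sketch of that cap, assuming state keeps a mapping of file key to last modified timestamp (the state shape, field names, and the 3-day value are illustrative assumptions):

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 3  # the "X days" hard limit; the value here is an assumption

def prune_history(history: dict, now: datetime) -> dict:
    """Drop entries older than the retention window so state size stays bounded."""
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return {key: ts for key, ts in history.items() if datetime.fromisoformat(ts) >= cutoff}

now = datetime(2021, 8, 13, tzinfo=timezone.utc)
history = {
    "data/old.csv": "2021-08-01T09:00:00+00:00",    # outside the window, dropped
    "data/fresh.csv": "2021-08-12T18:30:00+00:00",  # inside the window, kept
}
print(prune_history(history, now))  # -> {'data/fresh.csv': '2021-08-12T18:30:00+00:00'}
```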

@subodh1810 thanks for pointing this potential issue out and discussing solutions.

Are you willing to submit a PR?

Yes, opening this issue to discuss the approach.

sherifnada commented 2 years ago

@lazebnyi

Some questions about the cuckoo filter approach:

  1. What is the expected false-positive rate? And what is the impact on syncs/performance? What size would the state object become?
  2. How does this compare to something like keeping a cursor field value plus the names of all files synced in the last 3 days? (Presumably 3 days is a safe enough period.) We could compress this list of files if that helps too.

lazebnyi commented 2 years ago

@sherifnada Thanks for the questions.

I think the false-positive rate will be near 0.000001. I think we can use this implementation: https://github.com/huydhn/cuckoo-filter. Sync performance shouldn't be affected. Size = capacity × bucket size × fingerprint size.

A cuckoo filter lookup is O(1) versus O(n) for a list, so for performance I think the cuckoo filter is the better choice.
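To put rough numbers on that size formula (all parameter values below are illustrative assumptions, not defaults of the linked implementation):

```python
# Back-of-the-envelope state-size estimate for a cuckoo filter over recent file keys.
capacity = 250_000        # number of buckets
bucket_size = 4           # fingerprints per bucket (~1M file keys at full load)
fingerprint_bits = 16     # bits per fingerprint; longer fingerprints mean fewer false positives

total_bits = capacity * bucket_size * fingerprint_bits
print(total_bits / 8 / 1024 / 1024, "MiB")  # ~1.9 MiB for roughly a million keys

# Rough false-positive upper bound from the cuckoo filter paper: 2 * bucket_size / 2**fingerprint_bits.
print(2 * bucket_size / 2 ** fingerprint_bits)  # ~1.2e-4 here; reaching ~1e-6 needs ~23-bit fingerprints
```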

sherifnada commented 2 years ago

@lazebnyi what happens if we get a false positive? Do we miss data?

Could you say more about why this approach is better than keeping track of all files synced in the last N days and using that list to determine if a file should be synced? That seems 100% deterministic and simpler to understand. At what scale does this become problematic?

We could even do an adaptive approach down the line where, if that object becomes too big, we transform it into a cuckoo filter or something.
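A sketch of that deterministic version, assuming state holds the cursor plus the keys of files already synced inside a safety window (names and the 3-day window are illustrative, not the connector's actual code):

```python
from datetime import datetime, timedelta, timezone

SAFETY_WINDOW = timedelta(days=3)  # the "last N days"; 3 is the value floated above

def should_sync(file, cursor, recently_synced_keys):
    """Sync anything newer than the cursor, plus anything inside the safety window
    that is not already recorded as synced."""
    if file["last_modified"] > cursor:
        return True
    if file["last_modified"] > cursor - SAFETY_WINDOW:
        return file["key"] not in recently_synced_keys
    return False

cursor = datetime(2021, 8, 10, 12, 0, tzinfo=timezone.utc)
recently_synced_keys = {"data/already_done.csv"}

# The late-appearing file from the example above now gets picked up: it falls inside
# the window and is not in the synced set.
late_file = {"key": "data/big_file.csv",
             "last_modified": datetime(2021, 8, 10, 11, 55, tzinfo=timezone.utc)}
print(should_sync(late_file, cursor, recently_synced_keys))  # -> True
```

The set of recent keys is exactly the piece that could later be swapped for a cuckoo filter if it ever grew too large, per the adaptive idea above.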

Phlair commented 2 years ago

my take, fwiw...

keeping track of all files synced in the last N days, and using that list to determine if a file should be synced

The whole idea of using a cuckoo filter is to do precisely this, just more efficiently in terms of storage.

We could even do an adaptive approach down the line where, if that object becomes too big, we transform it to a cuckoo filter or something

^ imo this sounds like a sensible approach. We should avoid introducing the complexity of a cuckoo filter unless we really have to, and by doing the simpler approach first we might find there are other bottlenecks long before we hit state storage limitations.