Mandaryn opened this issue 9 years ago
Wondering about this lately
@Mandaryn why not use the record's `updated_at` as a local per-record `synced_at` value? We would have to drop the `synced_all_at` naming.
Would also fix https://github.com/BookingSync/synced/issues/40
That would definitely be better, but `updated_at` is still just the timestamp set in Ruby when the object is built in code, not the actual time the transaction was committed and the record became visible to others.
But it should still be much safer than the current solution.
Sounds like we need to add a timestamp to the response from apiv3.
A timestamp alone won't solve the problem either.
@Mandaryn Why? The timestamp in the response would be the most accurate thing here; if we return a timestamp from the moment before making the query for records, it should always be right, no?
I thought the main problem here was the possible difference between client and server time. If we only use the time from the server, it should be ok?
Right, but we can still get records that become visible after this timestamp yet have `updated_at` set before the timestamp was recorded, because they lagged in some longer transaction.
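The race described here can be sketched in plain Ruby (a toy illustration, not ActiveRecord; the timestamps and variable names are made up):

```ruby
require "time"

# updated_at is assigned when the object is built in Ruby...
record_updated_at = Time.utc(2015, 11, 26, 11, 0, 0)

# ...but the record only becomes visible to other sessions at commit,
# which a slow transaction can delay by seconds.
commit_time = Time.utc(2015, 11, 26, 11, 0, 2)

# A sync running in between records its cutoff after updated_at was set
# but before the transaction committed.
sync_cutoff = Time.utc(2015, 11, 26, 11, 0, 1)

# The next incremental sync asks for "updated_at > cutoff", so this
# record is skipped even though it only became visible after the cutoff.
record_missed = record_updated_at <= sync_cutoff && commit_time > sync_cutoff
```

Any record whose commit straddles the cutoff this way is silently lost by partial syncs.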
Are you sure that `updated_at` is not set on commit?
I don't think so, but feel free to prove me wrong.
Got lost here. Even if some longer transaction happens, we would have the timestamp from the server to assign as the `synced_all_at` value; `updated_at` would really have nothing to do with it.
Hah, you're right about the server timestamp, but this is wrong on another level: what would happen with paginated syncs?
this is another level of madness
We could use the timestamp from the first page.
And accept that we would resync some records twice if they were added on later pages. Makes sense.
Possible resyncing is not a big deal, I think; it shouldn't happen that often, and even if it does, it's not a problem.
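A minimal sketch of the agreement above, storing the first page's server timestamp as `synced_all_at` (the method and key names here are assumptions for illustration, not the actual synced gem API):

```ruby
require "time"

# Collect all pages, but keep only the FIRST page's server timestamp
# as the cutoff for the next partial sync.
def paginated_sync(pages)
  synced_all_at = pages.first && pages.first[:server_timestamp]
  records = pages.flat_map { |page| page[:records] }
  # Records committed after synced_all_at but returned on later pages
  # will simply be fetched again on the next partial sync.
  { records: records, synced_all_at: synced_all_at }
end

pages = [
  { server_timestamp: Time.utc(2015, 11, 26, 11, 0, 0), records: [1, 2] },
  { server_timestamp: Time.utc(2015, 11, 26, 11, 0, 5), records: [3] },
]
result = paginated_sync(pages)
```

Using the first page's timestamp errs on the safe side: anything that changed while later pages were being fetched is still newer than the cutoff and gets picked up again next time.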
Sounds good, so we favor the server timestamp of the first GET request over each record's `updated_at`?
`updated_at` from Core, of course.
yes
Think we can use an existing header then.
`date: "Thu, 26 Nov 2015 11:34:48 GMT"`
I don't think so. I didn't check the code, but this one is probably added at the end of the request, likely in some serializer/responder, which might be too late. I would say we should take a timestamp from the beginning of the request, to be sure that everything updated in the meantime is also caught by the next sync. Makes sense? Consider a request with includes that takes 1 second to finish: either we have a 1-second gap or we don't, and I think it could make a difference.
Just tested this and you are correct: it's applied at the end of the request.
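One way to get a timestamp from the beginning of the request instead is a Rack-style middleware; a hedged sketch (the class and header names are invented here, this is not an existing apiv3 header):

```ruby
require "time"

# Captures a timestamp before the app runs any queries and exposes it
# in a response header, so the client can use it as its sync cutoff.
class RequestStartTimestamp
  def initialize(app)
    @app = app
  end

  def call(env)
    started_at = Time.now.utc
    status, headers, body = @app.call(env)
    headers["X-Request-Started-At"] = started_at.iso8601
    [status, headers, body]
  end
end

# Usage with a stub downstream app:
app = ->(env) { [200, { "Content-Type" => "application/json" }, ["[]"]] }
status, headers, _body = RequestStartTimestamp.new(app).call({})
```

Because the timestamp predates the queries, anything committed while the (possibly slow) request is being served is still newer than the cutoff and will be included in the next partial sync.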
Right now this functionality has a timing window: changes that appear between when the data is collected at bs_core and when `Time.zone.now` is called at the client after syncing will be ignored by partial updates, since the next sync only considers data updated after `synced_all_at`.
Either it should be clearly stated in the docs that partial updates are not 100% accurate and need periodic resyncing, or the timestamp should be calculated from the collected data: either as the maximum `updated_at` of the synced data, or as the time the data was read at bs_core.
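The "maximum `updated_at` of the synced data" option could look roughly like this (the record shape and method name are assumed for illustration, not the gem's actual implementation):

```ruby
require "time"

# Derive the sync cutoff from the collected data itself: the newest
# updated_at among the synced records, falling back to a caller-supplied
# timestamp when the batch is empty.
def synced_all_at_for(records, fallback:)
  records.map { |r| r[:updated_at] }.compact.max || fallback
end

records = [
  { id: 1, updated_at: Time.utc(2015, 11, 26, 11, 34, 48) },
  { id: 2, updated_at: Time.utc(2015, 11, 26, 11, 34, 50) },
]
cutoff = synced_all_at_for(records, fallback: Time.utc(2015, 11, 26, 11, 0, 0))
```

A cutoff derived from the data itself cannot outrun the data, so nothing can slip into the window between collection at bs_core and the client-side `Time.zone.now` call.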