MobileNativeFoundation / Store

A Kotlin Multiplatform library for building network-resilient applications
https://mobilenativefoundation.github.io/Store/
Apache License 2.0

[BUG] Cached(refresh = true) will collect SoT reader twice #572

Open sky-ricardocarrapico opened 1 year ago

sky-ricardocarrapico commented 1 year ago

Describe the bug
I'm not completely sure this is a bug, but I can't think of a reason for it not to be one, so I'd like some confirmation.

When requesting with StoreReadRequest.cached(key = key, refresh = true), the SoT reader gets collected twice. This effectively restarts any DB observation and reruns the queries.

While in most situations this shouldn't be an issue, I have a very particular case where I'm leveraging the scope of the reader flow (using channelFlow) to run some operations for as long as someone is still collecting it. With the reader being collected twice, my scope gets cancelled and so do those operations.

I really can't think of a reason why it needs to collect twice. I'd expect that when we start .stream, it collects the reader once and just stays there.

Using StoreReadRequest.cached(key = key, refresh = false) works without issue.
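
For context, here is a minimal sketch of the kind of setup I mean. The UserDb facade, keepWarm work, and api lambda are purely illustrative, and the builder call assumes the SourceOfTruth.of(reader = ..., writer = ...) / StoreBuilder.from shape shown in the Store 5 README (exact generics may differ between versions):

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.channelFlow
import kotlinx.coroutines.launch
import org.mobilenativefoundation.store.store5.Fetcher
import org.mobilenativefoundation.store.store5.SourceOfTruth
import org.mobilenativefoundation.store.store5.StoreBuilder
import org.mobilenativefoundation.store.store5.StoreReadRequest

// Hypothetical local DB facade, only for illustration.
interface UserDb {
    fun observeUser(id: String): Flow<String?>
    suspend fun upsertUser(id: String, value: String)
    // Long-running work that should live only while the reader is being collected.
    suspend fun keepWarm(id: String)
}

fun buildUserStore(db: UserDb, api: suspend (String) -> String) =
    StoreBuilder.from(
        fetcher = Fetcher.of { id: String -> api(id) },
        sourceOfTruth = SourceOfTruth.of(
            reader = { id: String ->
                // channelFlow so extra work can be launched in the collection's scope;
                // cancelling the collection cancels keepWarm as well.
                channelFlow<String?> {
                    launch { db.keepWarm(id) }
                    db.observeUser(id).collect { send(it) }
                }
            },
            writer = { id, value -> db.upsertUser(id, value) },
        ),
    ).build()

// Reproduces the behavior described above:
//   store.stream(StoreReadRequest.cached(key = "42", refresh = true)).collect { ... }
// collects the reader twice, cancelling keepWarm in between, whereas
//   store.stream(StoreReadRequest.cached(key = "42", refresh = false)).collect { ... }
// keeps a single reader collection alive.
```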

Is it really a bug? Thanks!

To Reproduce
See above.

Expected behavior
With only one call to .stream, the reader should only be collected once.

Screenshots
N/A


Additional context
N/A

digitalbuddha commented 1 year ago

I don't believe this was intentional. I'll take a look before the next release. Happy to accept a PR if you want to investigate yourself.

kakai248 commented 1 year ago

So I investigated what's happening and wrote a failing test here: https://github.com/MobileNativeFoundation/Store/commit/3f05bfaf890cddaef5d36ac036b2b6c9f978b289

This is all due to how SourceOfTruthWithBarrier works: it blocks reads while a write is happening. But this "block" stops collecting the SoT reader and restarts the collection after the write is done.

What is the reason we need this mechanism? Would a debounce work here instead? We would keep collecting but discard all values while a write is in progress, and once it finishes only let the last one pass (see the sketch below).
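
Roughly what I have in mind, as a sketch only (the writeInFlight flag here is a stand-in for whatever barrier state SourceOfTruthWithBarrier tracks internally, not its actual API):

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.combine
import kotlinx.coroutines.flow.filter
import kotlinx.coroutines.flow.map

/**
 * The reader flow is collected exactly once; the barrier only decides which
 * emissions get through instead of suspending and restarting the upstream collection.
 */
class GatedReader<T> {
    private val writeInFlight = MutableStateFlow(false)

    fun gate(reader: Flow<T>): Flow<T> =
        combine(reader, writeInFlight) { value, writing -> value to writing }
            // Drop emissions produced while a write is in flight...
            .filter { (_, writing) -> !writing }
            // ...combine re-emits the latest reader value once the barrier
            // reopens, so the last state seen during the write still gets delivered.
            .map { (value, _) -> value }

    suspend fun <R> write(block: suspend () -> R): R {
        writeInFlight.value = true
        return try {
            block()
        } finally {
            writeInFlight.value = false
        }
    }
}
```

(If no new value arrived during the write, reopening the barrier re-delivers the previous value; that duplicate could be dropped with distinctUntilChanged if it matters.)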

digitalbuddha commented 1 year ago

Oh gosh, this has been a few years. I think the thinking here was that we wanted to mark the response origin of the next read as coming from the network rather than from disk. To prevent races, we disabled reading until after the write succeeded. I would imagine filtering reads out until after the write succeeds would give similar functionality. Happy to accept a PR if you'd like to take a stab at it.
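
To illustrate the origin concern with a sketch (the names here are hypothetical and this is not how the library implements it; it just shows why suspending reads makes the attribution safe):

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map

// Hypothetical origin marker, standing in for the library's real response-origin type.
enum class Origin { Fetcher, SourceOfTruth }

/**
 * After a fetch is written to disk, the next disk emission should be attributed
 * to the fetcher rather than to the source of truth. One way to do that without
 * cancelling the reader is to version writes and compare versions at emission time.
 */
class OriginTagger<T> {
    private var writeVersion = 0L   // bumped when a network-originated write lands
    private var lastAttributed = 0L // last write version already credited to the fetcher

    fun markNetworkWrite() {
        writeVersion += 1
    }

    fun tag(reader: Flow<T>): Flow<Pair<T, Origin>> =
        reader.map { value ->
            if (writeVersion > lastAttributed) {
                lastAttributed = writeVersion
                // First emission seen after the write: report it as fetcher-originated.
                value to Origin.Fetcher
            } else {
                value to Origin.SourceOfTruth
            }
        }
}
```

The race this glosses over is exactly the one mentioned above: a disk emission already in flight before markNetworkWrite() could be mis-attributed to the fetcher. Suspending reads until the write completes avoids that, at the cost of restarting the reader collection.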

Not sure if @yigit remembers anything better