Overview

Subtle additional green / gray dot decoder indicator:
Green = new information gained by the current animated QR frame
Gray = successfully decoded the current animated QR frame, but no new info learned
(no dot) = no QR of any kind detected in the current camera frame
Minor bugfix: the screensaver can now kick in as soon as a very long scanning attempt is exited.
In-Depth description
The current progress metric just guesstimates that `1.75 * n` animated frames will be needed for an animated QR that holds `n` full frames' worth of data, capped at 99% until all data is read. Unfortunately the metric counts frames that don't add any new data, so for very long animated QRs, useless reads pile up and keep advancing the progress metric until it gets pinned at 99%.
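As a rough sketch, the original metric amounts to something like the following (illustrative names only; not the actual SeedSigner code):

```python
def legacy_progress(frames_seen: int, n_full_frames: int) -> float:
    """Guesstimate progress as frames_seen / (1.75 * n_full_frames)."""
    expected_frames = 1.75 * n_full_frames
    # Pin at 99% until the decoder separately reports completion.
    return min(frames_seen / expected_frames, 0.99)
```

Because `frames_seen` counts every successfully read frame, including ones that add no new data, this ratio keeps climbing even when decoding is stalled.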
But because of the nature of the mixed / XOR frames in the fountain encoding, there's no exact way to quantify current progress. This PR's new metric counts each component of a mixed frame as partial progress, weighted by its proportion within that frame and summed across all mixed frames:
e.g.
We need to decode full frames A through Z.
We score a fully decoded frame as 1.0.
Mixed frame i is an XOR of full frames G, K, Q, and Z.
So we count 1/4 G, 1/4 K, 1/4 Q, and 1/4 Z towards our total progress.
Then we receive mixed frame j, which is an XOR of full frames A, F, G, P, and Y.
So frame G's contribution to our progress is now 1/4 + 1/5.
It's possible to have G XORed in so many mixed frames that its summed fractional score might exceed 1.0, so we need to cap G's scoring value until we fully decode it. The cap selected in this PR -- 0.75 -- started from an arbitrary guess and then was tweaked based on how the progress percentage was reported in real-world use (oddly, larger values could result in the progress percentage occasionally decreasing(!!), though I wonder if that might have been more due to some threading artifact?).
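The scheme above can be sketched roughly as follows (names and data structures are assumptions for illustration; the actual decoder integrates this into its frame-processing loop):

```python
# Illustrative sketch of the fractional-progress idea described above;
# not the actual SeedSigner implementation.
DECODED_SCORE = 1.0   # a fully decoded frame counts as 1.0
UNDECODED_CAP = 0.75  # empirically chosen cap from this PR

def fountain_progress(mixed_frames, decoded, n_full_frames):
    """mixed_frames: iterable of sets of full-frame indices XORed together.
    decoded: set of full-frame indices already fully recovered."""
    credit = {}
    for components in mixed_frames:
        share = 1.0 / len(components)  # proportional credit per component
        for idx in components:
            credit[idx] = credit.get(idx, 0.0) + share
    score = 0.0
    for idx in range(n_full_frames):
        if idx in decoded:
            score += DECODED_SCORE
        else:
            # Cap an undecoded frame's summed fractional credit at 0.75
            score += min(credit.get(idx, 0.0), UNDECODED_CAP)
    return min(score / n_full_frames, 0.99)
```

With the example above (frames A..Z as indices 0..25), frame G (index 6) accrues 1/4 + 1/5 = 0.45 after the two mixed frames, and its credit would be capped at 0.75 no matter how many more mixed frames include it, until it is fully decoded.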
This metric accelerates the reported progress in the early stages of decoding a long animated QR vs the original metric. But this metric predictably slows down toward the end as we receive less and less new information and instead are stuck waiting for just the right XOR frame to start being able to unlock the big collection of mixed frames we're holding in memory. So while we don't pin the reported progress at 99% for what could feel like (or actually be) forever, we do still experience a fair bit of agony at, say, 85% as progress begins to crawl.
Design by @easyuxd:
Note: no new screenshots added in this PR because the screenshot generator can't produce screenshots for live camera preview screens.
Very large animated QR for testing
https://github.com/SeedSigner/seedsigner/assets/934746/96f3f99c-d720-4607-bfd9-60dca9a6fc27
This pull request is categorized as a:
Checklist
Ran `pytest` and made sure all unit tests pass before submitting the PR.
If you modified or added functionality/workflow, did you add new unit tests?
Test suite cannot simulate animated QR scanning.
I have tested this PR on the following platforms/os: