Open · skef opened this issue 5 days ago
Let's think about two per-patch measures:
- StuffYouNeed: How many of the codepoints and features needed are loaded by the patch. For now I'm happy to think about this as the simple sum of the two, meaning that the client can easily calculate this using some set operations. (This is setting aside the nascent UVS-motivated mechanism.)
- TotalStuff: "How much" is loaded by the patch. This is impractical for the client to measure as things stand.
If you could measure TotalStuff, it seems like a good criterion would be:
- Choose the patches that maximize StuffYouNeed
- Among those, choose the patch that minimizes TotalStuff, choosing arbitrarily among ties.
Agreed, this is quite similar to the approach I used in the prototype, which worked quite well in the ift demo. For reference, a description of the criteria I used is here. In the prototype I used the total # of code points in the patch's subset definition as a proxy for TotalStuff.
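For concreteness, here's a minimal sketch of that criterion using the codepoint-count proxy for TotalStuff. The `PatchEntry` shape, field names, and `select_patch` function are purely illustrative, not the actual IFT patch map structures or any client's API:

```python
from dataclasses import dataclass, field

@dataclass
class PatchEntry:
    uri: str
    codepoints: set[int] = field(default_factory=set)
    features: set[str] = field(default_factory=set)

def select_patch(entries, needed_codepoints, needed_features):
    def stuff_you_need(e):
        # Simple sum of matched codepoints and matched features.
        return (len(e.codepoints & needed_codepoints)
                + len(e.features & needed_features))

    def total_stuff_proxy(e):
        # Stand-in for TotalStuff: size of the entry's subset definition.
        return len(e.codepoints) + len(e.features)

    candidates = [e for e in entries if stuff_you_need(e) > 0]
    if not candidates:
        return None
    # Maximize StuffYouNeed; among ties, minimize the TotalStuff proxy.
    return max(candidates,
               key=lambda e: (stuff_you_need(e), -total_stuff_proxy(e)))
```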
This is a very simple strategy and seems like a good starting point for discussion.
Now, since TotalStuff is hard to calculate, one approach would be to add a numeric field representing it to the patch map. I think we've discussed something along these lines in the past, with the expected distaste. What does it represent? How would the encoder calculate it? Do we really need to pay the cost for it in the patch table? What about patches listed multiple times? Etc. I share these negative thoughts.
Happily, we don't really care about the numeric value; it's just there to break ties relative to StuffYouNeed. And I think we're not currently using the patch map order for anything. (It clearly doesn't factor into the extension algorithm, but there might be order-based size optimizations I'm not remembering. If there are, think in terms of an added numeric field to represent the TotalStuff order, and unpacking the patch maps into those orders before analysis.) So the encoder could order the patches in the maps by "increasing TotalStuff", and the second part of the updated criterion would be:
- Among those, choose the first patch.
So in practice the algorithm would just maintain the current patch maximizing StuffYouNeed as it goes through the list.
Now, going back to generality, the encoder doesn't have to think in terms of TotalStuff. It can just think in terms of the StuffYouNeed criterion and which patches have priority over which other patches when there are ties. For example, while there is necessarily a "total ordering" in the map, from the encoder's perspective there might just be a set of partial orderings arbitrarily collapsed into the map.
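To illustrate the encoder-side view (a sketch only, with made-up patch names): priority constraints form a partial order, and any topological sort of them is an acceptable map order.

```python
from graphlib import TopologicalSorter

# Each patch maps to the set of patches that should be preferred over it
# (i.e. appear earlier in the map) when ties occur.
priority_over = {
    "latin_core.patch": set(),                        # outranked by nothing
    "latin_ext.patch":  {"latin_core.patch"},         # core wins ties
    "remainder.patch":  {"latin_core.patch", "latin_ext.patch"},
}
map_order = list(TopologicalSorter(priority_over).static_order())
# One valid result: ['latin_core.patch', 'latin_ext.patch', 'remainder.patch']
```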
Agreed, I like the approach of using ordering in the patch map to provide a priority signal, since that's basically free from an encoding standpoint. Definitely better than using code point count like I did in the prototype.
Concretely, here's what I'd propose for spec changes: in patch selection, for selecting within full or partial invalidation groups, mention that a client should pick the patch which has the most StuffYouNeed, but don't define exactly how that's calculated (we could make some recommendations), and then specify that any ties are broken by the actual entry ordering within a patch mapping. I'm leaning towards not explicitly defining StuffYouNeed as I think there's likely some variability in what specific clients might value.
As an example, let's say the client currently needs some subset of the Latin alphabet and the user is on a non-metered, fast connection. The client might reasonably decide that StuffYouNeed includes the rest of the Latin alphabet, as it's likely to be needed eventually, and would thus favour patches which contain more of Latin. Since the user's connection is fast, it's worthwhile to grab that preemptively. However, if instead the user was on a metered and/or slow connection, the client may instead value getting only exactly what it currently needs to minimize byte transfer.
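A rough sketch of how a client might implement that rule, reusing the illustrative `PatchEntry` shape from the earlier sketch: a single forward pass in map order with a strict comparison gets both the maximization and the order-based tie-break.

```python
def select_patch_by_order(entries, needed_codepoints, needed_features):
    # 'entries' must be iterated in patch-map order: the encoder's ordering
    # is the priority signal used to break ties.
    best, best_score = None, 0
    for entry in entries:
        score = (len(entry.codepoints & needed_codepoints)
                 + len(entry.features & needed_features))
        if score > best_score:  # strict '>' keeps the earliest entry among ties
            best, best_score = entry, score
    return best
```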
In the prototype I used the total # of code points in the patch's subset definition as a proxy for TotalStuff.
Sounds like you're already leaning towards the map ordering system, but just for the record it might be worth capturing here the problems with using the patch map contents for this. There are at least two:
I don't want to claim these are normal cases, but when we get into e.g. emoji fonts I worry we could run into some of these things.
Concretely, here's what I'd propose for spec changes: in patch selection, for selecting within full or partial invalidation groups, mention that a client should pick the patch which has the most StuffYouNeed, but don't define exactly how that's calculated (we could make some recommendations), and then specify that any ties are broken by the actual entry ordering within a patch mapping. I'm leaning towards not explicitly defining StuffYouNeed as I think there's likely some variability in what specific clients might value.
I don't think I'm in the same place, so maybe this is something to dig into over our next few meetings.
The general approach we're taking is smart encoder/simple client. Because the analysis is done up-front and isn't time-sensitive, the encoder can expend as much energy as it likes. That doesn't mean that the client can't expend any extra energy, but "freedom" on the client side more or less equates to unpredictability on the encoder side. I think there's more benefit to focusing potential cleverness on where we have the cycles.
As an example, let's say the client currently needs some subset of the Latin alphabet and the user is on a non-metered, fast connection. The client might reasonably decide that StuffYouNeed includes the rest of the Latin alphabet, as it's likely to be needed eventually, and would thus favour patches which contain more of Latin. Since the user's connection is fast, it's worthwhile to grab that preemptively. However, if instead the user was on a metered and/or slow connection, the client may instead value getting only exactly what it currently needs to minimize byte transfer.
I don't disagree with the need for clients to change their behavior like this, but the question is what level to address it on. I would handle this particular example by allowing the client to adjust the subset parameters rather than the algorithm. So it might go through the map tracking two parameter sets and then decide between them (sketched below).
However, it's worth noting that if the client wants to determine the cost of pre-loading more codepoints, rather than simply making an executive decision, it can't effectively do that in the general case. If you've got table-keyed patches you can only guess at either the relative sizes or (in many cases) how many further patches you may need to load. All the problems with estimating TotalStuff accurately come back into the picture. I guess with this new system we could recommend some sort of weight based on patch sort order but it would be pretty cheesy.
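Here's a hypothetical sketch of that two-parameter-set approach, building on `select_patch_by_order` from above. The cost comparison (codepoint counts and a 10x cutoff) is an arbitrary placeholder for exactly the kind of rough heuristic being discussed, not a recommendation:

```python
def choose_parameter_set(entries, exact_need, speculative_need, metered):
    # exact_need / speculative_need are (codepoints, features) pairs; the
    # speculative set might add, say, the rest of Latin.
    exact_pick = select_patch_by_order(entries, *exact_need)
    spec_pick = select_patch_by_order(entries, *speculative_need)
    if metered or spec_pick is None:
        return exact_need, exact_pick
    # On a fast, unmetered connection, accept the speculative set unless its
    # chosen entry looks far larger than the exact pick (threshold is made up).
    if exact_pick and len(spec_pick.codepoints) > 10 * len(exact_pick.codepoints):
        return exact_need, exact_pick
    return speculative_need, spec_pick
```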
So moving back up to the general question, are there cases where:
Thinking about possible language for client flexibility, here's a very rough draft. (I'm being really sloppy with terminology here rather than searching out all of the definitions I should be using.) This is relative to the tiebreak-by-order standard and a fixed algorithm.
In some cases a client may wish to include additional codepoints, features, or axis ranges in a parameter set, e.g. if they may be needed in the near future and patching at that time could introduce noticeable latency. However, some combinations of parameters may be more expensive to load than others and the client may wish to assess those relative costs in deciding what to load. The patch map does not include the sizes of each patch, and even if it did there would only be entries for the current set of invalidating patches. Still, here are some general guidelines for judging relative costs:
When comparing among full or partial invalidation patches, the first question is whether one patch includes all desired codepoints or whether an additional patch will be needed, as the expense (in latency) of one or more round-trips will almost always outweigh the expense of additional bytes in a patch. There is also no way of determining how many further patches will need to be loaded. Therefore, if all of the competing options require loading additional patches, it is difficult to assess the cost beyond the fact that more codepoints will typically involve loading more data.
In cases where there are different invalidation patches that entirely satisfy respective different parameter sets, it can be assumed that patches occurring later in the map are larger than those earlier in the map. The best proxy for the relative sizes of the patches is probably the number of codepoints listed in the corresponding map entries, but this is at best a rough and unreliable heuristic and in cases where it conflicts with the map ordering the ordering should take precedence.
Although there are corner cases, glyph-keyed (and therefore non-invalidating) patches can be considered independent of one another for purposes of speculative subset parameters. Because they are loaded independently, an encoder may not have put much thought into their ordering in the map, so the codepoint count in the map entry is the best available proxy for a patch's relative size. Because a patch may be listed in the map more than once, a thorough client might want to seek out the map entry with the largest number of listed codepoints and use that as the proxy.
Although a client is free to use these heuristics or any others to arrive at a set of subset parameters to load, the patches it loads to satisfy that set, and the order it loads and applies the patches in, must be the same as what is specified by the extension algorithm for that set.
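As a sketch of the glyph-keyed guideline above (structures illustrative, as before): collapse duplicate map entries per patch and keep the largest codepoint count as the relative-size proxy.

```python
from collections import defaultdict

def glyph_keyed_size_proxies(entries):
    # A glyph-keyed patch may be listed in the map more than once; use the
    # largest codepoint count among its entries as the size proxy.
    proxy = defaultdict(int)
    for entry in entries:
        proxy[entry.uri] = max(proxy[entry.uri], len(entry.codepoints))
    return dict(proxy)  # patch identifier -> rough relative size
```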
I suspect that the lack of a specific criterion for choosing the next patch during extension will actually interfere with generality. The more information an encoder has about what the client will do, the more clever and "general" it can be about how it encodes. As I've indicated in #232, I think the considerations for glyph-keyed patches will be distinct from those for invalidating patches, so I'm only speculating about the latter here. I'm also going to skip the variable subspace problem because I'd need to think more about how to add that into this framework.
My thoughts about this have evolved primarily from thinking about "one-round-trip encodings" -- encodings such that each patch map has a patch that loads all remaining parameters, and then other patches that load various subsets of those parameters. How do you organize the overall system sensibly to allow for that?
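For example, the shape of such a map might look like the following (reusing the illustrative `PatchEntry` from the earlier sketch; ranges and names are made up):

```python
one_round_trip_map = [
    PatchEntry("basic_latin.patch", codepoints=set(range(0x0020, 0x0080))),
    PatchEntry("latin_ext.patch",   codepoints=set(range(0x0020, 0x0250))),
    PatchEntry("greek_cyr.patch",   codepoints=set(range(0x0370, 0x0530))),
    # Catch-all: loads everything remaining, so any parameter set can be
    # satisfied in one round trip; it is listed last since it has the most
    # TotalStuff.
    PatchEntry("remainder.patch",   codepoints=set(range(0x0020, 0x10000))),
]
```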
Now, what about #222? Well, my initial thought is that I can't think of a relevant instance where I would use that mechanism for an invalidating patch. Any given subset will either include the glyphs or not, and what an invalidating patch encodes is the difference between two subsets. So it shouldn't be a factor. And if it is a factor anyway, it's because of the generality approach of the extension algorithm, which is something that can either be revisited or finessed. (In terms of finessing, we could specify a BS StuffYouNeed calculation for such patch entries with a warning against creating a map that uses it relevantly in practice.)