adobe-type-tools / afdko

Adobe Font Development Kit for OpenType
https://adobe-type-tools.github.io/afdko/

[fea syntax] OpenType Variation support? #153

Open twardoch opened 8 years ago

twardoch commented 8 years ago

@readroberts @brawer etc.,

Do we have any plan in place to extend the FEA syntax to support OpenType Variation fonts?

This would be a major change, of course. Correct me if I'm wrong, but there are two major aspects to be revised:

FeatureVariations table

fontTools already supports FeatureVariations, so using .ttx syntax one could build an rvrn feature. But how would we express this in .fea syntax?
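
For reference, today's fontTools can already build such a setup from Python via fontTools.varLib.featureVars; a minimal sketch, with made-up glyph names, paths and normalized axis ranges:

from fontTools.ttLib import TTFont
from fontTools.varLib.featureVars import addFeatureVariations

font = TTFont("MyVariable.ttf")  # hypothetical variable font; must already have an fvar

# Each entry pairs a region (a list of boxes in normalized axis coordinates)
# with a substitution map; the result is written as GSUB FeatureVariations
# hooked up to the rvrn feature.
conditional_substitutions = [
    ([{"wght": (0.5, 1.0)}], {"dollar": "dollar.rvrn"}),
    ([{"wdth": (-1.0, -0.5)}], {"cent": "cent.rvrn"}),
]
addFeatureVariations(font, conditional_substitutions)
font.save("MyVariable-rvrn.ttf")

The open question is then only how to express the same regions and substitutions in .fea.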

ValueRecord variations.

Before 2001, AFDKO did support CFF-based Multiple Master OpenType fonts, so there must have been some syntax to support "per-master" data for metrics, kerning etc. Also, the FEA syntax has some provisions for device-specific adjustments. As we know, the GPOS/GDEF/JSTF data in OTVar is inspired by the device adjustments.

General approach

Currently, my understanding is that Google's fontmake uses a method where OT features in each master are compiled using feaLib or AFDKO, and then the fontTools/varLib code is called to add variation data to GPOS.
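
For concreteness, that merge step is roughly what fontTools.varLib does; a sketch, assuming the per-master binaries referenced by the designspace have already been built (the paths are made up):

from fontTools import varLib

# Merges the master fonts listed in the designspace document into the default
# master and adds the variation data (fvar, gvar, HVAR, GDEF/GPOS deltas, ...).
vf, model, master_fonts = varLib.build("MyFamily.designspace")
vf.save("MyFamily-VF.ttf")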

This generally opens up a legitimate question:

  1. Do we want to go the fontmake path, where a complete or partial FEA file would be provided per master, and then code would somehow "glue" it together? This would allow for re-using most or all of existing FEA code, and require smaller changes to the building libraries and FEA syntax. Of course this is always possible, i.e. this path can be used without changing the syntax at all.
  2. Do we want to express the ValueRecord variations in some list, akin to the current device metrics spec in the FEA syntax? This would get a bit ugly and require serious restructuring of the FEA syntax for existing projects.
  3. Do we want to go yet another path, which might be adding a set of keywords similar to languagesystem, script and language, but referring to OTVar?

I can think of something like:

In the prolog section of the FEA file, we could have new keywords that could be used to compile the "fvar" table (perhaps), and to "set up" the general rules for how variations work. This would be somewhat analogous to how languagesystem works:

varmaster wght 300.0 400.0 700.0;
varmaster wdth 75.0 100.0 120;
languagesystem latn dflt; 
languagesystem cyrl dflt; 

Then, inside any lookup, there could be some syntax that defines the values for the variation. Roughly, it could look like:

feature kern { 
  script latn;
  language dflt; 
  lookup kern1 { 
    # Variation data for the default master
    pos A A -50;
    # Variation data for the wght -1 wdth -1 master
    # Note: this data will need to be calculated against the default data to obtain the deltas
    # If the default data does not contain an entry, then the value 0 is assumed in the default data.
    var wght -1 wdth -1; 
    pos A V -300;
    pos V A -250;
    # Variation data for the wght 1 wdth 1 master
    var wght 1 wdth 1;
    pos A V -20;
    pos V A -15; 
  } kern1; 
  script cyrl;
  language dflt; 
  lookup kern1;
} kern;

With an approach like this, users could be flexible in using the include() statements for various masters, i.e. the whole thing could be expressed as:

feature kern { 
  script latn;
  language dflt; 
  lookup kern1 { 
    var wght -1 wdth -1; 
    include(kern_ThinCondensed);
    var wght 1 wdth 1;
    include(kern_BlackExtended);
  } kern1; 
  script cyrl;
  language dflt; 
  lookup kern1;
} kern;

This way, "old-style" FEA files, such as the Source Sans Pro project's ExtraLight kern.fea and ExtraBold kern.fea, could be used.

Note: The syntax I'm proposing is just an "idea".

readroberts commented 8 years ago

I've been thinking about this, and it is not yet obvious to me which way to go. At the moment, I lean towards the approach used by fontmake: use current tools to build a full OpenType font, then run a script to merge the master designs into a variable font. This appeals to me because a high-level design idea behind the feature file syntax is to not have to specify data which can be derived from the sources, and much of the new variable font structure can be derived from the master designs. Also, the way a variable font is expressed lends itself to this approach: basically, a single normal OpenType font with extra tables containing the differences between it and the other master designs (well, not exactly, but close enough). With current workflows, I don't think we need an extension of the feature format for most of the variable font data.

This is actually separate from the variable font issue, but I have noticed that all the major font development tools now have readable source data formats, which would allow fontmake to generate the kern data and design space data directly from the sources. The AFDKO doesn't do this because, when it was developed, source font data was proprietary, and the only place human-readable and editable kern data was stored was the feature files. This is no longer the case.

I've discussed this with Miguel Sousa and Frank Grießhammer, and to us it looks like a good workflow would be to require a fully produced OpenType font for the default master, with all the GSUB and non-kern GPOS features, and then draw on the master design sources to extract the kern data and all the other delta data. Another advantage of this approach is that it avoids having to rework the feature files whenever you make blend design space changes. Just in working for the last few weeks with some CFF2 test fonts, I have found myself often changing which master is the default, and playing with the design space positions of the masters. If any of the metrics had been encoded in the feature files, this would have been a lot of work.

Feature file extensions are then needed only for data which is not in the masters. The STAT table will need to be supported, as it currently does not have any external expression.

All that said, my experience is that we will still need to come up with feature file syntax for almost everything in order to allow overriding source-derived values.

For kern and other metric data, I still like the old MM syntax, where individual values were simply replaced by the list of equivalent values from each master design font, in angle brackets:

feature kern {
  pos A V <-25  -22 -27>;
} kern;

However, I think I like Adam's suggestion even more, as it not only lends itself to easy adaptation of the current workflow, but also avoids the problem of having to regenerate all the pos statements whenever the designer makes changes to the master positions in design space; only the master references need to be edited. Let's have a lot more suggestions about alternatives!

twardoch commented 8 years ago

Read,

I actually do like the fontmake approach, and I think having "per-master" positioning syntax is sensible. However, we still have the problem of FeatureVariations (the rvrn feature, and any other feature which changes its implementation in some region of the variation space): https://www.microsoft.com/typography/otspec/chapter2.htm#featvartable

I think we'll need some way to specify these conditions. The "script" and "language" keywords have been a kind of condition in both FEA and binary OTL forever. With FeatureVariations, we essentially get a new kind of condition. A Condition Table Format 1 needs a variation axis index and two values: min and max. Multiple such conditions are expressed collectively in a ConditionSet (which allows defining conditions that kick in only for certain combinations of axis segments).
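
For reference, this is roughly the shape of those structures as fontTools represents them (attribute names follow fontTools' otTables; the axis index and normalized range below are made up):

from fontTools.ttLib.tables import otTables

# One Format 1 condition: an axis index into fvar plus a normalized min/max range.
cond = otTables.ConditionTable()
cond.Format = 1
cond.AxisIndex = 0              # e.g. wght, referenced by its index in fvar
cond.FilterRangeMinValue = 0.5
cond.FilterRangeMaxValue = 1.0

# A ConditionSet ANDs its conditions together; each FeatureVariationRecord
# pairs one ConditionSet with a FeatureTableSubstitution.
cond_set = otTables.ConditionSet()
cond_set.ConditionTable = [cond]
cond_set.ConditionCount = len(cond_set.ConditionTable)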

So at least for this, we'd need some way to specify these conditions.

I think in FEA, the axis should be expressed as a tag rather than index.

So we need an idea of how to express the condition. For example, if my general variation source is 2D and has axes "wdth" and "wght" going from -1 to 1, I might want to apply a different feature set if wdth >= 0.5 and wght >= 0.5.

Even though the implementation talks about exchanging the whole FeatureSet, in practice it'll be more like feature sets for the languagesystems "latn dflt", "cyrl dflt" and "cyrl BGR", which would normally have the same dozen features, but "cyrl BGR" would also have "locl".

FEA has a decent mechanism to control the languagesystem conditions. You set up the general rule using "languagesystem" and then, within feature definitions, you can fine-tune it via "script" and "language".

I think something like:

varcondsys dflt; 
varcondsys wght 0.5 1 wdth 0.5 1; 

feature liga {
  varcond dflt;
  lookup liga1 {
    sub f i by fi;
  } liga1;
  lookup liga2 {
    sub f l by fl;
  } liga2;
  varcond wght 0.5 1 wdth 0.5 1;
  lookup liga2;
} liga;

The example above would define at the beginning the "varcondsys" entries -- a list of all possible ConditionSets. Any feature definition that does not explicitly use the "varcond" keyword within it would get registered in the FeatureSets for all the ConditionSets.

But in the case of liga, we would have two behaviors: "varcond dflt" would define the implementation for the default conditions (two lookups are applied), while "varcond wght 0.5 1 wdth 0.5 1;" would define the implementation if wght >= 0.5 and wdth >= 0.5.

Ideally, the coordinates in the conditions should be expressed in user coordinate space, not normalized coordinate space, because "regular" font developers would not know much about normalized coordinates. But that would require the FEA compiler to know much more (it would need to process "avar" etc.), so I'd settle for normalized space.
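
For what it's worth, the user-space-to-normalized mapping (including avar) is already available as helpers in fontTools, so a compiler could lean on them; a sketch with made-up axis extents and an invented avar mapping:

from fontTools.varLib.models import normalizeValue, piecewiseLinearMap

# User-space wght 550 on an axis with (min, default, max) = (300, 400, 700):
norm = normalizeValue(550, (300, 400, 700))    # 0.5

# If the font has an avar segment map for this axis, it is applied on top,
# expressed in normalized coordinates (this mapping is made up):
avar_wght = {-1.0: -1.0, 0.0: 0.0, 0.5: 0.3, 1.0: 1.0}
norm = piecewiseLinearMap(norm, avar_wght)     # 0.3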

Overall, the above mechanism is rather similar to how LangSys definitions work in FEA, so it should be possible to reuse some logic.

twardoch commented 8 years ago

Given that it is more common than not these days to separate GSUB and GPOS, perhaps it might be sensible to extend FEA with these keywords:

table GSUB [variationspace] { 
} GSUB;

table GPOS [variationspace] { 
} GPOS;

This, again, would simplify working with includes, especially if the optional conditions section could allow variation master definitions, e.g.

table GSUB { 
} GSUB;

table GPOS varmaster wdth -1 wght -1 { 
} GPOS;

table GPOS varmaster wdth 1 wght -1 { 
} GPOS;

Or perhaps

table GSUB { 
} GSUB;

table GPOS <-1 -1> { 
} GPOS;

table GPOS <1 -1> { 
} GPOS;

Inside the new "table GSUB" and "table GPOS" blocks would be the traditional FEA contents.

twardoch commented 8 years ago

With the "table GSUB" and "table GPOS" proposal comes a new idea, for makeotf specifically: a "keep layout tables" option. If my font already has a GSUB table compiled and I only want to compile a GPOS and my FEA only has GPOS stuff, I should be able to do it without destroying the existing GSUB. In other words, I'd love to be able to run makeotf in multiple passes.

miguelsousa commented 8 years ago

@twardoch the conditions should be defined in the designspace file, no? I think Erik is on it: https://github.com/LettError/MutatorMath/issues/55

miguelsousa commented 8 years ago

If my font already has a GSUB table compiled and I only want to compile a GPOS and my FEA only has GPOS stuff, I should be able to do it without destroying the existing GSUB. In other words, I'd love to be able to run makeotf in multiple passes.

Sounds more like a job for fontTools and its feaLib.

twardoch commented 8 years ago

@miguelsousa AFAIK, the Superpolator rules are very simplistic; they only allow simple substitutions. FeatureVariations can define conditional ligatures, contextual substitutions, positioning etc.

Of course it would make sense for a tool to convert from .designspace rules to some FEA code that expresses FeatureVariations.

Also: I'm talking here about the FEA syntax. Many workflows use the FEA syntax but don't rely on .designspace. I think building the OTL tables should be possible as a separate step from building the actual font file.

The FEA syntax has been implemented by MakeOTF, feaLib and FontForge. I agree the syntax discussion is a bit separate from the tool, but only a bit.

In the past, FEA and MakeOTF could be used to completely define sources for GSUB, GPOS, GDEF and BASE. There were higher-level expressions for some of the data, such as group kerning within UFO, but overall, FEA has proven itself to be very popular.

twardoch commented 8 years ago

I've mentioned several ideas in this thread, and I think all those that do not affect the logic of the OTL features for a given instance can be offloaded to a different process, e.g. the creation of the positioning deltas across variation masters. This can indeed be done by providing several separate FEA files which a tool "blends", like Google's fontmake does. We don't have to change the FEA syntax for that.

But to get reasonable support for FeatureVariations, I don't see how this can be done easily without extending FEA.

Imagine an Arabic font which on the "wdth" axis only adjusts the width of the kashida stretching glyphs and of stretchable Arabic letters that have horizontal descenders. In addition to that, the designer might want to gradually switch on stacking ligatures, so glyph sequences that join horizontally when there is ample horizontal space would start joining vertically when there is less space. Such a strategy could be used for other cursive scripts, and even Latin fonts.

Similarly, we might want to enable other contextual substitutions when a certain width or weight threshold is exceeded.

For that, proper support for the FeatureVariations conditions would be necessary, especially in GSUB.

Implementing conditional switching of GPOS adjustments in kern/mark and combining it with the blending of GPOS may indeed be tricky and perhaps out of scope, though the syntax I proposed would still be compatible and allow it in theory.

Whether it'll be MakeOTF or feaLib or FontForge that implements the syntax changes first, or ever, is another story.

twardoch commented 8 years ago

FeatureVariations cont'd

An alternative approach to the FeatureVariations conditions syntax, inspired a bit by how named lookups and featureNames work:

feature rvrn {
  sub O by O.alt;
  featvar varEuro {  
    ifvar wght 0.7 1;
    sub Euro by Euro.bold;
  } varEuro; 
  featvar varOslash {  
    ifvar wght 0.5 1;
    ifvar wdth 0.5 1; 
    sub Oslash by Oslash.nocounter;
  } varOslash; 
} rvrn;

In this syntax, we would assume that if a feature does not have a featvar block, it gets included in all FeatureVariations FeatureSets (collected from all definitions in the FEA file), but if a feature does have a featvar block, then that feature gets split. Whatever is outside the featvar block is included as the default (which may be empty). Then we have the featvar blocks, which get names (each named featvar block corresponds to a ConditionSet). Inside a block, each ifvar keyword uses the syntax

  ifvar <axisTag> <minValue> <maxValue>;

Of course, all ifvar conditions within one featvar spec are combined with a logical AND, as in the OT spec. After the ifvar conditions are specified, the alternative implementation of the given feature follows, which may include lookups etc.
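
For comparison, the two featvar blocks above map directly onto what fontTools.varLib.featureVars accepts today as conditional substitutions (axis ranges given in normalized coordinates, path made up):

from fontTools.ttLib import TTFont
from fontTools.varLib.featureVars import addFeatureVariations

font = TTFont("MyVariable.ttf")

conditional_substitutions = [
    # varEuro: ifvar wght 0.7 1
    ([{"wght": (0.7, 1.0)}], {"Euro": "Euro.bold"}),
    # varOslash: ifvar wght 0.5 1 AND ifvar wdth 0.5 1 -> one box covering both axes
    ([{"wght": (0.5, 1.0), "wdth": (0.5, 1.0)}], {"Oslash": "Oslash.nocounter"}),
]
addFeatureVariations(font, conditional_substitutions)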

As with named lookups, I should be able to define featvar blocks outside of feature blocks, and then reference them from inside the feature blocks, even across different features. Example:

featvar varEuro {  
    ifvar wght 0.7 1;
    sub Euro by Euro.bold;
} varEuro; 
featvar varOslash {  
    ifvar wght 0.5 1;
    ifvar wdth 0.5 1; 
    sub Oslash by Oslash.nocounter;
} varOslash; 

feature rvrn {
  featvar varEuro; 
  featvar varOslash;
} rvrn;

feature ss04 { 
  sub O by O.alt;
  featvar varEuro; 
  featvar varOslash;
} ss04; 

Of course instead of featvar, we could use featureVar or featureVariation, and instead of ifvar we could use ifVar or ifVariation.

LettError commented 8 years ago

The conditions need to, somehow, combine information from two different types of data:

It would be useful to be able to change weight > 0.5 to weight > 0.4 without having to edit the whole feature file. Maybe not all weight > 0.5 statements are the same. The decision to add a specific conditional switch somewhere would not need to rely on knowing the exact axis values.

Maybe there is a way to separate the definition of the condition (with its geometric references) and its application within the feature blocks. A set of named rules, each with any number of conditions, to be defined somewhere in the beginning of the feature text. Then whenever it is needed in the feature text, a rule can be invoked by name. The rule remains active until the end of the block, or until a new rule is called.

In fantasy feature text:

# before other code, define a rule with a set of conditions:
rule oslashcounter {
     wght 0.7 1;
     wdth 0.5 1;
} oslashcounter;

# then inside a feature block, call the name of the rule to have it evaluated
feature aaaa{
     sub @something by @somethingelse;
     rule oslashcounter;
          sub Oslash by Oslash.nocounter;
          sub oslash by oslash.nocounter;
}aaaa;

LettError commented 8 years ago

Perhaps even allow the rules to be nested:

rule heavy {
     wght 0.8 1;
} heavy;

rule narrow {
     wdth 0 0.3;
} narrow;

rule dollarbar {
     rule heavy;
     rule narrow;
} dollarbar;

feature aaaa{
     rule heavy;
        sub w by w.compact;
     rule dollarbar;
        sub dollar by dollar.nostroke;
}aaaa;

readroberts commented 8 years ago

This appeals to me. This format also makes it easy to define the default lookups as well as several different rule-based lookups:

feature rvrn {
    sub w.compact by w;
    sub dollar.nostroke by dollar;

    rule heavy;
        sub w by w.compact;

    rule dollarbar;
        sub dollar by dollar.nostroke;
} rvrn;

When no rules apply, you get the default substitutions. When any rule applies, you do not get the default substitutions, and you do get the substitutions whose rules are satisfied. In the font data, the default lookups are written in a regular GSUB feature, and the rule-based substitutions are defined in ConditionSets in the GSUB FeatureVariations table. If there are no default substitutions, the feature will be written as a regular GSUB feature without any lookups.

twardoch commented 8 years ago

@LettError I like your proposal a lot!

petrvanblokland commented 8 years ago

Additional to this, note that rules can become more complex than a simple boolean equation. If the $-bar strike-through switch happens at a different weight value for condensed width than for extended width, the connecting line is not vertical or horizontal. It may not even be a straight line in the 2-dimensional design space. In that case the rules must define a "staircase" approximation of the "watershed" line, by combining a number of weight & width rules. That complexity favours Erik's nesting proposal. And it may even need to be extended with more boolean and/or/bracket operators to define the complete rules. Just summing up "ifvar + condition" is not enough. How do we combine a set of "and" rules with a set of "or" rules? With 3 axes this "watershed" between glyph shapes can become a Minecraft wall of cubes, approximating a doubly curved surface. That is hard to imagine or visualize, especially if there are multiple breaks for alternative glyph shapes in the same design space. But at least a readable rule syntax helps there. As these rules need to be applied "per glyph", the re-use of rules and conditions for multiple glyphs is also important.
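
A hedged sketch of that "staircase" idea: approximate a sloping watershed with a union of axis-aligned boxes, each of which would become one ConditionSet (the boxes are then OR-ed together). The boundary function, step count and the 0..1 normalized wdth slice are all made up:

def staircase_regions(boundary, steps=4):
    """Approximate the region where wght >= boundary(wdth) with one
    axis-aligned box per wdth slice, in normalized coordinates."""
    regions = []
    for i in range(steps):
        wdth_min = i / steps
        wdth_max = (i + 1) / steps
        # Use the higher boundary value on this slice so no box dips
        # below the watershed.
        wght_min = max(boundary(wdth_min), boundary(wdth_max))
        regions.append({"wdth": (wdth_min, wdth_max), "wght": (wght_min, 1.0)})
    return regions

# e.g. the $-bar switch happens at wght 0.8 when wdth is 0 and at wght 0.5 when wdth is 1
boxes = staircase_regions(lambda wdth: 0.8 - 0.3 * wdth)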

readroberts commented 8 years ago

Extending the syntax to include 'and' and 'or' in the if clause would make it possible to define a set of regions in the blend design space to approximate a watershed line that is not orthogonal to all axes. Because of the underlying data structures, I don't think that full boolean logic can be supported: the FeatureVariations table supports only 'or's between sets of 'ands'. For the implementation, each 'and'ed rule would be added to the current ConditionSet, and each 'or' would trigger the start of a new ConditionSet. I don't see a way to support an 'and' between two parenthetical clauses that contain an 'or'. However, the current support is enough to do what Petr is describing, which certainly is useful.

anthrotype commented 6 years ago

Any news on this? How can we move this forward?

readroberts commented 6 years ago

I still like the proposal. Let's do it.

twardoch commented 6 years ago

+1 :)

twardoch commented 4 years ago

I just realized — this syntax could actually be extended even more, to support variable positioning:

  1. The rule keyword basically defines regions in the variation space.
  2. Variable fonts use regions to define master positions (which is not really supported in designSpace, which is another cause for concern)
  3. Either the rule conditions could be extended to allow 1, 2 or 3 values, or another keyword (like master) should be created to allow conditions with 1 value (simple master location) or 3 values (min, peak and max)
  4. The extended rule syntax or the new rule plus master syntax would be used to create named locations and regions.
  5. Then, they (in either form) could be used to define interpolable positioning statements. It’d be possible to do something like this:
# Silently, positioning statements which are not placed in a master (or extended rule) are placed in the neutral position so all axes are 0

master Regular {
     wght 0;
     wdth 0;
} Regular; 

# If a master does not define a location for an axis, the neutral (0) is assumed, here wdth 0;

master Bold { 
    wght 1; 
} Bold;

# A region or set of regions could also be defined

master Semibold {
    wght 0.3 0.5 0.7;
} Semibold;

# Then we could write GPOS

feature kern {
master Regular;
    pos A V -10;
master Semibold;
    pos A V -20;
} kern;

moyogo commented 3 years ago

Should we make a PR? What was the consensus?

simoncozens commented 3 years ago

I’ve been thinking about this this morning. I’m now convinced of the idea of having one feature file which contains the rules for the whole variable font, rather than trying to interpolate between different masters’ files. (Too easy for them to go out of sync.) Adam’s latest idea looks good but I wonder if there is a way to better scope variation information to a rule. The danger of

feature kern {
master Regular;
    pos A V -10;
master Semibold;
    pos A V -20;
} kern;

is - what does this do?

feature kern {
master Regular;
    pos A V -10;
    pos A Y -10;
master Semibold;
    pos A V -20;
} kern;

Throw an error, I guess, but it's annoying for tooling to have to match up the rules.

Glyphs3's tokens approach looks quite interesting. You define a set of numbers for each master: padding might be 10 in regular and 20 in bold, then say pos W W $padding;. Obviously that wouldn't be pleasant for an entire kern table, but it's a useful approach for other kinds of positioning rule.

Another downside of Adam's approach is that it incorporates axis definition information which really should be part of designspace or similar. Why not have the tools read a designspace file and pick up the master definitions from there?

simoncozens commented 3 years ago

Another thought to go alongside my last point: how about seeing it the other way around? With a sufficiently good master statement, you don't need a designspace file, but can generate fvar and avar tables from the feature file. This is consistent with the idea that we're using feature files to generate other OT tables beyond GSUB and GPOS.

In fact, you could even define your source files inside the master statement too. Hmmm.

madig commented 3 years ago

But how do you then generate static instances? The main problem with feature files is that they're blobs you have to parse into structure. DS files are structure right there.

simoncozens commented 3 years ago

I'm not sure the parsing issue is relevant. If you have an XML parser you can parse XML and if you have a FEA parser you can parse FEA.

madig commented 3 years ago

Yes, but the XML is data, FEA is code.

simoncozens commented 3 years ago

I still don't see why that matters. (And I don't agree - everything in section 9 is definitely using FEA as a data format, not a programming language.) You have to parse it into structure? Thankfully we have computers which can do this for us. It's not a big deal.

simoncozens commented 3 years ago

The more I think about this, the more I think Adam's proposed syntax isn't sufficient, and I worry that we're doing the standard OpenType thing of thinking of a solution that works reasonably well in simple cases and then hacking the hard cases into it later, instead of thinking of the hardest and weirdest (plausible) needs first and working out a syntax that works for them - at which point the simple cases will be obvious.

I had a good conversation with @tamirhassan yesterday and he said "Why are you talking about varying rules per master? Surely you want to vary them based on areas of the design space?", and I realised that he's dead right.

Here's my hard but plausible case: I have an Arabic font with a weight axis and two masters, wght=400 and wght=1000. I want to make a contextual vertical kern for the glyph sequence "[lam-ar.init lam-ar.medi] beh-ar.medi [twodotshorizontalabove-ar threedots-horizontalabove-ar]' [alef-ar.fina lam-ar.fina lam-ar.medi ...]" (e.g. لتا) so that once the weight goes over 800 and the dots get too fat to fit inside the two vertical strokes, they are raised up by an additional 350 units. How should I express this? I don't have a good answer, but here's a proposal to get things started.

pos @tallglyph beh-ar.medi @widedots' <0 350 0 0 (wght>800)> @tallglyph;

(I also realised we need to be able to specify both rules that interpolate and rules that operate "at a point" - I don't want this kern to interpolate from 0 to 350 as we go up the weight axis.)

This syntax so far is cumbersome but I want us to focus our attention on the fact that it's the value record which varies, not the rule.

simoncozens commented 3 years ago

Here's what my case looks like: https://twitter.com/simoncozens/status/1366784085780217860

I had to hand-hack the TTX file to get it working...

punchcutter commented 3 years ago

I'm basically doing the same thing right now and relying on hand-edited TTX, but one big difference is that I am applying it to entire features and not just one pos rule. One thing I ran into is that with mark positioning it works to actually swap out a lookup in the FeatureTableSubstitution, and the position changes to what's in the new lookup, but with features that add or subtract advance width the effect is cumulative. For example, at wght=700 I want to switch from

position \1401 <0 26 0 -87>; to position \1401 <0 36 0 -120>;

but that will add the values instead of just swapping the lookup (at least in the places I could test). So the second lookup needs to be rewritten with only the deltas: position \1401 <0 10 0 -33>; In this case I'm looking at the entire feature, so it would be nice to say

feature vpal {
    variation wght<=699 lookup 1;
    variation wght>=700 lookup 2;
} vpal;

Like @simoncozens says we need to allow both "at a point" and "between points", but also maybe we want to interpolate from 100-500 and then jump at a point to a new value and continue interpolating there from 500-900. That would mean we'd want something like 4 values to allow interpolation from 1-2, jump to 3, interpolate from 3 to 4.
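
As an aside on the delta rewrite above, the "deltas only" lookup values can be derived mechanically from the two value records; a tiny sketch, with the four numbers taken in the FEA value record order (xPlacement, yPlacement, xAdvance, yAdvance):

# Values at the default master and the desired values at wght=700.
default = (0, 26, 0, -87)
target = (0, 36, 0, -120)

# Because the substituted lookup's effect was observed to be cumulative,
# it should carry only the differences.
deltas = tuple(t - d for t, d in zip(target, default))
print(deltas)  # (0, 10, 0, -33)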

simoncozens commented 3 years ago

Yes, I meant to mention that there are two ways we need to vary layout. (And actually what we first need to do is sit down and read through the OT spec for all the ways things can vary, and then make sure we have syntax which covers all those potential cases):

simoncozens commented 3 years ago

I just got this working in my FEE language. The syntax isn't final, but it's interesting.

DefineClass @tall = [lam-ar.init alef-ar.fina];
DefineClass @beh = /beh/;
Routine popdots {
    Position twodotshorizontalabove-ar
        <yPlacement=[
                0@wght=200
                0@wght=599
                200@wght=600
                200@wght=1000
        ]>;
};
Feature kern {
    Chain @tall @beh twodotshorizontalabove-ar (popdots) @tall;
};

bobh0303 commented 3 years ago

I was just pointed to this thread and admit I haven't kept up with issues related to variable fonts and FEA. But this question occurs to me: how do you extend your example to handle combining marks above either/both of the @tall glyphs? How do you also move any combining mark above the beh? Duplicating the above for every possible combining mark combination is gonna get messy really fast -- this is a problem that hasn't, afaik, been solved well for static fonts, much less variable ones (except by non-OT technologies).

simoncozens commented 3 years ago

I might be missing something in your question, but it’s easy. Use a mark filtering set to ensure you only care about bases and nukta, and any combining marks on the nukta will be repositioned by mkmk. (Assuming that you do any contextual nukta magic before mkmk.)

cjchapman commented 3 years ago

Just in case anyone following this issue hasn't seen it, Simon's added a link to a Doodle poll for a meeting to discuss feature file syntax next week here: https://github.com/adobe-type-tools/afdko/issues/1202#issuecomment-792087597

bobh0303 commented 3 years ago

I might be missing something in your question, but it’s easy. Use a mark filtering set to ensure you only care about bases and nukta,

Marks above one/both of the @tall glyphs are likely to collide with the newly raised nukta. At first I was thinking you'd need to raise the nukta further, but more likely you'd want to push the marks up from their default position. So you'll need other contextual rules that recognize such patterns and reposition the marks after mkmk. And the amount of change in mark position depends on exactly what the nukta was (2 dots horizontal? 4 dots? etc) and possibly what the combining marks are.

simoncozens commented 3 years ago

Ah, OK, collision detection. I have a solution for that (but not for variable fonts). This is getting off topic so let’s talk about it at the Feature File Format Chat. The quick answer is that I have a shaping engine I can call within my compiler to position arbitrary glyph sequences using the current set of rules, and then check for collisions.

Lorp commented 3 years ago

Regarding the variations part of @simoncozens’ example:

<yPlacement=[
      0@wght=200
      0@wght=599
      200@wght=600
      200@wght=1000
]>

… we need to think about how the syntax will compile under various circumstances.

Default, Max and Min

If the default is at 400 we probably want the value to be 0. And if the default is at 700, we’d probably want the value to be 200, in other words interpolating between the explicit values. We need to know what to do if this is different from the default value already in the font.

We also need to specify what happens if (in the above example) 200 and 1000 are not the designspace min and max. Since we’re not specifying deltas and tuples directly, we might allow those values to extend (or reduce) to the real min and max. If so, then we’d only need to state 599 and 600 directly, and have the min and max take the values 0 and 200 respectively.

Multi-axis fonts

If we add a single new entry for a Width axis such as:

      250@wdth=200

Now, because we are no longer using deltas, we also need to specify what happens at the “corner” of max weight and max width. Otherwise it is unclear whether the corner should take a value of 200 (max wght) or 250 (max wdth). We must “complete the orthogonal grid” in many situations like this, including all intermediates in multi-axis fonts. Otherwise we have to guess which axis takes precedence, work out an arbitrary interpolated or minmaxed value, or apply some other ugly heuristic. Thus:

      270@wght=1000,wdth=200

Abrupt changes in variation stores

A problem in the example is that the actual behaviour between 599 and 600 is not as intended: a wght of 599.25 will result in an interpolation between 0 and 200, thus 50.

Axis locations are encoded as Fixed 16.16 (32 bits), thus the resolution is 1/65536. Instead of 599 we should have (600 - 1/65536). A syntax for “x+1/65536” and “x-1/65536” is therefore desirable for this common use case of non-interpolating value changes using variation stores. Some suggestions:

wght=600-
wght=600+

wght=<600
wght=>600

wght=600<
wght=600>

wght=600-dx
wght=600+dx

In fact, the compiler needs to be aware that these values become normalized F2Dot14 (2.14 fixed-point) values in variation stores, so these incrementally greater and lesser values have to take that into account and be compiled to normalized values that are +/- 1/16384 away.
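
A small sketch of those two granularities using fontTools' fixed-point helpers (the values are illustrative):

from fontTools.misc.fixedTools import fixedToFloat, floatToFixed

# User-space axis values are Fixed 16.16, so the step just below 600 is 600 - 1/65536:
just_below_600 = fixedToFloat(floatToFixed(600, 16) - 1, 16)   # ~599.99998

# Region coordinates in a variation store are normalized F2Dot14,
# so the step just below a normalized 0.5 is 0.5 - 1/16384:
just_below_half = fixedToFloat(floatToFixed(0.5, 14) - 1, 14)  # ~0.49994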

simoncozens commented 3 years ago

Re explicit regions versus locations, see this fontTools discussion: https://github.com/fonttools/fonttools/issues/2207

(However note that Just starts talking about a separate issue, so there are really two very distinct conversations in that issue.)

simoncozens commented 3 years ago

Thank you all for the discussions the other day. I think I would like to try to drive things forward in terms of agreeing on a syntax. Just to state my assumptions here:

Here's my proposal for varying scalars, then. I don't really care very much about feature replacement - I just don't have a use case for it personally - so I'm happy for someone else to interpret what a feature replacement syntax would look like based on this.

A variable scalar replaces a static number value, and is introduced with parentheses. (Parens chosen because they currently have no other semantic value in AFDKO.) Within the parens are any number of masters (using the term in the fontTools.varLib sense of any scalar specified at a given location), separated by whitespace. A master is defined as a list of axis=value locations joined by commas, followed by a colon and the value of the scalar at that location. (Note that this is the opposite order to my previous proposal, but I think it reads better.)

pos A B' <0 (wght=200:-100 wght=900:-150 wght=900,wdth=150:-120) 0 0> C' D;
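
As a sanity check (not part of the proposal), the masters inside such a scalar can be resolved at arbitrary designspace locations with fontTools.varLib.models; the axis (min, default, max) values below are assumed:

from fontTools.varLib.models import VariationModel, normalizeLocation

axes = {"wght": (200, 200, 900), "wdth": (100, 100, 150)}   # assumed axis extents

master_locations = [{"wght": 200}, {"wght": 900}, {"wght": 900, "wdth": 150}]
master_values = [-100, -150, -120]

model = VariationModel([normalizeLocation(loc, axes) for loc in master_locations])

# Value of the scalar halfway up the weight axis:
loc = normalizeLocation({"wght": 550, "wdth": 100}, axes)
print(model.interpolateFromMasters(loc, master_values))     # -125.0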

josh-hadley commented 3 years ago

@simoncozens thanks so much for organizing the meeting and for keeping this discussion moving.

I agree that the next big thing to do is to get agreement on the syntax. So I will start a PR that updates the OpenType Feature File Specification document and incorporates your latest suggestion for varying scalars, as well as @punchcutter's proposal for feature replacement, using the language and formatting of the spec. Folks who are interested in this issue can then review and make suggestions there, which I think is probably a more efficient mechanism than trying to hash it out in issue comments.

simoncozens commented 3 years ago

An implementation of the proposed variable scalar syntax can be found in the fonttools pull request above.

Lorp commented 3 years ago

Particularly for those of us unfamiliar with fontTools.varLib, could you please elaborate on how multi-axis interaction is specified? For example, if I set a wght max and a wdth max, what does the syntax imply about when wght and wdth are both at max? Or, as in your example, there's apparently a wght/wdth max specified but no wdth max, which is unusual in design terms (unless the syntax implies something about wdth max).

Also, is it required to specify default behaviour explicitly, or does the syntax allow for the default to be interpolated between min and max? So, in your example, could the default be at 400, and thus its value interpolated as -100 + ((400-200)/(900-200)) * (-150 - (-100)) ≈ -114.29?

simoncozens commented 3 years ago

Maybe my example was unusual in terms of design, but the interpolation mechanism is not varLib specific; it's precisely the same as what is used to get points in glyph outlines from masters and vary them in the designspace. (Literally the same mechanism, in fact...)

So: assuming a weight axis min=200,default=200,max=1000 and a width axis min=50,default=100,max=150

If an axis is not specified in a location (e.g. wght=200 has no width axis) then the location is assumed to be the default value on that axis - so it is equivalent to wght=200,wdth=100. As a shortcut we could allow a bare value to represent the default position on all axes.

Currently the implementation requires you to specify the default situation explicitly, but I suppose in theory there's nothing in the syntax stopping you from creating min and max cases and having the default interpolated - again, just like designing font masters.

Lorp commented 3 years ago

pos A B' <0 (wght=200:-100 wght=900:-150 wght=900,wdth=150:-120) 0 0> C' D;

Thanks for this. It is the fact that default axis locations are specified elsewhere that made your example a bit confusing. (In fact it does make sense if default wght is 900, but probably not if it’s anywhere else.) I wonder if some friendly warnings would be handy if the orthogonal grid of tuples is sparse, though missing corners can be ok.

Also note that, unless all axis locations are fully specified for each value, choosing a different master to be default means all variable positioning rules must be recalculated.

If that’s a step too far, I’d like to see the default value explicit even if 0. I would recommend that the “bare value to represent the default position” is required immediately after the initial parenthesis, thus (for the example above):

(0 wght=200:-100 wght=900:-150 wght=900,wdth=150:-120)

simoncozens commented 3 years ago

So I will start a PR that updates the Open Type Feature File Specification document and incorporate your latest suggestion for varying scalars as well as @punchcutter's proposal for feature replacement using the language and formatting of the spec

Hi Josh - any news on this? Would it be helpful if I drafted a PR myself?

josh-hadley commented 3 years ago

@simoncozens we're still working out some details on @punchcutter's feature replacement idea that I want to make sure are incorporated along with your scalar variation proposal. I hope to get that sorted soon but it could be a while still.

josh-hadley commented 3 years ago

@simoncozens and other interested folks, please see #1350