Open tkacvins opened 2 years ago
These are the input file and output file from makeotf -f cmb10.pfa -o cmb10.otf -S
I think this conversion only really works by coincidence. The fact that A is at position 65 (which translates to U+0041) will produce a range of usable characters, but many other code points will be off (for example, everything before the exclamation mark).
Please consider using a GlyphOrderAndAliasDB file to map glyph names to code points. Also, release mode (-r) is recommended: it applies said GlyphOrderAndAliasDB file and switches on subroutinization.
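Put together, the suggested invocation might look like this (a sketch, not a verified command line for your setup: -gf names the GlyphOrderAndAliasDB file, and since -r already turns on subroutinization, -S becomes redundant; file paths are placeholders):

```sh
# Convert the Type 1 font, applying the GOADB, in release mode.
makeotf -f cmb10.pfa -gf GlyphOrderAndAliasDB -r -o cmb10.otf
```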
Hmmm, I wonder if it would be better if AFDKO handled custom encodings in a smarter fashion. Don't get me wrong, I like the product, but it is in the Type 1 and CFF specs to have custom encodings, so I am left wondering why AFDKO doesn't handle what is in the specifications in a better fashion. Besides, the fonts I am working with all have custom encodings, each different, some of which contain math glyphs (added to Unicode many years ago), etc. so it would be painful to make a GOADB for each font.
Okay, to clear up any possible misconceptions:
Unlike T1 fonts (PFA), OTFs don’t rely on a particular order of glyphs. Apart from the .notdef at position 0, glyphs can be in whatever position. Also, the order of characters prescribed in Unicode has no bearing on OTF fonts.
Consequently, it is essential that your project have a GlyphOrderAndAliasDB. I do not really follow why creating a GlyphOrderAndAliasDB is “painful” (not much more painful than making fonts to begin with ;-) )
All that said, you might be in luck, because the glyph names mostly seem to be contained in the aglfn: these are glyph names carried over from Type 1 fonts, which have an inherent character/code point prescribed, for example A or bracketleft.
In the early days of OTF and Unicode, the aglfn was extended with new names, but it is an approach that simply doesn’t scale. Hence, the list has been kept as-is (without expansion). Here it is:
https://github.com/adobe-type-tools/agl-aglfn/blob/master/aglfn.txt
Consequently, your GlyphOrderAndAliasDB could look something like this (a tab-separated two column list):
.notdef .notdef
Gamma Gamma
Delta Delta
Theta Theta
Lambda Lambda
Xi Xi
Pi Pi
Sigma Sigma
Upsilon Upsilon
Phi Phi
Psi Psi
Omega Omega
...
See a modern example for a GlyphOrderAndAliasDB here: https://github.com/adobe-fonts/source-serif/blob/main/Roman/GlyphOrderAndAliasDB
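Note that a GlyphOrderAndAliasDB entry may also carry an optional third column of the form uniXXXX (or uXXXXX for codepoints beyond the BMP) to override the Unicode value written into the cmap. This is useful for glyphs whose names are not in the aglfn, or whose aglfn value is not the one you want. A sketch (the specific glyph choices and codepoints here are illustrative, not a recommendation):

```
.notdef    .notdef
A          A
dotlessj   dotlessj    uni0237
suppress   suppress    uniE000
```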
Even in this simple scenario, some questions are still open, for example what to do with glyphs like suppress, which have no obvious Unicode value.
I hope this clears up some of your questions. I’ll update the title of this issue to better reflect the nature of our conversation.
Hi Frank,
At one point in time, when I last worked on the Computer Modern fonts (I have since left the company where I did this work), there were no duplicates in the Type 1 font. I will touch base with the current maintainers of the Type 1 fonts to see what is going on with that.
The reason for the ligatures (ff, ffi, etc...) is that when the Computer Modern fonts were first designed, Type 1 fonts did not exist, and the machinery for compositing glyphs did not exist. These glyphs remain in place for compatibility reasons.
The Greek letters are meant for mathematics. There was a supplement to the Unicode specification to add math glyphs. I know the person responsible for the supplement, and I will touch base with her to find out whether adding Unicode support for the OTF flavor of the fonts should use the Greek or the Math/Physics code points.
The pain comment was just me being a little bit whiny. It is not more difficult (very likely less so) than actually designing the glyphs and hinting them.
Finally, thanks for the reference to AGLFN and an example of a GOADB. This is going to be an interesting/fun project.
Tom
I am working on converting the Computer Modern fonts from Type 1 PFA format to OpenType/CFF format. One of my testers reported that /dotlessj is getting an incorrect Unicode code point. From FontForge:
The PFA encoding array is treated as a supplemental encoding, since it is a custom encoding. The encoding array is:
/dotlessj has code point 180, so I don't know how that is getting translated to U+F6BE. Is there some special option to force what I want in the Unicode mapping tables?
Thanks,
Tom