garethsprice / libretext

LibreText generates text for typeface designers to test their typefaces with.

Generate spacing strings #4

Open davelab6 opened 12 years ago

davelab6 commented 12 years ago

Type designers check the spacing of their glyphs by seeing them in the context of rows of 'key' glyphs. The most basic are ones with flat and round sides that are symmetrical, like n/o and N/O. Here is some simple Python that implements this.

LibreText should have an option to output this in a second text box; I've pushed a CSS change to make the current #wordlist 50% wide to make room for a #spacinglist div.

characters = 'HAORadhesion'

strings = [
    ['nnnnnn', 'nnnnnn'],
    ['oooooo', 'oooooo'],
    ['ononon', 'ononon'],
    ['nonono', 'nonono'],
]

for char in characters:
    for pair in strings:
        print(pair[0] + char + pair[1])
    if char == char.upper():
        for pair in strings:
            print(pair[0].upper() + char + pair[1].upper())

davelab6 commented 12 years ago

This is the output of that python code:

nnnnnnHnnnnnn ooooooHoooooo onononHononon nononoHnonono
NNNNNNHNNNNNN OOOOOOHOOOOOO ONONONHONONON NONONOHNONONO
nnnnnnAnnnnnn ooooooAoooooo onononAononon nononoAnonono
NNNNNNANNNNNN OOOOOOAOOOOOO ONONONAONONON NONONOANONONO
nnnnnnOnnnnnn ooooooOoooooo onononOononon nononoOnonono
NNNNNNONNNNNN OOOOOOOOOOOOO ONONONOONONON NONONOONONONO
nnnnnnRnnnnnn ooooooRoooooo onononRononon nononoRnonono
NNNNNNRNNNNNN OOOOOOROOOOOO ONONONRONONON NONONORNONONO
nnnnnnannnnnn ooooooaoooooo onononaononon nononoanonono
nnnnnndnnnnnn oooooodoooooo ononondononon nononodnonono
nnnnnnhnnnnnn oooooohoooooo onononhononon nononohnonono
nnnnnnennnnnn ooooooeoooooo onononeononon nononoenonono
nnnnnnsnnnnnn oooooosoooooo onononsononon nononosnonono
nnnnnninnnnnn ooooooioooooo onononiononon nononoinonono
nnnnnnonnnnnn ooooooooooooo onononoononon nononoononono
nnnnnnnnnnnnn oooooonoooooo onononnononon nonononnonono

EbenSorkin commented 12 years ago

Q: Is LibreText like TextEdit on the Mac or Notepad on Windows?

Q: We could also simply offer a pre-made text that includes all these variables, since the number of glyphs that have to be tested is not infinite. Why is this not preferable? I have been working on one with Joana.

Assuming there is an overarching reason to make tests programmatic:

The span we have in the above example is over 8 glyphs. We may want to make it 2-3 glyphs to the left and five to the right, because this is closer to the maximum number of glyphs, and the pattern, that a typical Latin-reading brain takes in during a fixation after a saccade. (Arabic and Hebrew readers would take in more on the left.) So we might end up with something like this:

nonXonono

where X is the variable letter.
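If the tests are generated programmatically, a minimal sketch of this asymmetric variant might look like the following; the three-glyph/five-glyph split and the n/o frames are taken from the example above, and the uppercase handling simply follows the pattern of the existing script.

# Sketch: asymmetric spacing strings, three frame glyphs to the left of the
# variable letter and five to the right.
LEFT, RIGHT = 'non', 'onono'

def asymmetric_strings(characters):
    lines = []
    for char in characters:
        lines.append(LEFT + char + RIGHT)
        if char.isupper():
            lines.append(LEFT.upper() + char + RIGHT.upper())
    return lines

print('\n'.join(asymmetric_strings('HAORadhesion')))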

Having an overly long line may also tire the tester, because the letters outside of the fixation take some mental energy to ignore.

I think H would be more valuable to use than N. H is easier for a novice to make than N but crucially it is also more visually stable.

Also it might be worth making two tests rather than one: one would be X against lowercase, the other X against uppercase. The user would be able to build four tests in all by changing the case of X.

Also, I would like to see other classes of letters, such as diagonals (v y w), open-sided letters (r l), and gap-producing letters such as T V v W and w. The letters f and t may also deserve their own spots because they can, depending on the design, be so tricky.

So the output for lc 's' as a variable might look like this:

nonsnonon onosonono onesanono onvsvnono onrsanono onfsfnono ontstnono oncsanono
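A sketch of how such class-based frames might be generated; the frame strings below are transcribed from the example output above and are illustrative rather than a fixed spec.

# Sketch: control frames keyed by the class of the neighbouring letters.
FRAMES = [
    ('non', 'nonon'),   # flat-sided neighbours
    ('ono', 'onono'),   # round neighbours
    ('one', 'anono'),   # open counters
    ('onv', 'vnono'),   # diagonals
    ('onr', 'anono'),   # open-sided
    ('onf', 'fnono'),   # f
    ('ont', 'tnono'),   # t
    ('onc', 'anono'),   # c
]

def class_strings(target):
    return [left + target + right for left, right in FRAMES]

print(' '.join(class_strings('s')))   # the lowercase example above
print(' '.join(class_strings('6')))   # the same frames give the figure test shown below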

It might be valuable to be able to have number tests which might be surrounded by other numbers or lc depending on the intent of the font. (In a text face with oldstyle numerals you would want to use lc)

with 6 as a variable:

non6nonon ono6onono one6anono onv6vnono onr6anono onf6fnono ont6tnono onc6anono

and

130602187 131613187 132623187 133633187 134643187

Similarly it might be good to have left- and right-sided punctuation tests like this:

non. onon ono. onono one. nono onv. nono onrs. ono onf. nono ont. nono onc. nono

and this

non 'onon ono 'nono one 'anon onv 'vnon onf 'fnon ont 'tnon

these would also be useful for brackets etc.
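A sketch of the same frame idea applied to a punctuation mark; the frames are transcribed loosely from the examples above, and the `side` parameter is an assumption about how the option might be exposed.

# Sketch: punctuation spacing tests. 'right' places the mark after a letter
# (closing position), 'left' places it before one (opening position).
PUNCT_FRAMES = [('non', 'onon'), ('ono', 'onono'), ('one', 'nono'),
                ('onv', 'nono'), ('onf', 'nono'), ('ont', 'nono')]

def punctuation_strings(mark, side='right'):
    if side == 'right':
        return [left + mark + ' ' + right for left, right in PUNCT_FRAMES]
    return [left + ' ' + mark + right for left, right in PUNCT_FRAMES]

print(' '.join(punctuation_strings('.')))           # -> non. onon ono. onono ...
print(' '.join(punctuation_strings("'", 'left')))   # -> non 'onon ono 'onono ...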

In some ways though the most useful thing would be to have a set of tests primed specifically for the glyph.

davelab6 commented 12 years ago

"We could also simply offer a pre-made text that includes all these variables since the numbers of glyphs that have to be tested are not infinite. Why is this not preferable?"

Because a user designs a font with only a few glyphs and adds more over time, a single pre-made text covering a big Latin character set will have lots of glyphs rendering in a fallback font or as [] notdef boxes. And the glyphs that a user designs with vary over time within a project, and that in turn varies project by project and user by user.

However, to make this software work, it would be great to have such a text and then break it down into a program that could recreate that exact text given the same input character set, and produce subsets of the text for a smaller set of characters.
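A minimal sketch of that subsetting step, assuming the master text is plain whitespace-separated strings and that the available characters come from the font being drawn (the names here are hypothetical):

# Sketch: reduce a pre-made master test text to only the strings that can be
# rendered with the glyphs currently present in the font.
def subset_text(master_text, available):
    available = set(available)
    kept = []
    for line in master_text.splitlines():
        words = [w for w in line.split() if set(w) <= available]
        if words:
            kept.append(' '.join(words))
    return '\n'.join(kept)

master = "nnnnnnHnnnnnn ooooooHoooooo\nnnnnnnRnnnnnn ooooooRoooooo"
print(subset_text(master, 'nH'))   # -> nnnnnnHnnnnnn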

"In some ways though the most useful thing would be to have a set of tests primed specifically for the glyph."

Right, such a pre-made text could have special cases for special glyphs, and the program could support these cases.

EbenSorkin commented 12 years ago

OK this makes sense.

I want to note that these words/texts arguably have two different purposes. It may be worthwhile recognizing the difference and focusing on one, or even serving both, depending on your goals.

Massed text testing - This familiar first purpose is to make a text that isn't meant to be read but that does show you the texture your font has in massed text. With Tim's engine you can focus on specific kern pairs, or see what impact changing the language or groups of languages has on the texture of your text too. This is sweet! Obviously massed text is very important. Taking yourself away from glyph-level editing to see texture, and the impact that glyph design has on texture, is invaluable. Because adhesion text and Tim's engines exist and are free, you could argue we don't need this, but maybe having an engine built into a font editor is still worthwhile.

Glyph design - The second purpose would be to let a designer dynamically check a set of intelligently chosen proximal glyphs, limited by the glyphs available in the font they are working on. This preview would ideally occur next to the glyph as it is being worked on, but could also be offered in a special preview window - or maybe both. In Fontlab you have groups and neighbors you can turn on, and lists you can edit to suit your own process. I want something more intelligent than this. For example: suppose I am working on a lowercase f; in an intelligent font editor I would automatically be shown the glyphs f may clash with, like i, ì (igrave), ï (i dieresis), l, h, b, k and so on. Punctuation is also relevant, for example f” and f}. Ideally a set of letters likely to need kerning might also be shown, perhaps in a different mode (see the sketch after the list below). The point of having this, and not just relying on a massed text engine, is to

-> Find and nip errors in the bud before you get to questions of texture

-> Make the process of glyph design more efficient. It is inefficient to try to rapidly find and check specific glyph combinations in a disordered text block.
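A rough sketch of what such a per-glyph table might look like; the neighbour list is seeded only from the f example above, and the dictionary and function names are hypothetical.

# Sketch: per-glyph "likely clash" neighbours, seeded from the f example.
# A real table would be larger, editable, and probably shipped with the tool.
CLASH_NEIGHBOURS = {
    'f': ['i', 'ì', 'ï', 'l', 'h', 'b', 'k', '”', '}'],
}

def glyph_checks(glyph, available=None):
    pairs = []
    for neighbour in CLASH_NEIGHBOURS.get(glyph, []):
        if available is not None and neighbour not in available:
            continue   # skip neighbours the font does not have yet
        pairs.append(glyph + neighbour)
    return pairs

print(' '.join(glyph_checks('f')))   # -> fi fì fï fl fh fb fk f” f}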

The preview text I use now in Fontlab is designed to help me with this problem. If I notice I want to check something and it isn't in my text, I add it to my preview text.

I also have a print text I use to test. These two texts are very related but not identical. The print test assumes a relatively complete font. It is also structured so I can print just the first few pages for less complete fonts. Both are definitely works in progress.

I should also note that if you are making a joined-style font there is a new set of things to watch out for, but that is another level up.

EbenSorkin commented 12 years ago

Also, as I mentioned before, I have been building a new structured text with Joana, using a series of language corpora containing the most common words in many (but not all) of the languages that use Latin letters.

The text is structured to offer instances of words that contain a target letter and then the variable to the right, e.g. c+a, c+b, c+c. If you want the target letter, say "c", with the variable letter "a" on the left, you go to the section containing "a".
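A small sketch of how such a structured corpus lookup might work; the word list and function name are just placeholders for the real frequency lists.

# Sketch: find corpus words containing a target letter with a given
# neighbour on the chosen side. corpus_words stands in for the real
# frequency lists mentioned above.
def words_with_pair(corpus_words, target, neighbour, side='right'):
    pair = target + neighbour if side == 'right' else neighbour + target
    return [word for word in corpus_words if pair in word]

corpus_words = ['cat', 'car', 'acacia', 'ocean', 'cell', 'class']
print(words_with_pair(corpus_words, 'c', 'a'))           # ['cat', 'car', 'acacia']
print(words_with_pair(corpus_words, 'c', 'a', 'left'))   # ['acacia']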

It will also incorporate some recognition of exceptional-letter problems, like those of f.

If you would like a copy of this let me know.

octaviopardo commented 12 years ago

Honestly I would forget about the saccade thing that Eben mentioned. The main goal in spacing matters is to create balance between whites and blacks, so the best way, in my opinion, is to do it over a balanced composition. I don't really see the point of bringing the saccades thing into this, since it is not even a regular situation in reading. In a text there are many, many more letters with fewer than 5 letters on the right than letters that do have 5 letters on the right (logically, more than 5 times more). Why use that context for spacing?

I think this is more than enough for a 5-day course. For after the course I will suggest other features, like Eastern European pangrams using weirdo accents and stuff like that, to make LibreText the total tool for spacing :) but the strings that Dave suggested seem perfect to me right now.

If I have time this week I'll send you stuff for this.

EbenSorkin commented 12 years ago

The point of talking about fixations is that it is possible to make tests that are more or less tiring for the eye. I suspect that putting the variable letter too deep in a string of letters will make the test more tiring or fatiguing to use. Read some Pelli!

http://psych.nyu.edu/pelli/

Still, something is better than nothing and if the program can be altered ( it can ) then people can roll their own to suit themselves.

octaviopardo commented 12 years ago

Humm, I understand this, but tests are not reading, and the eye doesn't read in a spacing test; it looks, right?

Anyway, it is a minor thing and we have already talked too much about it.


EbenSorkin commented 12 years ago

I agree they are fundamentally different in many ways. You are absolutely correct about that.

But the one way in which they are the same is that we are recognizing glyphs, and this is where the effects of the psychological phenomenon of 'crowding' that Pelli talks about remain relevant. They are common to all recognition-related tasks, not just reading.

Also the center of a typical fixation is about 3 to 4 glyphs in from the start of a word. I just thought it would be useful to have the shoe fit the foot so to speak.

Anyway I will drop this now unless Dave wants to take it further.

If either of you want the papers that talk about these patterns/phenomena let me know.