davelab6 opened this issue 3 years ago
Today we had a video call in which we discussed the structure of the Unified Font Repo. While talking about the FontBakery report and ways to deal with FAILs that are silenced by disabling specific checks, the fb config file was brought up again as a potential solution.
Then I once more suggested that when we do so, it should be mandatory to provide at least a statement (a single sentence) explaining the reason why the designer decided to skip each check, so that we have it properly documented.
I believe that your proposal here in this issue would likely use a similar mechanism.
Here's a somewhat crazy idea that might actually make a lot of sense.
What if we came up with a Creative Commons-like system: a set of mnemonic badges corresponding to those subjective quality aspects of a font project, together with mechanisms for placing those badges on the README.md of a repo, or on a website hosting the font files?
Such badges would have corresponding human-readable text as well as a machine-readable (read: fontbakery can easily read it) metadata file format.
To mitigate the risk of someone gaming the system by simply applying all the badges without actually reviewing the font design, we should make it mandatory that badges are "signed" by simply providing the name of the reviewer. I am not implying any sort of crypto-signing here, but merely trusting what people say, the same way we trust their usernames/real names/pseudonyms, for copyright and licensing matters, when they contribute to a free software project.
Members of the type community would taint their reputation if they were caught excessively overstating the qualities of a design to game the system. And faking signatures (labeling a design review with someone else's name without their consent) would pretty much be identity fraud.
It should also be easy to declare these subjective font quality scores. So we might create a website where a reviewer could answer a set of questions and then the website would generate a metadata file that can be downloaded and saved in the root of a font project directory/repo. Badges would be automatically rendered on README.md based on the contents of that file and provide links to the more verbose human-readable description of the review, which could even showcase the reviewer info if available on the Google Fonts Designers Catalog.
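As a sketch of what such a machine-readable review file could contain (every field name here is invented for illustration; this is not an existing format):

```json
{
  "format_version": 1,
  "badges": [
    {
      "badge": "readability",
      "reviewer": "Jane Doe",
      "date": "2021-11-01",
      "statement": "Spacing and vertical metrics reviewed across all masters."
    }
  ]
}
```

The website would only need to serialize the reviewer's answers into this file; the badges and the human-readable review page could then both be rendered from it.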
All of these things are completely analogous to the Creative Commons framework.
Oh! One last thing: even though I mentioned the Google Fonts Designers Catalog here, I think this seems generic enough that it should be a system that could be used on any font project, not meant to be exclusive only for Google-hosted fonts.
Nice ideas!!! :)
Could maybe use https://github.com/SorkinType/EQX as a starting point for the design review website.
Okay, so let's combine a number of thoughts we've been having recently:
Here's an idea, then:
I suppose that a single profile may have multiple badges. As discussed at #3336, we never really extensively used Sections, but maybe that's their fate :-) A badge for each section of a profile.
But badges could also perhaps cross the profile boundary, because they seem more likely to be defined by topic. For instance, on the `googlefonts` profile I could pretty well see a vendor-specific badge for METADATA.pb-related checks, while the rest of the checks would probably add to the scores of a badge that also takes into account checks from the `universal` or `opentype` profiles. A "Variable Fonts" badge would be an example (containing both universal as well as vendor-specific checks).
When thinking of badges, I picture in my mind things like the "Achievement Unlocked!" sort of goals one often sees in video-games. These often have:
Grouping existing checks into those "Sections" was already a long-postponed goal, and I had not put much energy into it because it was unclear to me whether doing so would have any real value. I had actually been planning to completely deprecate Sections to shave complexity off the code-base, unless we started using them extensively.
Note: Sections might not be the best way to implement this if we decide to give Badges the ability to group checks across profiles.
It might be a nice brainstorming exercise for us to try to come up with a small set of initial badge proposals in that spirit and then decide if it really makes sense to go forward.
I'll keep the word "subjective" in the title of this issue for now because that's what @davelab6 initially suggested us to address here, even though this conversation is clearly evolving to also embrace the existing technical checks.
We need to define how we'll store these "subjective quality scores" in a way that is user-friendly.
"Achievements" would be nice, but there's already a kind of de-facto standard for build/QA badges:
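For comparison, those de-facto standard badges are usually embedded in a README as a markdown image; e.g. a static shields.io badge (the label and status below are just placeholders):

```markdown
![FontBakery](https://img.shields.io/badge/fontbakery-passing-brightgreen)
```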
YES! But I think the idea would be to have a larger set of badges such as:
We would have to draft what else makes sense. Maybe based on the original article that @davelab6 pointed out when opening the issue.
Just have one per section. This is what sections are for. :-)
@twardoch, @RosaWagner, @moyogo, @vv-monsalve, @eliheuer, or anyone else interested:
I'd be glad to have some input from you on this conversation
I like it!
The only thing I would add, as the article Dave links in the first post says in the conclusion:
And then there are, of course, typefaces that are intentionally designed to be weird, wonky, imperfect, distressed, uneven, casual or handmade etc. They are not meant to be evaluated by the same set of technical criteria, but you get the idea.
Maybe a way to indicate fonts with artistic and/or cultural significance is needed so you know the standard subjective font quality scores can be ignored for specific typefaces?
I celebrate and 100% support the spirit of raising the font quality standards/requirements for the library.
However, it is only possible to evaluate something against a frame of reference, a standard for a certain context or purpose. So the first thing we would need to establish is those parameters for Google Fonts: what does a good-quality font mean to GF? Currently we still do not have clear standards around topics such as outline quality or spacing, and we have published fonts spanning quite different quality levels in these respects.
Only after clearly defining our terms and priorities could we elaborate something similar to the rubrics used in education. A rubric is a scheme that relates a topic (badge or section) to a goal and the small tasks needed to achieve it, each one with a percentage of importance toward the whole. Finally, it also determines a minimum value or development percentage required to be considered sufficient, in this case, for a font to be published on GF.
(% relevance for the total assessment) –Goal–
And so on.
How big the topics or sections should be, and what each should address, is what we need to work out, and that is what will give shape to the 'rubric'.
Some sections could be (including the bullets listed in the article):

- **Outline quality**
  - Based on the Outline quality checklist drafted by Rosa
- **Readability**
  - Spacing
  - Vertical metrics
  - Similar detailing across characters
  - Optical compensations
  - Related proportions of related characters
- **Language support**
  - Completeness of character set (GF encodings)
  - Well-designed diacritical characters that meet the standards of native speakers
  - Fitting design of auxiliary characters like punctuation, numerals, currency symbols etc. (not just copied over from other fonts)
- **Font tables compliance**
  - `name`
  - `OS/2`
  - `fvar`
  - `STAT`
- etc.
A font could be ranked according to development status, perhaps only in percentage terms, to avoid subjective quality values such as "good" or "bad" and to prevent endless debates arising from personal or cultural perceptions (as in the open discussion around FAILs).
So finally, to be publishable, the font project should reach at least certain development percentages in each badge. And these could have different values according to the nature of the font project: e.g. the minimum readability value required for a Display/Decorative font could be lowered to 50%, while a text font should reach at least 70%.
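As a minimal sketch of how such per-badge thresholds could work (the badge names and minimum percentages below are purely illustrative, not an agreed GF standard):

```python
def badge_score(results):
    """Percentage of passed tasks within one badge/section.

    `results` is a list of booleans, one per task in the rubric.
    """
    passed = sum(1 for ok in results if ok)
    return 100 * passed / len(results)

def publishable(scores, minimums):
    """True if every badge meets its minimum required percentage."""
    return all(scores[badge] >= minimums[badge] for badge in minimums)

# Hypothetical thresholds: a Display/Decorative font needs a lower
# "Readability" minimum than a text font.
display_minimums = {"Readability": 50, "Language support": 70}
text_minimums = {"Readability": 70, "Language support": 70}
```

A font scoring 60% on Readability would then pass the Display thresholds but fail the text-font ones.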
Could be published in GF! 🧁
I was chatting with @vv-monsalve today about 2022 plans and she mentioned this issue could be a good goal to work on collectively, and I agree. What should we do about this in 2022?
Returning to this after a discussion yesterday: @Lorp suggested that Fontbakery could be used for project management. The idea would be that a JSON file in the font repo could generate checks and their statuses. For example, with `humanchecks.json`:
```json
[
  { "task": "Get Gerry for review", "started": "2024-06-17T15:33:04Z" },
  { "task": "Check all kerns", "started": "2024-08-18T09:03:22Z" }
]
```
Fontbakery would return:

```
project_management/humanchecks:
  FAIL: Get Gerry for review
  FAIL: Check all kerns
```
And then later, when `humanchecks.json` becomes:
```json
[
  { "task": "Get Gerry for review", "started": "2024-06-17T15:33:04Z", "finished": "2024-10-10T02:52:12Z", "comments": "Gerry thinks it's fine" },
  { "task": "Check all kerns", "started": "2024-08-18T09:03:22Z", "finished": "2024-09-01T12:44:02Z", "comments": "It's OK" }
]
```
Fontbakery would return:

```
project_management/humanchecks:
  PASS: Gerry thinks it's fine
  WARN: Check all kerns: Font file has been updated since this task was last completed.
```
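The core logic of such a check could be sketched like this (a plain function, not the actual FontBakery check API; it assumes ISO 8601 timestamps and that the caller supplies the font file's modification time):

```python
from datetime import datetime, timezone

def human_check_statuses(tasks, font_mtime):
    """Map each entry of a parsed humanchecks.json list to a
    (status, message) pair.

    A task with no "finished" field is a FAIL. A finished task is a
    PASS, unless the font file was modified after the task was
    completed, in which case it becomes a WARN.
    """
    results = []
    for task in tasks:
        finished = task.get("finished")
        if finished is None:
            results.append(("FAIL", task["task"]))
            continue
        # Parse an ISO 8601 timestamp such as "2024-09-01T12:44:02Z".
        done_at = datetime.fromisoformat(finished.replace("Z", "+00:00"))
        if font_mtime > done_at:
            results.append(("WARN", task["task"] + ": Font file has been "
                            "updated since this task was last completed."))
        else:
            results.append(("PASS", task.get("comments", task["task"])))
    return results
```

In a real integration the task list would come from `json.load()` on `humanchecks.json` and `font_mtime` from the font file's modification time on disk.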
https://fontstand.com/news/knowledge/evaluating-the-quality-of-a-typeface/ has a nice list of things to consider a font family "good quality", and many are unlikely to be easy to implement as font bakery checks.
However, along with the project's fb config, it should be possible to have a profile of checks for these things, which simply checks that config file for attestations that these things have been done.
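For instance, the attestations could live in a section of the project's FontBakery configuration file. A hypothetical sketch (the `attestations` key and its fields are invented for illustration, not an existing FontBakery feature):

```yaml
attestations:
  "Consistent stroke contrast across weights":
    reviewer: "Jane Doe"
    date: 2024-10-10
    comment: "Checked at 14px and 96px on macOS and Windows."
```

A profile of such "human checks" could then PASS each item that carries an attestation and FAIL the ones that do not.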
This is a bit vague, so I'm assigning it to Felipe and also Simon, Denis and Adam, to think about and get back with a more concrete plan in the coming weeks :)