A Spectre Haunts Photoshop: Adobe Fontphoria
Just some personal opinion, not the views of my employer or anyone else I may be associated with:
Adobe Max has a series of "woo" lightning talks called Sneak Peeks that demo what their R&D folks can do with machine learning to make design jobs... what a capitalist might call... "more productive", and there's naturally a font one.
No wooing like in 2015 (https://www.youtube.com/watch?v=5eJ3IXYcw3M0) but it's very impressive. Congratulations to the folks who worked on it!
https://youtu.be/eTK7bmTM7mU
It looks like it's mainly "style transfer" a la pix2pix (https://affinelayer.com/pixsrv): you draw one glyph - in color, as a bitmap, or as a vector - and it can be extrapolated into a whole typeface via auto-trace, auto-space, auto-kern, auto-build, and auto-install steps. The final demo even throws in some augmented-reality live video image replacement to do it all in apparent real time, for good measure.
The first demo poses a kind of trolley problem for font licensing (https://en.wikipedia.org/wiki/Trolley_problem): a sans serif font, presumably owned by Adobe, is converted to vector outlines on the artboard; one glyph gets holes punched in it like Swiss cheese; and the result can then be dragged and dropped back onto the text element, where style transfer reapplies the hole punching, so arbitrary new text can be typed in the derivative typeface. I wonder how the OFL will play out there, since derivatives must remain OFL.
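For fun, here's the crudest possible sketch of that hole-punch transfer - nothing like whatever neural model Adobe is actually running, just toy Python I made up, where glyphs are binary bitmaps and the "style" is the set of pixels the designer erased from one glyph, replayed onto any other glyph in normalized coordinates:

```python
# Toy hole-punch "style transfer" on binary glyph bitmaps.
# '#' = ink, '.' = background. All names here are hypothetical.

def pixels(bitmap):
    """Set of (row, col) ink pixels in a bitmap of '#' and '.'."""
    return {(r, c) for r, row in enumerate(bitmap)
                   for c, ch in enumerate(row) if ch == "#"}

def learn_style(original, edited):
    """Pixels the designer punched out, as fractional coordinates."""
    h, w = len(original), len(original[0])
    removed = pixels(original) - pixels(edited)
    return {(r / h, c / w) for r, c in removed}

def apply_style(bitmap, style):
    """Replay the learned hole mask onto another glyph."""
    h, w = len(bitmap), len(bitmap[0])
    holes = {(round(fr * h), round(fc * w)) for fr, fc in style}
    keep = pixels(bitmap) - holes
    return ["".join("#" if (r, c) in keep else "."
                    for c in range(w)) for r in range(h)]

# A solid 4x4 "O" with one pixel punched out by the designer:
O = ["####", "####", "####", "####"]
O_edited = ["####", "#.##", "####", "####"]

style = learn_style(O, O_edited)
H = ["#..#", "####", "####", "#..#"]
print(apply_style(H, style))  # the same hole appears in the "H"
```

Obviously the real thing has to generalize across glyph shapes, weights, and vector outlines, which is exactly the part that needs the machine learning.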
The second demo, where a phone camera snap of one of those ubiquitous and iconic sign-painted trucks of India is Fontphorified into a working font just fast enough that you can see each glyph being generated, will surely be instantaneous soon enough. I guess it could be even today, using the kind of streaming technology that makes Xbox games work on your phone. And I guess in the USA, where letterform designs are completely public domain, there's no licensing issue at all with that.
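The first stage of that pipeline - just finding the letterforms in a photo at all - is old-school computer vision. A toy version with a brightness threshold and a flood fill might look like this (again purely an illustrative sketch of mine, not what Fontphoria does):

```python
# Toy glyph segmentation: threshold a grayscale image into ink/paper,
# then group ink pixels into connected components, one per glyph candidate.

from collections import deque

def segment_glyphs(gray, threshold=128):
    """Return one set of (row, col) pixels per connected dark region."""
    h, w = len(gray), len(gray[0])
    ink = {(r, c) for r in range(h) for c in range(w)
           if gray[r][c] < threshold}
    seen, components = set(), []
    for start in sorted(ink):
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        seen.add(start)
        while queue:  # 4-connected flood fill
            r, c = queue.popleft()
            comp.add((r, c))
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (nr, nc) in ink and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        components.append(comp)
    return components

# Two dark strokes on a light background:
photo = [
    [255,   0, 255, 255,   0],
    [255,   0, 255, 255,   0],
    [255, 255, 255, 255, 255],
]
print(len(segment_glyphs(photo)))  # prints 2
```

Each component would then be handed off to autotrace, and that's where the hard problems - outline quality, spacing, kerning - actually live.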
I wonder how good the results are when you point your Fontphoria iPad at the American Type Founders specimen books. And what this would look like as a font format.
My guess is that this will not form the basis of a CFF3 proposal any time soon, though. I didn't look too closely, but I believe Adobe isn't actually announcing these Sneak Peeks as features shipping in Creative Cloud next month; it's more a hint of what can be done with state-of-the-art computer graphics software engineering. I suspect a lot of it is what a savvy developer can do in a few weeks with the right idea and the libre-licensed machine learning stuff lying around GitHub.
As they say, the future is already here; it just isn't evenly distributed yet.
Interesting times!