mgiuca closed this issue 3 years ago.
@marcoscaceres What do you think?
Agree. This would be nice.
I am totally pro this; I have run into this problem myself a couple of times.
Perhaps we could make a global rule that Manifest strings are trimmed, then we wouldn't really need any processing steps at all.
We could also not trim and be more strict.
I don't think that having a few extra lines per entry is that big a deal, editorially.
I took a stab at this: https://github.com/w3c/manifest/pull/612 (defining the IDL).
Would be great with some initial comments from @marcoscaceres to know if I am doing this right before continuing :)
Then after landing that change we can simplify the prose.
Guess I need to add [NoInterfaceObject] to all of these right?
Pasting what I wrote in the PR, so we can try to reconcile this...
So... although we can define processing in IDL, we need to be careful if we are going to expose these on the web. In particular, “foo_bar” property names would make this API inconsistent with the rest of the platform.
It’s a bit similar (but obviously not the same) as CSS in JS, where “foo-bar” becomes “fooBar”. See casing rules https://w3ctag.github.io/design-principles/#casing-rules
I don't think we actually need to expose the types, these are mostly for being able to easily refer to them and use them potentially in other specs. That is why I was originally using [NoInterfaceObject] but I guess I understood that wrong.
So this is a tough call. My understanding is this won't actually ever be exposed as such, right? If so, it seems OK, but others more knowledgeable than I might disagree.
Wrt the USVString vs. DOMString issue, again, I'm of little help. Sorry. Clarifying the use cases in WebIDL is on my plate and I should get to it shortly, but until then, you probably want to ask @annevk.
No, the idea is not to expose these. There is no way for web devs to obtain the manifest info without fetching it manually, in which case they will get JSON back.
The IDL helps defining the spec in a less verbose way (we can simplify algorithms), and it gets types we can refer to in incubation specs or documents before integrating new features in the spec itself. It also makes the spec more structured and gives better overview (IDL snippets and IDL index).
No, the idea is not to expose these. There is no way for web devs to obtain the manifest info without fetching it manually, in which case they will get JSON back.
Maybe add a pretty large advisement about that?
What exactly do you get from IDL and what would this look like? As far as I can tell what you need to be doing is UTF-8 decode, followed by JSON parse, followed by operating on the resulting JSON value. What am I missing?
I am fine with doing that. Any suggestion on the best way to do that?
"This spec uses Web IDL to define the structure and processing model of the JSON document, which is internal to the User Agent, and thus not exposed to developers directly"
@annevk My PR is here: https://github.com/w3c/manifest/pull/613/files
Note there is a preview here. In particular, the WebAppManifest dictionary.
So is the issue just the naming discrepancy between the manifest fields and normal web attributes ("short_name" vs "shortName")? Or is it more fundamentally that we shouldn't create an IDL that isn't actually exposed on the web?
What exactly do you get from IDL and what would this look like?
The reason for this is mostly for readability. It's becoming increasingly harder to read the manifest spec because there are so many fields. I would like to be able to see at a glance what all the fields are and their types. WebIDL is a convenient language to describe it succinctly instead of loads of prose algorithms.
Another advantage is that an implementation could potentially just take the IDL, parse it, and generate manifest parser code. Obviously we could already do that today (though we don't in Chrome), but it would make sense to have "official" IDL for the manifest, rather than implementations essentially writing their own IDL to match the prose text in this document.
Having said that, we do have to be careful because we might want to expose some of these in JavaScript APIs. The original motivation was Ken's shortcuts proposal, which has a ShortcutInfo dictionary that would be both the type of a field in the manifest and the type of parameters to a JavaScript API. That ShortcutInfo dictionary itself has an icons field with the same type as icons in the manifest (so the dictionary Ken calls ImageResource in his PR would also be exposed to JavaScript APIs). And Google's proposed getInstalledRelatedApps API would also return instances of the related_applications manifest member (which Ken calls ExternalApplicationResource in his PR).
So I think in the future, there will be some of these dictionaries exposed to web APIs (and that is part of the motivation behind this change). Questions for the community:
So is the issue just the naming discrepancy between the manifest fields and normal web attributes ("short_name" vs "shortName")? Or is it more fundamentally that we shouldn't create an IDL that isn't actually exposed on the web?
It's mostly the naming.
There are advantages (and nice security properties) to having the manifest described in IDL, even if only passed internally. We've had a bug open about this for a couple of years in Gecko: https://bugzilla.mozilla.org/show_bug.cgi?id=1176442
- Is it inappropriate to have underscores in dictionary fields that are exposed on the web?
IMO, yes. This violates rules in tools like ESLint and makes for inconsistent coding styles. It's not very idiomatic to have underscores in names in JS (or on the web).
Having said that... it's not like anything would actually break.
If so, if we expose manifest members to JavaScript APIs, will we need to expose them under camelCase names instead?
We don't have to ... but we may choose to.
If so, should we specify the manifest version and the JS version as two different IDL dictionaries, or have some kind of automated renaming mechanism, like Marcos mentioned CSS does?
I don't know how the magic mapping works for CSS, but I'm sure @annevk does (as he worked on CSSOM for a few years)... but there are also data-foo-bar attributes in HTML that undergo a similar conversion, where data-foo-bar becomes dataset.fooBar.
Oh, I'd just add that we shouldn't assume the dictionaries we define in the IDL will necessarily end up being dictionaries in an API... some may be generic and useful enough to become proper interfaces. We don't know yet, however. We will need to work that out as we go.
If you just want an overview of all JSON members and their corresponding types, have you considered creating a table?
If you actually want to use IDL, how exactly would that work? You would first convert bytes to code points. Then code points to JSON values. Then JSON values to ECMAScript values and then feed that to the dictionary parser to get an IDL dictionary?
I don't see any such processing model defined at the URL you pointed to.
If you just want an overview of all JSON members and their corresponding types, have you considered creating a table?
That would be preferable to what we have now, but not convey as much information as IDL. A couple of points:
If you actually want to use IDL, how exactly would that work? You would first convert bytes to code points. Then code points to JSON values. Then JSON values to ECMAScript values and then feed that to the dictionary parser to get an IDL dictionary?
Yes, I think so. Implementations could cut out some of those steps but I think that's how the spec would go.
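As a rough sketch of that pipeline (parseManifest and its member handling are hypothetical names for illustration, not spec algorithms):

```javascript
// Sketch of the conversion dance Anne describes:
// bytes -> code points (UTF-8 decode) -> JSON value -> dictionary-like record.
function parseManifest(bytes) {
  const text = new TextDecoder("utf-8").decode(bytes); // bytes -> code points
  const json = JSON.parse(text);                       // code points -> JSON value
  // Stand-in for WebIDL's ES-value-to-dictionary conversion: copy only
  // recognised members, checking each against its expected type.
  const manifest = {};
  if (typeof json.short_name === "string") {
    manifest.short_name = json.short_name;
  }
  return manifest;
}
```

An implementation could fuse some of these steps, but this is the shape the spec text would take.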
I don't see any such processing model defined at the URL you pointed to.
Not yet. Kenneth just wanted to capture the IDL initially, and in a follow-up work on rewriting the parsing logic to use the IDL primitives instead of manually parsing. What is your overall opinion on whether this is worth it?
I'll note that there are a few breaking changes if we do this:
We should check if the above would break a lot of web content. I doubt it will.
FWIW, I think it's worth doing this - if only because it gets rid of so much redundancy. But then again... I'm not sure how we reconcile the JSON naming vs DOM API naming issues.
What is your overall opinion on whether this is worth it?
It seems reasonable as long as it's well-defined. And as long as you stick to dictionaries there's not much room for collateral damage. If we get more consumers (e.g., Origin Policy would be a good candidate) we should maybe uplift this to IDL somehow.
I'm not sure how we reconcile the JSON naming vs DOM API naming issues.
Nothing blocks you from using the JSON names, but I'm also not sure we actually want to use that naming convention for all JSON... Certainly reporting infrastructure uses different names and I hope Origin Policy would too.
FWIW, I think it's worth doing this - if only because it gets rid of so much redundancy. But then again... I'm not sure how we reconcile the JSON naming vs DOM API naming issues.
We could add JSON bindings (I'm only half joking).
We could add JSON bindings (I'm only half joking).
Let's play this out as a straw person... we might need it to do the conversion dance that Anne described in https://github.com/w3c/manifest/issues/611#issuecomment-332452821 ... what are you thinking @tobie?
Yes I was discussing some of that with @tobie yesterday. I might give it a shot today, but will probably need some help :-)
I guess the other thing I wasn't sure of is whether you actually need all the IDL processing here. Why would we convert numbers to strings, for instance, if we want strings? There are far fewer types in JSON and it's much easier to deal with JSON values than arbitrary JavaScript input. So where does all the complexity you allude to come from?
It does make sense for JSON to be consistently processed though, so something like IDL makes sense, but I'm not sure we'd want the same kind of ToString() all the things behavior and such.
@annevk agree. Thankfully there are not too many of those... @kenchris, can you evaluate how much breakage there will be if we make better use of the types?
If we are to use parts of the manifest format for other web APIs, like say addShortcut({ /* using same icon format here */ }), then we would be dealing with arbitrary JS.
At that point @marcoscaceres's concern about naming becomes a real issue (which as I said you might want to revisit given precedent in reports and such).
Let's play this out as a straw person... we might need it to do the conversion dance that Anne described in #611 (comment) ... what are you thinking @tobie?
So @marcoscaceres, currently, if we want to turn JSON into WebIDL, we have to do JSON.parse(json) -> ES -> WebIDL. We could instead describe bindings for each of JSON's 7(?) types, including camelCasing snake_case object properties. This could even nicely mirror the toJSON stuff we recently added.
That said, it's only really worth doing if there are other parts of the platform that would benefit.
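If such bindings existed, the camelCasing step might look something like this (a hypothetical sketch, not anything actually proposed in the thread; snakeToCamel and toIdlNames are made-up names):

```javascript
// Convert snake_case JSON member names to camelCase IDL-style names,
// recursing through objects and arrays; leaf values pass through untouched.
function snakeToCamel(name) {
  return name.replace(/_([a-z])/g, (_, c) => c.toUpperCase());
}

function toIdlNames(value) {
  if (Array.isArray(value)) return value.map(toIdlNames);
  if (value !== null && typeof value === "object") {
    const out = {};
    for (const [key, v] of Object.entries(value)) {
      out[snakeToCamel(key)] = toIdlNames(v);
    }
    return out;
  }
  return value; // strings, numbers, booleans, null
}
```

For example, toIdlNames({ short_name: "App" }) produces { shortName: "App" }, mirroring the data-foo-bar / dataset.fooBar mapping in HTML.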
Could allowing both forms in the JSON be an option? I.e., specify that _ turns the following character into uppercase?
@tobie like this? https://w3c.github.io/payment-method-manifest/
@kenchris right, though I have no idea whether WebIDL bindings would help, here.
the toJSON being discussed: https://heycam.github.io/webidl/#idl-tojson-operation
This is done, right?
No, see https://github.com/whatwg/infra/issues/159. We haven't really agreed upon a shared processing model for all JSON formats in the web platform, as far as I can tell.
I discussed with @dominickng and we have a vague plan for this.
We do want to keep defining the manifest in terms of IDL. It's just so much more readable than explaining the types of all the fields via algorithms that check the types and reject if they don't match. I recall @marcoscaceres and @annevk had a few concerns with this, but the main issue is that we have no way of defining how much of the manifest to invalidate when there is a type error in some member. (Technically, due to IDL, we should be invalidating the entire manifest if there is any type error, but that isn't our intention and I don't think any browser does this.)
This is becoming an actual problem for us; see this Web Share Target code review on Chromium, where we have to unilaterally decide how much of the share target member to invalidate when an accept is invalid, because we don't want to invalidate the entire manifest, as the spec implicitly tells us to do.
The proposed solution is this:

- Define a new extended attribute, [CatchTypeErrors] (name can be debated; my personal preference is [TheBuckStopsHere] 😜). (There are a lot of these already, maybe there's already one that does this.)
- Define [CatchTypeErrors] as "If during the conversion of the inner type to IDL, an exception was thrown of type TypeError, let the result be undefined." Essentially, this means if you put [CatchTypeErrors] on an optional dictionary member, any type errors inside that member will cause the member to take its default value, rather than propagating the TypeError upwards.
- Put [CatchTypeErrors] on most of the top-level members, so if any of them are invalid, they don't break the whole manifest. In the case of some more complex members, [CatchTypeErrors] may be put on the sub-members instead.
- The behaviour I want (but haven't thought of how to define yet) is for sequence types: if you put [CatchTypeErrors] on the type of the sequence elements, any failing element is simply dropped from the resulting list, rather than putting undefined in the list. That's pretty key, since I'd want to define icons as sequence<[CatchTypeErrors] ImageResource>, so if any icon is invalid, we drop that icon rather than dropping the whole icons member.
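A sketch of the behaviour [CatchTypeErrors] is intended to produce (convertIcon, catchTypeErrors, and convertIconSequence are hypothetical stand-ins for WebIDL conversion, not spec algorithms):

```javascript
// Hypothetical per-member converter: throws TypeError on invalid input,
// standing in for WebIDL's ES-to-dictionary conversion of ImageResource.
function convertIcon(raw) {
  if (raw === null || typeof raw !== "object" || typeof raw.src !== "string") {
    throw new TypeError("invalid ImageResource");
  }
  return { src: raw.src };
}

// [CatchTypeErrors] on a member: a thrown TypeError yields undefined
// instead of invalidating the containing dictionary.
function catchTypeErrors(convert, raw) {
  try {
    return convert(raw);
  } catch (e) {
    if (e instanceof TypeError) return undefined;
    throw e;
  }
}

// sequence<[CatchTypeErrors] ImageResource>: failing elements are dropped
// from the list rather than becoming undefined entries.
function convertIconSequence(raw) {
  if (!Array.isArray(raw)) return undefined;
  return raw
    .map((item) => catchTypeErrors(convertIcon, item))
    .filter((icon) => icon !== undefined);
}
```

So an icons value of [{ src: "a.png" }, 42] would yield one icon, not an invalid manifest.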
Does this sound like a reasonable approach, @marcoscaceres @annevk ?
I went ahead and specced this extended type attribute in WebIDL to see what it would be like. heycam/webidl#597
See https://pr-preview.s3.amazonaws.com/mgiuca/webidl/pull/597.html#CatchTypeErrors
So the plan would still be that, e.g., you could do "short_name": { "foo": "bar" } and get a short name of "[object Object]"? That's what Web IDL implies at least...
@domenic Yeah I guess so. 😕 I'm not trying to fix the bad but valid conversions. I'm trying to stop the entire manifest from being invalidated if any element actually throws a type error.
I'm not sure if the previous (pre-WebIDL) algorithms applied string conversions or if they just strictly checked for the exact string type. If the latter, maybe there is a case for having that strictness too (which maybe warrants another extended attribute, for string values, to say "this actually has to be a string, not a random object that gets stringified").
I'd always assumed we would do actual type checks; that's my current plan for import maps. No spec yet but I did write up a reference implementation which is pretty easy to translate into spec-ese.
@domenic That sounds great :) but it seems like a separate feature from catching TypeErrors.
Sure, it's just unclear whether you're using Web IDL at that point, or just something that shares the syntax with completely different semantics.
Hmm. I mean, it sure is useful to be able to reuse the syntax and all of the semantics except for 2 type conversion rules that we want to be less strict (in one case) and more strict (in another case).
Arguably these rule changes could be useful in more places than just the manifest spec. Like, we have extended attributes to change how other fields are processed (e.g., [Clamp], [EnforceRange]) which seem to be defined because the old way was too lax, but we can't change the default. It seems that [StrictString] for strings would be no stranger than [EnforceRange] for integers. So I wouldn't say it's a different language, just a slightly different requirement for how to process some types. And that's exactly what extended IDL attributes are for.
Update: You can see what the old (pre-WebIDL) behaviour used to be by checking out e5536520. Looks like all the string properties were defined like this:
- Let value be the result of calling the [[GetOwnProperty]] internal method of manifest with argument "name".
- If Type(value) is not "string":
  a. If Type(value) is not "undefined", issue a developer warning that the type is not supported.
  b. Return undefined.
- Otherwise, Trim(value) and return the result.
(I think we deliberately got rid of the Trim behaviour at the same time.) I think that is strict about it being a string.
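Translated into a rough JS sketch (extractStringMember is a hypothetical name, and console.warn stands in for "issue a developer warning"):

```javascript
// Sketch of the old (pre-WebIDL) per-member string extraction steps.
function extractStringMember(manifest, name) {
  // Step 1: [[GetOwnProperty]] -- only own properties count.
  const value = Object.hasOwn(manifest, name) ? manifest[name] : undefined;
  // Step 2: strict type check, warning on unsupported types.
  if (typeof value !== "string") {
    if (value !== undefined) {
      console.warn(`Manifest member "${name}": type not supported.`);
    }
    return undefined;
  }
  // Step 3: trim and return.
  return value.trim();
}
```

Note it rejects non-strings outright rather than stringifying them, which is exactly the strictness discussed above.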
Well, I think it's not quite that amount of reuse. You're only using a small subset of Web IDL, i.e. the dictionaries/basic data types/sequences subset, ignoring the large swathes of the spec devoted to defining interfaces, global variables, methods, class hierarchies, etc. And the semantic content of that subset consists entirely of type coercion rules. But, you're not actually using any of the type coercion rules; you're instead using other rules that do not share very much of the logic at all. And you're using them on a very different set of inputs, since the output space of JSON.parse is a small subset of the total set of JS types, so whatever logic is left over, is not a great match.
As an example, on a page where someone runs Object.prototype.short_name = "foo", using Web IDL would imply that parsing {} gives back a short name of "foo", not undefined. Again, you could add yet another extended attribute to fix that, but...
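Domenic's example is easy to reproduce, since WebIDL dictionary conversion reads members with [[Get]], which walks the prototype chain:

```javascript
// A polluted Object.prototype leaks into naive property lookup on a
// parsed manifest, but not into own-property checks.
Object.prototype.short_name = "foo";
const manifest = JSON.parse("{}");

console.log(manifest.short_name);                   // "foo" -- [[Get]] walks the prototype chain
console.log(Object.hasOwn(manifest, "short_name")); // false -- not an own property

delete Object.prototype.short_name; // clean up
```

The old [[GetOwnProperty]]-based steps sidestep this; a WebIDL-based processing model would not, without more work.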
I understand the motivation of wanting a readable, declarative syntax. I just think Web IDL, subsetted to a small piece of its semantics, with extended attributes to change all remaining behaviors of that subset, is ... not great. Maybe it's better than inventing your own thing, or reusing something like JSON schema. Indeed, maybe it's the best choice. But it's still not great.
I think it would be better if we started with the high-level goals for this syntax and then see what kind of schema language would be the most appropriate fit.
Things I'm unclear on:
Are there other requirements here?
I've become increasingly uncomfortable with using WebIDL to spec the manifest format. I think it seemed like a great idea at the time, but for the reasons @domenic mentioned, and the issues we've hit in practice, we need to seriously re-evaluate what we have done here. I'm also super worried that other groups have started imitating us, and those groups have far less experience with WebIDL than we do (they are going to get themselves into a world of hurt).
As @annevk points out, we have a few options. When we started this project, I deliberately chose not to use any schema language because we'd had such a terrible experience with XML/XML Schema, XHTML+DTDs, RDF etc. that it seemed pointless to define yet another schema language. Rather, it seemed like just writing our own processing rules using prose would make more sense - even if it turned out to be more long winded.
In implementation, the parsing rules were supposed to be consistent. Consider Gecko's "ValueExtractor", which pulls any value from the manifest using the same algorithm (19 lines of code).
And each one of our members gets extracted consistently too, for example, each rule looks like this:
function processOrientationMember() {
  const spec = {
    objectName: 'manifest',
    object: rawManifest,
    property: 'orientation',
    expectedType: 'string',
    trim: true
  };
  const value = extractor.extractValue(spec);
  if (value && typeof value === "string" && this.orientationTypes.has(value.toLowerCase())) {
    return value.toLowerCase();
  }
  return undefined;
}
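For illustration, a hypothetical extractValue in the same spirit might look like this (a sketch, not Gecko's actual ValueExtractor code):

```javascript
// Hypothetical generic value extractor: one algorithm shared by every
// member-processing rule, with per-member options passed in a spec object.
const extractor = {
  extractValue({ objectName, object, property, expectedType, trim }) {
    const value = object[property];
    if (value === undefined || value === null) {
      return undefined; // absent member
    }
    if (typeof value !== expectedType) {
      console.warn(`${objectName}.${property} should be a ${expectedType}.`);
      return undefined; // wrong type: drop the member, warn the developer
    }
    return trim && typeof value === "string" ? value.trim() : value;
  },
};
```

The per-member rules then reduce to a spec object plus any member-specific validation, as in processOrientationMember above.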
We could go back and generalize the processing rules to make something simple and consistent like the above - which handles special case processing for particular members (basically @annevk's requirements). Or we could look at Chrome and WebKit's implementation, to see if either team has come up with something clever, concise, and consistent.
If we end up with a generalized model for the Web, then that's great (and we can push it to Infra or make a new spec)... but I don't know if that should be a goal just yet.
I think it's worth looking at schema languages for JSON, but I suspect we need a custom one, since we want forward compatibility. I.e., ignore optional fields that do not match the schema and ignore fields outside the scope of the schema.
If we need something custom, taking inspiration from dictionaries in IDL does seem appropriate, something nearly identical might even be acceptable, but I'm not sure it's worth trying to create a shared abstraction there as the goals are rather different and it'll be a hassle to maintain for either side.
I guess if I were to work on this I'd copy the IDL dictionary work and simplify it to fit the needs of what we need for all these JSON resources.
JSON Schema is forward compatible in that regard.
@tobie wrote:
JSON Schema is forward compatible in that regard.
As an implementer, I want to avoid implementing JSON Schema from scratch if I can avoid it. Unless we agree that that is what we are going to do for a substantial number of specs.
@annevk wrote:
I guess if I were to work on this I'd copy the IDL dictionary work and simplify it to fit the needs of what we need for all these JSON resources.
That might be reasonable. We have a fairly good idea what kinds of behaviors the processor should exhibit. From IDL dictionaries, I think we are only missing trimming and normalization behavior.
There isn't really a formal specification of the manifest data structure. Instead, each member has its own section which describes the format of that member in prose. This makes it hard to discuss (e.g., in Ken's shortcut proposal, we'd like to be able to say "sequence<IconInfo> icons;" but there's no name for "the dictionary that describes a Manifest icon" because it's described in prose).

I think we should add a WebIDL section that contains a dictionary definition for the full manifest, as a first step (I can do it).
As a second step, I think this would allow us to simplify the language in each individual member. The "steps for processing a manifest" algorithm could be replaced with "Parse the JSON into an IDL dictionary value, then convert it into an ECMAScript object" (which automatically applies all of the IDL type conversion rules).
Then, for example, this text:
could be replaced by this (assuming short_name is a DOMString):

Were it not for the call to Trim, this would make almost all the processing steps trivial. Unfortunately, almost all of them do Trim their strings, which makes things more complicated. Perhaps we could make a global rule that Manifest strings are trimmed, then we wouldn't really need any processing steps at all.
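Such a global trim rule could be a single recursive pass over the parsed JSON before any per-member processing (a sketch under that assumption; trimAllStrings is a made-up name):

```javascript
// Global "all manifest strings are trimmed" rule: one recursive pass over
// the parsed JSON, so individual member steps need no Trim calls.
function trimAllStrings(value) {
  if (typeof value === "string") return value.trim();
  if (Array.isArray(value)) return value.map(trimAllStrings);
  if (value !== null && typeof value === "object") {
    const out = {};
    for (const [key, v] of Object.entries(value)) {
      out[key] = trimAllStrings(v);
    }
    return out;
  }
  return value; // numbers, booleans, null
}
```

E.g., trimAllStrings(JSON.parse('{"short_name":"  App  "}')) yields { short_name: "App" }.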