kainino0x opened this issue 4 years ago
If specification text returns an out-of-range number for a getter or method, would that trigger an assert or should it be coerced?
I think that should be a spec error in any case, even if EnforceRange isn't present.
That seems fine for this case (and let's assert for it in IDL). (I think sometimes coercion or some flexibility around types makes sense, but I agree that for clearly erroneous cases it would be better to assert.)
I would strongly prefer not to allow these sorts of no-op extended attributes. They muddle the purpose of the extended attribute, which is to operate on inputs by changing how JS type conversions are done.
Well, that's still the use case, but they want to reuse that "type" for other purposes.
I don't quite understand; can you give an example of a case where you cannot remove the extended attribute when using it as a return value or const?
The OP gave an example of `GPUBufferUsageFlags`, which seems somewhat reasonable to me. (Assuming that WebGL's rather exceptional usage of IDL is okay.)
In that case I would suggest using `[EnforceRange] GPUBufferUsageFlags` for inputs, and `GPUBufferUsageFlags` for outputs. This also makes the spec a lot easier to read, IMO.
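A minimal WebIDL sketch of this suggestion (the dictionary and interface members are illustrative, not copied from the WebGPU spec):

```webidl
typedef unsigned long GPUBufferUsageFlags;

dictionary GPUBufferDescriptor {
  // Input position: annotate at the point of use.
  required [EnforceRange] GPUBufferUsageFlags usage;
};

[Exposed=Window]
interface GPUBuffer {
  // Output position: the bare typedef, with no annotation.
  readonly attribute GPUBufferUsageFlags usage;
};
```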
In WebGPU, we want the strongest possible typing on every input integer, because any kind of coercion would be extremely confusing to developers. I don't find it easier to read if we have to pepper `[EnforceRange]` on EVERY integer type throughout the entire spec, which is why I proposed the `typedef`s in WebGPU (linked above). Plus, it is error-prone, as it's never obvious when an `[EnforceRange]` is missing; whereas, with the typedefs, any usage of a bare `long` is an error.
If WebIDL won't allow `[EnforceRange]` on output types, then I would like to keep the `typedef`s and cherry-pick the places where we have to use undecorated integers.
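For contrast, a sketch of the typedef approach being defended here (loosely based on the WebGPU IDL of the time; treat the member names as illustrative):

```webidl
// Annotate once, in the typedef...
typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
typedef [EnforceRange] unsigned long long GPUFenceValue;

dictionary GPUBufferDescriptor {
  // ...and every input use site gets enforcement automatically.
  required GPUBufferUsageFlags usage;
};

[Exposed=Window]
interface GPUFence {
  // Output use site: under this proposal, [EnforceRange] would simply
  // be a no-op here rather than a spec error.
  GPUFenceValue getCompletedValue();
};
```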
Could we get text in the spec about consts/return types/anything else, and ideally more validation checks (that would run in Bikeshed) for invalid usages of `[EnforceRange]`?
We are trying to figure out whether we should comply with this hypothetical rule (that `[EnforceRange]` shouldn't be allowed on `const`s). Could we get some resolution on this in the WebIDL spec?
FWIW, I think that we (the WebGPU editors) would still prefer it if typedefs containing `[EnforceRange]` were allowed on all output types (`readonly attribute`, `const`, and return types). cc @kvark
Here's a PR where we were making the change: https://github.com/gpuweb/gpuweb/pull/928
Consider it resolved! When we have editing bandwidth we can add it to the spec proper. (A pull request would also be lovely.)
Thanks for the quick reply!
Just to clarify, we should consider it resolved that `[EnforceRange]` is disallowed on `const`s and return types. Is that correct?
> Consider it resolved!
I can't determine from this thread which way it was resolved. Do you think you could clarify?
As an aside: having to have two sibling typedefs, one with `[EnforceRange]` and one without it, seems bad for clarity. And the alternative of having one typedef without it, but sprinkling `[EnforceRange]` around the spec in various places, seems even worse (because it's non-obvious when you've missed one). IMO, allowing no-op `[EnforceRange]`s is valuable for type consistency and spec simplicity. Being able to put `[EnforceRange]` in a typedef, use the typedef everywhere, and have `[EnforceRange]` do its enforcement where it makes sense and do nothing where it doesn't, seems like the best outcome.
> Just to clarify, we should consider it resolved that `[EnforceRange]` is disallowed on `const`s and return types. Is that correct?
Correct!
> As an aside: Having to have two sibling typedefs, one with `[EnforceRange]` and one without it, seems bad for clarity.
In general I find the extensive use of typedefs in the WebGL specs pretty bad for clarity. They're working against the Web IDL type system, which assumes everyone will use a common vocabulary for things like `unsigned long`, instead of typedefing primitives. So it's understandable that you'll run into trouble when working in that type of situation.
> In general I find the extensive use of typedefs in the WebGL specs pretty bad for clarity.
Is there a better solution to represent bitsets?
Using the original types, e.g. `unsigned long`, directly is what I would suggest.
That breaks down when there are multiple bitsets. A function that takes an `unsigned long` does not indicate which bitset that `unsigned long` represents. That's the purpose of the typedefs.
I think that's pretty misleading, because in JavaScript, there's no distinction between those different types of bitsets. You could easily write code that assigns a `GPUSize32` bitset value to a `GPUSampleMask` bitset property. The IDL shouldn't hide this fact behind typedefs.
Not a JS dev by any means, but I thought you can write a lot of weird code in JS. Can't you pass the same object where different WebIDL dictionary types are required, for example? Does that make it less valuable to define different dictionary types then? My understanding is that WebIDL spec defines what is expected in objects passed by the user. It can't enforce what these things are, since JS is weakly and dynamically typed. From this perspective, a typedef for an integer/flag does increase clarity and quality of a spec.
> Can't you pass the same object where different WebIDL dictionary types are required, for example? Does that make it less valuable to define different dictionary types then?
If you made multiple dictionaries with the exact same members, then yes, I agree that would be just as misleading and against the grain of Web IDL.
> My understanding is that WebIDL spec defines what is expected in objects passed by the user. It can't enforce what these things are, since JS is weakly and dynamically typed
I wouldn't really put it that way. In particular, Web IDL defines, and allows specs to define, conversion rules for how to turn JavaScript values into Web IDL values. Having, e.g., different dictionary types (which generate different conversion rules), or different integer types (e.g. `long` vs. `unsigned long`, which generate different conversion rules), is helpful.
But if you define two exact same conversion rules under different names, that's generally confusing for readers, causing them to dig through your indirection to realize what conversion rules are actually in place. Sometimes there's a brevity benefit, especially around typedefing large unions or deeply-nested generic structures, which outweighs this confusion; that's generally why typedefs exist. But they don't exist so that you can express "perform ToNumber, round to an integer, and take the value modulo 2**32" in several different ways. We have a canonical way to do that, namely `unsigned long`.
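For concreteness, here's a rough JavaScript sketch of the two conversion behaviors being contrasted in this thread. This is a simplification of the algorithms in the Web IDL spec, not the normative text:

```javascript
// Rough sketch of Web IDL's conversion for a plain `unsigned long`
// (silent modulo wrapping) versus the [EnforceRange] variant (throws).
function toUnsignedLong(value) {
  const num = Number(value);                    // ToNumber
  if (!Number.isFinite(num)) return 0;          // NaN and ±Infinity become 0
  const int = Math.trunc(num);                  // round toward zero
  return ((int % 2 ** 32) + 2 ** 32) % 2 ** 32; // wrap modulo 2^32
}

function toUnsignedLongEnforceRange(value) {
  const num = Number(value);
  if (!Number.isFinite(num)) throw new TypeError("not a finite number");
  const int = Math.trunc(num);
  if (int < 0 || int > 2 ** 32 - 1) throw new TypeError("value out of range");
  return int;
}

console.log(toUnsignedLong(-1)); // 4294967295: silently wraps
try {
  toUnsignedLongEnforceRange(-1); // same input is rejected
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```

The point of the canonical-name argument is that the first function above is what "`unsigned long`" always means to a reader, so a typedef that means exactly the same thing only adds indirection.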
To be clear, I'm not saying that we should disallow typedefing primitives, or other "simple" types, in the Web IDL spec. Where to use typedefs, and make the indirection tradeoff, is a choice that individual specification editors should make. Instead, I'm just explaining why, when you attempt to use them in this way, you'll run into issues like the ones discussed in this thread.
WebIDL specifies that `[EnforceRange]` types are not valid for `readonly attribute`s. However, it doesn't seem to specify whether they are valid for:

- `const`s (see `GPUBufferUsageFlags` and `GPUBufferUsage.*`)
- return types (see `GPUFenceValue`)

It seems like all 3 should be the same.

That said, I would argue that `[EnforceRange]` should be valid in all of these cases, just that it would have no effect. This makes sense because: (a) it allows a `typedef` of an `[EnforceRange]` integer type to be used consistently, and (b) it matches the behavior of reading from a non-readonly `attribute` (where `[EnforceRange]` is meaningless/ignored).
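A sketch of the three output positions in question, assuming an `[EnforceRange]` typedef (the constant value follows the WebGPU spec; the readonly attribute is hypothetical, added only to show all three positions):

```webidl
typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
typedef [EnforceRange] unsigned long long GPUFenceValue;

[Exposed=Window]
interface GPUBufferUsage {
  const GPUBufferUsageFlags COPY_SRC = 0x0004; // (1) const
};

[Exposed=Window]
interface GPUFence {
  readonly attribute GPUFenceValue lastValue;  // (2) readonly attribute (hypothetical)
  GPUFenceValue getCompletedValue();           // (3) return type
};
```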