leafpetersen opened this issue 6 years ago
cc @munificent @rakudrama @vsmenon @efortuna @natebosch @matanlurey @nex3 for comment.
cc @emshack fyi
To make sure I understand correctly: what does it accomplish to cast to int or String when using Keyed? Does it throw in the call to .cast(json) if the field is not of the right type?
const typed = cast.Keyed<String, dynamic>({
"id": cast.int,
});
Keyed (other naming suggestions welcome) defines a "dependent" map schema. A normal MapCast has a single cast which it applies to all of the values in the input map range. A Keyed map applies different casts to different values in the range of the input map, controlled by the key. So
const typed = cast.Keyed<String, dynamic>({
"id": cast.int,
});
is a cast which yields a Map<String, dynamic>, all of the keys of which are "id" and all of the values of which are integers. I think the point of your question is around the connection between the type arguments to Keyed and the casts in the range of the key map?
You could have written the above as:
const typed = cast.Keyed<String, int>({
"id": cast.int,
});
In which case applying it would give you a map with the same entries, but with reified type Map<String, int>.
However, incompatible types/entries are not allowed:
const typed = cast.Keyed<String, String>({
"id": cast.int, // Static error, since Cast<int> is not a subtype of Cast<String>
});
Basically, the second type parameter to Keyed should be a union type: the union of all of the types in the range of the argument map. Usually that will mean dynamic.
So for example, this schema describes maps which can have integer "id" fields and double "size" fields:
const typed = cast.Keyed<String, num>({
"id": cast.int,
"size": cast.double
});
It will produce a Map<String, num>.
This is a schema which describes maps which can have the same fields as the above, or a String name field:
const typed = cast.Keyed<String, dynamic>({
"id": cast.int,
"size": cast.double,
"name": cast.String,
});
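For concreteness, here is a minimal sketch of applying that last schema, assuming the package is imported as cast (the exact import path is my assumption) and that a mismatched field makes .cast throw, per the eager behavior discussed later in the thread:

import 'dart:convert';
import 'package:cast/cast.dart' as cast; // import path assumed

void main() {
  const schema = cast.Keyed<String, dynamic>({
    "id": cast.int,
    "size": cast.double,
    "name": cast.String,
  });
  final result = schema.cast(json.decode('{"id": 1, "size": 2.5, "name": "widget"}'));
  print(result["id"] + 1); // 2; "id" now carries the reified type int
  // A wrongly typed field should fail here, inside .cast, rather than later at the point of use.
  schema.cast(json.decode('{"id": "oops", "size": 2.5, "name": "widget"}'));
}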
Yeah, I understood the way different keys can have different types; what I am wondering is exactly what value we get out of specifying fields that don't need any special handling.
Compare:
const typed = cast.Keyed<String, dynamic>({
"id": cast.int,
"name": cast.String,
"age": cast.int,
"size": cast.double,
"extra": cast.List(cast.String),
});
var result = typed.cast(data);
var id = result["id"] as int;
var name = result["name"] as String;
var extra = result["extra"] as List<String>;
with
const typed = cast.Keyed<String, dynamic>({
"extra": cast.List(cast.String),
});
var result = typed.cast(data);
var id = result["id"] as int;
var name = result["name"] as String;
var extra = result["extra"] as List<String>;
The first version has a lot of extra lines specifying the cast, and there is no difference at the place where we read values. int and String always already had the correct type and would be fine with an as.
So, are we required to specify all fields we might see? And are we doing so in order for the exception to get thrown during the call to .cast(data) rather than the call to result["id"] as int?
Fair point. I think there's value in being able to lock down the schema so that you catch things with unexpected fields, but there's no reason not to allow a "default" cast which gets applied to any fields not found in the map. Then the question is, which should be the default behavior - allow extra fields (with default cast of cast.any) or lock down the fields?
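A rough sketch of how the two options might read at the call site; the defaultCast parameter name below is purely hypothetical and not part of the package as it stands:

// Locked down: any field not listed in the map would be an error.
final strict = cast.Keyed<String, dynamic>({
  "id": cast.int,
});

// Lenient: unknown fields would fall through to a default cast.
final lenient = cast.Keyed<String, dynamic>({
  "id": cast.int,
}, defaultCast: cast.any); // hypothetical parameter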
Filed #5 and #6
Nice!
Perhaps simply add:
String jsonData = ...;
var result = typed.parse(jsonData);
if that allows us to save memory allocations when there is native JSON support, e.g., invoke V8's JSON.parse and destructively change the resulting object to a typed Dart one.
Does this approach eventually end up at JSON Schema? Is the purpose to fail at parse-time instead of failing when accessed later? (Is that even feasible for, say, a large list?) Is the additional value over List.cast<E> and Map.cast<K,V> that you can more deeply describe the schema?
It appears one has to define types and names in at least 3 different places. In two of those places, the names are stringly-typed. This was an approach in Objective-C (except a type defined its own schema as a static 'property'). It was OK, but not ideal.
I can't help but think this is a bandaid on an underlying language problem. Types are getting tighter, but the language tools to specify types aren't catching up.
Does this approach eventually end up at JSON Schema?
I don't think so, but I could be wrong. The initial goal is to provide a way to do deep casts easily. That is, in Dart 1, you could take a large blob of fully untyped data and simply treat it as typed, and it would just work.
List<Map<int, List<String>>> l = blob_of_untyped_data;
You can't do that in Dart 2, and we see this causing a lot of user confusion. My goal was to try to provide similar convenience. So now you just do:
const type = cast.List(cast.Map(cast.int, cast.List(cast.String)));
var l = type.cast(blob_of_untyped_data);
That is, you basically change the < to (, and add a .cast at the end, and you're done.
Is the purpose to fail at parse-time instead of failing when accessed later?
No. The current implementation is eager, but one question is whether it's important to have a lazy API. It should be easy enough to do in any of a number of different ways.
Is the additional value over List.cast and Map.cast<K,V> that you can more deeply describe the schema?
Yes. I see a lot of people writing (struggling to write) repeated code that looks like:
var typedData = (untyped_blob as List).map((a) => (a as Map).cast<String, int>())
The problem is that because casts are shallow, you either need to do the casts as you unspool the data, or you need to use a variety of techniques to roll your own eager deep casts.
This library provides a single interface to do that.
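For reference, this is roughly what a hand-rolled eager version of the list-of-maps example looks like in plain Dart 2 (deepCast is just an illustrative name); the library collapses this kind of helper into a single schema such as cast.List(cast.Map(cast.String, cast.int)):

// Copy and check eagerly: Map.from validates every entry up front, unlike the
// lazy view returned by Map.cast.
List<Map<String, int>> deepCast(dynamic blob) => (blob as List)
    .map<Map<String, int>>((e) => Map<String, int>.from(e as Map))
    .toList();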
It appears one has to define types and names in at least 3 different places. In two of those places, the names are stringly-typed.
I don't follow this; what do you mean? In the example above, I wrote the "type" exactly once and didn't use any strings?
I'm going to trial this in the open_api package to better understand it and provide better feedback. OpenAPI is a very large and diverse JSON document, and it has been challenging to fit it into Dart 2.
The first thing I've run into is that I have to specify every key. There are top-level values that are Map<String, dynamic>. I am unable to find (and it sounds silly to say) a cast.dynamic or similar.
final schema = cast.Keyed<String, dynamic>({
"swagger": cast.String,
"host": cast.String,
"basePath": cast.String,
"schemes": cast.List(cast.String),
"consumes": cast.List(cast.String),
"produces": cast.List(cast.String),
"security": cast.List(cast.Map(cast.String, cast.List(cast.String))),
"info": /* what goes here? It is a Map<String, dynamic> */
});
edit: There is clearly cast.Keyed<String, dynamic>, but does this mean I have to describe the entire document in one schema?
I have to bounce around a few meetings here shortly, but I've included a peek so far. What I meant by specifying the type/name multiple times should be apparent from below (unless I'm totally doing this wrong). For example, I specify the field List<String> schemes in my type, the key-value pair "schemes": cast.List(cast.String), and the assignment schemes = typed["schemes"]. I've left some comments inline.
// 'APIObject' base class requires 'decode' implementation, allows deeply nested
// 'APIObject's to be parsed via their 'decode' method.
class APIDocument extends APIObject {
String version;
List<String> schemes;
... snip ...
// JSONObject is a MapMixin that has decoding and encoding methods for various types.
// A JSON string is decoded such that each Map is revived as a JSONObject.
void decode(JSONObject object) {
super.decode(object);
final schema = cast.Keyed<String, dynamic>({
"swagger": cast.String,
"host": cast.String,
"basePath": cast.String,
"schemes": cast.List(cast.String),
"consumes": cast.List(cast.String),
"produces": cast.List(cast.String),
"security": cast.List(cast.Map(cast.String, cast.List(cast.String))),
// Nested objects have their own schema that they define
// Is this where I make the argument for inheriting static methods? :)
"info": APIInfo.keyedSchema,
"tags": cast.List(APITag.keyedSchema),
"paths": cast.Map(cast.String, APIPath.keyedSchema),
"responses": cast.Map(cast.String, APIResponse.keyedSchema),
"parameters": cast.Map(cast.String, APIParameter.keyedSchema),
"definitions": cast.Map(cast.String, APISchemaObject.keyedSchema),
"securityDefinitions": cast.Map(cast.String, APISecurityScheme.keyedSchema)
});
final typed = schema.cast(object);
// This looks fairly clean.
version = typed["swagger"];
host = typed["host"];
basePath = typed["basePath"];
schemes = typed["schemes"];
consumes = typed["consumes"];
produces = typed["produces"];
security = typed["security"];
// This is what I need to figure out next. There was previous behavior to handling
// decoding deep collections into their associated Dart types that now has to get
// evaluated.
info = object.decodeObject("info", () => new APIInfo());
tags = object.decodeObjects("tags", () => new APITag());
paths = object.decodeObjectMap("paths", () => new APIPath());
responses = object.decodeObjectMap("responses", () => new APIResponse());
parameters = object.decodeObjectMap("parameters", () => new APIParameter());
definitions =
object.decodeObjectMap("definitions", () => new APISchemaObject());
securityDefinitions = object.decodeObjectMap(
"securityDefinitions", () => new APISecurityScheme());
}
... snip ...
}
I'm going to trial this in the open_api package to better understand it and provide better feedback. OpenAPI is a very large and diverse JSON document, and it has been challenging to fit it into Dart 2.
@joeconwaystk Thanks for looking at this - I was hoping to get this exact kind of feedback. I have no experience with real json applications. Even if this approach doesn't work out for that use case, the process of seeing where it falls down will be helpful for finding the right solution.
I am unable to find (and it sounds silly to say) a cast.dynamic or similar.
Just added this as cast.any, and added the ability to specify a default for Keyed. You should be able to do this now:
final schema = cast.Keyed<String, dynamic>({
"swagger": cast.String,
"host": cast.String,
"basePath": cast.String,
"schemes": cast.List(cast.String),
"consumes": cast.List(cast.String),
"produces": cast.List(cast.String),
"security": cast.List(cast.Map(cast.String, cast.List(cast.String))),
"info": cast.Map(cast.String, cast.any) // Could also probably be just cast.any if you don't want to validate
});
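A quick usage sketch for the schema above, assuming raw holds the JSON source string and dart:convert is imported:

final typed = schema.cast(json.decode(raw));
List<String> schemes = typed["schemes"];   // already reified as List<String>
Map<String, dynamic> info = typed["info"]; // only validated as a map of strings to anything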
@joeconwaystk
I don't know what all of the types involved in your example above are, but below is a quick take on how you might structure it. A couple of comments:
- It might be worth allowing Keyed schemas to be merged, to support the "inheritance" behavior more cleanly.
- I replaced the JSONObject type with Map<String, dynamic>, not sure if it was important to your example?
class APIObject {
final String apiObjectField;
static const Map<String, cast.Cast<dynamic>> fieldDescriptors = {
"apiObjectField": cast.String,
};
static cast.Cast<APIObject> schema =
cast.Apply((a) => APIObject.fromJson(a), cast.Keyed(fieldDescriptors));
APIObject.fromJson(Map<String, dynamic> fields)
: apiObjectField = fields["apiObjectField"];
static APIObject decode(Map<String, dynamic> object) => schema.cast(object);
}
class APITag {
static cast.Cast<APITag> schema =
cast.Apply((a) => APITag.fromJson(a), cast.int);
APITag.fromJson(int x);
}
class APIInfo {
static cast.Cast<APIInfo> schema =
cast.Apply((a) => APIInfo.fromJson(a), cast.int);
APIInfo.fromJson(int x);
}
class APIDocument extends APIObject {
String version;
String host;
String basePath;
List<String> schemes;
List<String> consumes;
List<String> produces;
List<Map<String, List<String>>> security;
APIInfo info;
List<APITag> tags;
static Map<String, cast.Cast<dynamic>> fieldDescriptors = {
"swagger": cast.String,
"host": cast.String,
"basePath": cast.String,
"schemes": cast.List(cast.String),
"consumes": cast.List(cast.String),
"produces": cast.List(cast.String),
"security": cast.List(cast.Map(cast.String, cast.List(cast.String))),
"info": APIInfo.schema,
"tags": cast.List(APITag.schema),
}..addAll(APIObject.fieldDescriptors);
static cast.Cast<APIDocument> schema =
cast.Apply((a) => APIDocument.fromJson(a), cast.Keyed(fieldDescriptors));
APIDocument.fromJson(Map<String, dynamic> fields)
: version = fields["swagger"],
host = fields["host"],
basePath = fields["basePath"],
schemes = fields["schemes"],
consumes = fields["consumes"],
produces = fields["produces"],
security = fields["security"],
info = fields["info"],
tags = fields["tags"],
super.fromJson(fields);
static APIDocument decode(Map<String, dynamic> object) => schema.cast(object);
}
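A usage sketch for the restructuring above, assuming rawJson holds the OpenAPI document as a JSON string and dart:convert is imported:

final doc = APIDocument.decode(json.decode(rawJson) as Map<String, dynamic>);
print(doc.schemes); // List<String>, already checked by the schema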
This seems super useful! It looks like a great tool for doing the sort of inline List<Map<int, List<String>>> casts you're talking about, and I bet it could be expanded beyond that as well. That said, I'm worried that it's not going to be powerful enough to work for the more advanced cases I've run into. The biggest difficulties I see on the horizon are:
Handling source spans. This doesn't matter when dealing with JSON, but it's super important for YAML to produce SourceSpanFormatExceptions when casts fail so that users know where in their file they made a mistake. It's also important that, even when casts succeed, source spans for the parsed values are available so that they can be stored and used if errors occur later on.
The way the yaml package handles this is to return YamlMaps and YamlLists that expose SourceSpan span fields and fields such as List<YamlNode> nodes. These nodes fields contain the same values as the original collection, except that non-collections are wrapped in the YamlScalar class, which has a SourceSpan span field.
As written, the cast library would ignore these spans and just return their underlying values, which means users would have no access to their source spans.
The Keyed cast relies heavily on implicit casts from dynamic to work gracefully. The trajectory of Dart has generally been towards adding warnings for implicit behavior like this, which will make this pattern a lot more painful. Even if we decide that's not an issue, this behavior creates a class of potential errors which only appear at runtime. For example, in your code sample above, if I changed List<String> schemes to List<int> schemes, it would pass static analysis and fail at runtime, not because the untyped data was wrong but because the code's types were incorrect.
I like that this is extensible, but I'm worried that the barrier to entry for creating custom casts is too high. In the configuration-parsing code I've written, it's very common to want to express things like "cast to a string and parse it as a Version", and the code necessary to do that is pretty terse. Having to go through the boilerplate of creating a new class for every piece of behavior I want to factor out would be a heavy cost.
It's worth looking at matcher as an example. It requires new matchers to be defined as classes, and as a consequence most code just uses the predicate() matcher instead of defining custom matchers, even though predicate() provides much worse output when it fails.
OK, I have written code to use this package to parse OpenAPI specs. OpenAPI is a good use case because a document can have references to other objects and those references can be cyclical; these are complex behaviors. Using this library, it now parses both Kubernetes' and Stripe's specs (both are big and complex).
Some of the things that weren't so great:
- Having to specify every key in a Keyed object.
Like @nex3, I could not see a beginner being able to do this, or even know to do this. They'd naively try to assign, then try casting with as, then hopefully find something on StackOverflow. Decoding JSON from an HTTP request would likely be one of the first things a developer trying out Dart will try. I think this is something the language will have to fix (can the as keyword do the right thing?).
I forked the library; the fork relaxes Keyed requirements and is here. I created a new library that has a base class for encoding/decoding. Subclasses override encode/decode, and optionally provide a cast map. Its complexity is due to cyclical $refs, and it is here. It is very similar to the Swift library we use for the same behavior.
An example usage looks like this:
void main() {
final data = json.decode(...);
final p = new Parent()
..decode(KeyedArchive.unarchive(data));
}
class Child extends Coding { ... snip ... }
class Parent extends Coding {
List<String> requiredKeys;
String name;
List<Child> children;
@override
Map<String, cast.Cast> get castMap => {
"requiredKeys": cast.List(cast.String),
};
@override
void decode(KeyedArchive object) {
super.decode(object);
name = object.decode("name");
requiredKeys = object.decode("requiredKeys");
children = object.decodeObjectList("children", () => Child());
}
@override
void encode(KeyedArchive object) {
... snip ...
}
}
Note that only the parameterized types need to be cast. If the type is simple, no casting is needed. If the type is complex enough that it needs its own Coding type, no casting is needed.
Thanks for looking at this, Leaf. Generally speaking, I like the initial direction. I think we'll want to work on reducing redundancy (as mentioned by Joe and Nate), and I second Natalie's suggestions to look at, say matcher. Ultimately, are you envisioning incorporating this into the language proper or keeping it as a third-party package?
@nex3
Handling source spans.
Ack. I need to think about parsing. One direction I can imagine this going is that something like yaml could provide its own YamlMap, YamlList, etc. cast/parsers by extending things from cast. But I haven't pushed on what a parsing API would look like yet.
The Keyed cast relies heavily on implicit casts from dynamic to work gracefully. .... Even if we decide that's not an issue, this behavior creates a class of potential errors which only appear at runtime
I don't know how to get around runtime failures until/when we get union types. We can certainly set it up to avoid implicit casts by using something like a Reader class to wrap dynamic values, which provides methods like .asMap, .asList, etc.
I like that this is extensible, but I'm worried that the barrier to entry for creating custom casts is too high. In the configuration-parsing code I've written, it's very common to want to express things like "cast to a string and parse it as a Version",
Just to be sure I understand, is this not terse enough?
final parseVersion = cast.Apply((a) => Version(a), cast.String);
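And at the use site (a sketch; untypedConfig is just an assumed name for some decoded JSON map):

final Version version = parseVersion.cast(untypedConfig["version"]);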
@joeconwaystk Thanks for pushing on this, I'll take a look.
@efortuna
I second Natalie's suggestions to look at, say matcher.
I'll take a look. Is there a specific takeaway to look for, or just general API advice? I thought @nex3 was basically pointing at it as a negative example?
Ultimately, are you envisioning incorporating this into the language proper or keeping it as a third-party package?
That depends on how useful we can make this. My current thinking is that:
Ack. I need to think about parsing. One direction I can imagine this going is that something like yaml could provide its own YamlMap, YamlList, etc. cast/parsers by extending things from cast. But I haven't pushed on what a parsing API would look like yet.
This means that you can't re-use code for parsing YAML and non-YAML (which I believe build does currently) without sacrificing usability in the YAML case, though. That's a big reason why I like the idea of having the API be method-level rather than class-level: it lets classes override the existing behavior to provide behavior that makes sense in their situations.
I don't know how to get around runtime failures until/when we get union types. We can certainly set it up to avoid implicit casts by using something like a Reader class to wrap dynamic values, which provides methods like .asMap, .asList, etc.
This kind of Reader class is exactly the kind of design I had in mind originally :smiley:.
Just to be sure I understand, is this not terse enough?
final parseVersion = cast.Apply((a) => Version(a), cast.String);
That's definitely a step in the right direction. It does mean that if I want to define my own custom casts, they look different than built-in casts, though. Maybe it would be better to hide the class names for built-ins and only expose them as fields or methods?
That's a big reason why I like the idea of having the API be method-level rather than class-level:
I'm not sure I understand this, can you expand?
This kind of Reader class is exactly the kind of design I had in mind originally
I know. :) I filed this issue to explore it: https://github.com/leafpetersen/cast/issues/2, but I haven't gotten back to it yet. I think the idea would be a similar schema, except that Readable<T>.read gives you back a Reader<T> instead of a T. And then Reader.asMap<K, V> gives you back a Map<K, Reader<V>> maybe?
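A very rough sketch of what that shape might look like; all of these names are hypothetical and nothing like this exists in the package today:

// Lazy counterpart to Cast<T>: read() wraps the value instead of converting it eagerly.
abstract class Readable<T> {
  Reader<T> read(dynamic value);
}

// A Reader defers checks until you ask for a particular view of the value.
abstract class Reader<T> {
  T get value;                      // the underlying value, checked as T
  Map<K, Reader<V>> asMap<K, V>();  // each entry wrapped in its own Reader
  List<Reader<E>> asList<E>();      // each element wrapped in its own Reader
}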
Maybe it would be better to hide the class names for built-ins and only expose them as fields or methods?
Again, not totally sure I follow. What do you imagine the client side code looking like?
I'm not sure I understand this, can you expand?
I just mean like the Reader API you're describing: something where the user interacts primarily by calling methods on an object (that could potentially be overridden per-object) rather than invoking top-level members.
I think the idea would be a similar schema, except that Readable<T>.read gives you back a Reader<T> instead of a T. And then Reader.asMap<K, V> gives you back a Map<K, Reader<V>> maybe?
This time I'm not following. Could you give a quick example sketch?
What do you imagine the client side code looking like?
Just that instead of writing cast.Apply(...), you'd write cast.apply(...). Calling a function rather than invoking a constructor, because it's easier for users to define functions than classes.
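A sketch of what such a function-level wrapper could look like, assuming Apply's constructor takes the transform first and the inner cast second, as in the examples earlier in the thread (the lower-case names are hypothetical):

cast.Cast<T> apply<S, T>(T Function(S) transform, cast.Cast<S> inner) =>
    cast.Apply(transform, inner);

// Client code then calls a function instead of a constructor:
//   final parseVersion = apply((String a) => Version(a), cast.String);

One trade-off is that a function call can't appear in a const context the way the const constructor form can.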
cc @mjohnsullivan @filiph for input on the original plan for a system to cast dynamic data like json.
Leaf:
I'll take a look. Is there a specific takeaway to look for, or just general API advice? I thought @nex3 was basically pointing at it as a negative example?
Ah, I misread. I was just intending to say I defer to @nex3 as she has much more experience about these things than me.
This may not be terse enough to provide a good general json decoding API
Got it. I hope we can make it terse enough to be a part of the language!
Entirely curious: What does Kotlin/Swift/C# do here?
... I realize none of them (really) compile to JavaScript, so maybe that's the odd-man out.
In Swift, you declare a class to implement Codable, and declare an enum that maps field names to key names inside that class. You provide this type as an argument to the JSON decoding method, and you get your object graph as a result. There is some magic that happens that I am unfamiliar with, but someone on our team is very familiar with it if you are interested in more detail. This is an official Swift thing.
In Kotlin, there are more ways to do it, but our team (and I'd argue most) are using Gson - which is just annotations + reflection.
Thanks @joeconwaystk. I read a bit about Codable: https://medium.com/xcblog/painless-json-parsing-with-swift-codable-2c0beaeb21c1
... seems close to what @leafpetersen is proposing, but with language support (basically a variant of data classes here).
My own thoughts in the same direction were based on parsers (probably unsurprising), so something like:
var parseStruct = parseMap<dynamic>({
"id": parseInt,
"name": parseString,
"age": parseInt,
"size": parseDouble,
"extra" parseList(parseString), // infers parseList<String> from the return type of parseString.
});
var result = parseStruct(string); // Map<String, dynamic>
int id = result["id"] as int;
String name = result["name"];
List<String> extra = result["extra"];
I even had a proof-of-concept implementation somewhere ... found it: https://dartpad.dartlang.org/74834f1f6f1d941c33dcebb61cdfd9ac
It doesn't generalize to structures other than JSON, but it allows a slightly more efficient parsing when you know what to expect, and it doesn't create intermediate data structures.
@lrhn My thinking was to provide a .parse method on Cast<T> that basically does what you propose above. So maybe it should be called Schema instead of Cast?
@leafpetersen I'm exploring some other ways of looking at this. What kind of comparisons can I do on type arguments? And is there anything I can do to decompose them at runtime?
For example, I'd like to do something like:
T decode<T>(String key) {
if (T is a List of some kind) {
return _decodeList<U which is the type argument to T>(key);
} else if (T is a Map of some kind) {
return _decodeMap<K, V>(key);
}
return _inner[key];
}
edit: What's somewhat interesting to me is that variable is List<int> is syntactically valid, but Type == List<int> is not. Assuming that could become valid syntax, would Type == List<Null> catch all List<T> types regardless of T?
The reason variable is List<int> is valid is that the parser knows that a type must come after the is, but in Type == List<int> it does not, which makes parsing harder. We have already accepted that generic method invocations are syntactically ambiguous, so there is a chance we could allow Type == List<int> as well (unless parsing it is even harder than invocations because there is no parenthesis to recognize).
If it were valid syntax, then Type == List<Null> would likely only be true if the type is exactly List<Null>, the same way this comparison behaves today:
print(<Null>[].runtimeType == <int>[].runtimeType); // false.
Type objects are basically useless for actual comparisons.
What you can do with type variables (but not Type objects) today is:
bool isSubtype<Sub, Super>() => <Sub>[] is List<Super>;
(too expensive to be really practical, but valid).
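For example:

void main() {
  print(isSubtype<int, num>());              // true
  print(isSubtype<List<int>, List<num>>());  // true, via covariant generics
  print(isSubtype<num, int>());              // false
}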
Yes, as @lrhn says, there's currently no good way to programmatically reflect on type arguments. We've considered this and may do something in the future, but we need to be sure that we don't negatively impact generated code size and performance.
At what point would it make sense to bake this more into the language? E.g., a compile-time "reflective" version like:
class MyClass extends Struct {
int id;
String name;
int age;
double size;
List<String> extra;
}
T parseStruct<T extends Struct>(String string) { ... } // Magic method
var result = parseStruct<MyClass>(string);
int id = result.id;
String name = result.name;
List<String> extra = result.extra;
I'm abusing generics with parseStruct. I'm imagining the compiler, given the above, would actually generate something like @leafpetersen's or @lrhn's earlier handwritten code. Structs would be heavily restricted (e.g., compile-time error on 'bad' field types).
It is interesting to see how casting is done with mirrors (attached below). There is minimal reflective behavior used, and other than newInstance, the reflective behavior had already been proposed as an extension to Type in Dart 2, IIRC. Does adding the ability to instantiate objects from a Type, and going ahead with the planned extensions to Type, serve as at least a temporary measure before something like the above becomes available?
import 'dart:mirrors';

dynamic runtimeCast(dynamic object, TypeMirror intoType) {
if (intoType.reflectedType == dynamic) {
return object;
}
final objectType = reflect(object).type;
if (objectType.isAssignableTo(intoType)) {
return object;
}
if (intoType.isSubtypeOf(reflectType(List))) {
if (object is! List) {
throw new CastError();
}
final elementType = intoType.typeArguments.first;
final elements = (object as List).map((e) => runtimeCast(e, elementType));
return (intoType as ClassMirror).newInstance(#from, [elements]).reflectee;
} else if (intoType.isSubtypeOf(reflectType(Map, [String, dynamic]))) {
if (object is! Map<String, dynamic>) {
throw new CastError();
}
final Map<String, dynamic> output = (intoType as ClassMirror).newInstance(const Symbol(""), []).reflectee;
final valueType = intoType.typeArguments.last;
(object as Map<String, dynamic>).forEach((key, val) {
output[key] = runtimeCast(val, valueType);
});
return output;
}
throw new CastError();
}
@joeconwaystk The Type extensions were put on hold due to concerns on code size / tree shakability as well. The compiler really wants to know what object and intoType (in your example) can possibly be at compile time, so it can prune / rename unused fields, etc.
BTW, with all these prototypes, please consider measuring dart2js output size. I think a great solution here would include:
@kevmoo
I'd be wary about adding a plain "open type" functionality to the language at this point. That's mainly because if we ever add scoped extension methods, those should have access to the type argument of the type they apply to. So, something like:
R List<E>.open<R>(R Function<X>(List<X> self) callback) => callback<E>(this);
(defining a generic extension method on List<E> with access to E) would give us that functionality anyway.
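For illustration, usage of such a hypothetical extension method might look like this (none of this syntax exists today):

List<dynamic> xs = <int>[1, 2, 3];  // the static type no longer mentions int
final copy = xs.open(<X>(List<X> self) => List<X>.of(self));
print(copy is List<int>);           // true: X is bound to the reified runtime argument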
This is different from, e.g., C# and Java, because Dart allows covariant generics and reifies the type parameters. That combination allows us to actually have a different type variable than the static type, and have access to it at runtime. I hope that doesn't make extension methods intractable, but it does add this extra complication - which might also be a feature.
We should chat about how the json_serializable approach applies here, too.
This is an attempt to flesh out an idea that I've kicked around with a number of people to address the difficulty of interacting with untyped structured data in Dart 2. The canonical example is interacting with json, where the result of parsing is a large untyped blob of data that the user wants to be able to interact with in a typed way. This also arises with RPC calls in flutter, and also has come up with yaml.
See README for some examples of how this API would be used, as well as the test files.
Currently all that this provides is casting, but if the API turns out to work well, I would like us to explore using the schema to drive json and yaml parsing directly. That is, instead of parsing into untyped data and then casting, consider using the schema in the parser/RPC call/whatever to generate code with the expected reified type directly.
This meta issue is for discussion of the general idea of the API, as well as specifics around naming, missing functionality, etc.
I'll file a few specific sub-issues for additional discussions around the following: