dmouton closed this issue 6 years ago
I think method overloading can be tricky in Haxe, especially when having optional parameters. Let's say you have the following method:
function foo(?a:Int, ?b:String) {
  trace("foo", a, b);
}
If you call foo("hello"), that is perfectly valid and will trace foo, null, hello. But what if we do this:
function foo(?a:String, ?b:Int) trace("foo one:", a, b);
function foo(?a:Int, ?b:String) trace("foo two:", a, b);
Let's say, with this example, you call foo(1) or foo("hello"): what should be traced, and why?
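To make the ambiguity concrete, here is a hypothetical sketch (Haxe does not actually allow two methods with the same name, which is the point): with optional-argument skipping, both signatures can accept a single Int or a single String.

```haxe
// Hypothetical: not valid Haxe today. With optional-argument skipping,
// BOTH signatures match either single-argument call:
function foo(?a:String, ?b:Int) trace("foo one:", a, b);
function foo(?a:Int, ?b:String) trace("foo two:", a, b);

// foo(1):       "foo one" via b:Int (skipping a)? Or "foo two" via a:Int?
// foo("hello"): "foo one" via a:String? Or "foo two" via b:String (skipping a)?
```

Either resolution rule would surprise half of the readers, which is why the combination of overloading and argument skipping is tricky.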
For me there are two patterns when using method overloading.
The first is "argument conversion":
function print(msg:String):Void;
function print(msg:Array<String>):Void;
function print(msg:Float):Void;
The second is "representation conversion":
function show(line:Int):Void;
function show(start:Int, end:Int):Void;
Both boil down to making your method "generic": internally the arguments are converted to the same type. For instance, in my first example the array/float is converted to a string, and in the second the start/end positions of the line are used.
So you can easily get by with multiple uniquely named functions, like print_array or show_line. It's a little less convenient, but nothing major; it doesn't bother me as much as it did when I started using Haxe.
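A minimal sketch of that naming approach (the class and function names here are my own invention, not from the proposal): each uniquely named variant converts its argument and delegates to one core implementation.

```haxe
class Printer {
  // Core implementation: everything is converted to a String first.
  public static function print(msg:String):Void {
    trace(msg);
  }

  // Uniquely named variants instead of overloads.
  public static function printArray(msg:Array<String>):Void {
    print(msg.join(", "));
  }

  public static function printFloat(msg:Float):Void {
    print(Std.string(msg));
  }
}
```

The call sites are slightly more verbose (printFloat(1.5) instead of print(1.5)), but each name documents exactly which conversion happens.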
One place where it'd be nice is constructors,
var a = Test.fromString("a");
var b = new Test(1);
because those two syntaxes are just too dissimilar.
But then again, it's not a show stopper.
My opinion on method overloading changes every time I come into contact with it, but in general, nowadays I'm more against it than for it.
The reason is that it looks quite nice in the C#/Java standard APIs, but in my practice (I use C# every day at work) it's often abused and makes code readability worse, so I would rather have several methods with good names than a confusing overloaded mess.
That said, I'm not against having method overloading in Haxe either, because it's useful SOMETIMES. Last time I needed it for something like:
function request(type:RequestType<Void>, callback:Void->Void); // no arg callback
function request<T>(type:RequestType<T>, callback:T->Void); // one arg callback
Implementation-wise there are a lot of questions regarding Dynamic and structural typing, as well as how to generate overloaded methods in the output. So a good starting point would be to implement @Simn's idea of extern-inline overloading, so there's no overloaded method in runtime and it's always inlined to call, e.g.:
@:overload @:extern inline function add(i:Int) addInt(i);
@:overload @:extern inline function add(s:String) addString(s);
I think having this would satisfy the reasonable need for method overloading.
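A sketch of how that might look in a full class (the Adder class and its fields are my own illustration; the @:overload @:extern inline combination is the proposed mechanism, not a feature that exists today):

```haxe
class Adder {
  var ints = new Array<Int>();
  var strings = new Array<String>();

  function addInt(i:Int) ints.push(i);
  function addString(s:String) strings.push(s);

  // Proposed: each call to add() is resolved at compile time and inlined,
  // so the generated output contains only addInt/addString calls and
  // no overloaded method exists at runtime.
  @:overload @:extern inline function add(i:Int) addInt(i);
  @:overload @:extern inline function add(s:String) addString(s);
}
```

Because nothing overloaded survives into the output, reflection and code generation are unaffected.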
Regarding @markknol's concern, I think it's fair to just give "ambiguous definition" errors in cases like that. I think that's what C# does.
One more thing regarding overloading that I really want to be implemented is proper overload support for externs as described in https://github.com/HaxeFoundation/haxe/issues/5201.
I think we should deprecate optional argument skipping anyway.
There is no impact on existing code. It would affect macro users (type building, etc.).
Not sure about that. On the syntax level (e.g. in @:build macros) we already allow fields with the same name, and on the typed level there's also an overloads property in the ClassField structure.
I don't have a vote, but I would like to express that I am not much in favor of this proposal. The main reason is that abstracts (plus optional arguments) can cover most, if not all, method overloading use cases. And method overloading would probably suffer from the same deficits that abstracts have, for example resolving the "correct" overload (or cast, in the abstract's case).
The most common usage of method overloading is to allow variations in argument types/numbers, where all of the variations are eventually directed to a "core" implementation. In that case abstracts + optional arguments should do, and the run-time cost would be more or less the same as with native method overloading.
If each overload has its very own implementation, on one hand there will be no overhead of redirecting to the "core" function. But on the other hand I would challenge that those overloads should be individual functions and named differently.
The motivation of the proposal doesn't really convince me. While Java and C# have method overloading, which is handy, Haxe has abstracts, which are on par, if not superior. Working with externs is already supported by @:overload. If anything can be done better, it is to change the syntax of @:overload from putting the definition in the meta parameter to tagging instance methods directly with the meta.
As for the downside of optional arguments, namely that one needs to check for null at runtime, I think that can be solved by allowing arbitrary expressions as their defaults,
e.g. function foo(bar:Int = getDefaultInt())
This works because optional arguments are a compile-time feature, so the compiler just needs to insert the specified expression when the argument is missing. (Someone may start to think about null safety here, but that is out of scope right now.)
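To illustrate the suggested desugaring (a sketch: expression defaults are a proposed feature, not valid Haxe at the time of writing, and getDefaultInt is a made-up helper):

```haxe
// Today: the default must be a constant, so a runtime null check is needed.
function foo(?bar:Int) {
  if (bar == null) bar = getDefaultInt();
  // ...
}

// Proposed: the compiler inserts the default expression at each call site,
// so foo() would compile to foo(getDefaultInt()) and no null check remains.
function foo(bar:Int = getDefaultInt()) {
  // ...
}
```

Since the substitution happens at the call site during typing, static targets would no longer need to box the argument as Null&lt;Int&gt; just to represent "missing".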
Not sure about that. On the syntax level (e.g. in @:build macros) we already allow fields with the same name, and on the typed level there's also an overloads property in the ClassField structure.
That depends on how we interpret "impact". To me it means that old code will work as-is without any changes. But some macro libraries may need additional logic to handle duplicate fields, otherwise funny things may happen (e.g. only one foo() is processed while there are a few overloads). I am not sure as well; I don't have a concrete case in mind right now.
I really agree with forcing this type of overload to be inline / "compile-time stuff", because otherwise it would quite certainly mess with reflection.
Regarding this example:
function request(type:RequestType<Void>, callback:Void->Void); // no arg callback
function request<T>(type:RequestType<T>, callback:T->Void); // one arg callback
I think this particular case could (should?) be solved with an optional / default type parameter, something like class RequestType&lt;T=Void&gt;.
I strongly agree with deprecating optional argument skipping / not using optional arguments for "multitype" behavior, because it's really extremely error-prone.
For instance, those working in JS might think that with function foo(?bar:Int, ?toto:String), since you can call foo("test") in Haxe, you should be able to do the same in JS or via reflection, which is misleading.
Supporting known APIs (such as WebGL and HTML5 Typed Arrays) is difficult with method overloading, and would be impossible without optional argument skipping.
There is already an overload syntax for externs. I believe that Haxe abstracts should support overloading.
Understandably, there are APIs (such as Reflect) which would struggle with method overloading, but if it is supported by abstracts, we can provide the sugar we need to support typed arrays and WebGL without complicated runtime features.
I don't mind supporting method overloading for externs. We already have the code in place for Java/C# anyway, so if that can be factored out, why not.
I'd be interested in @waneck's opinion on the matter.
If you want this for non-extern types then the answer is "not gonna happen".
What about extern methods on non-extern types?
Not sure about that, if inline is forced anyway then it might be an option.
Yeah, I thought that was your idea regarding non-extern overloads that would nicely avoid reflection/code-generation issues, and I personally think it's a good solution for a language like Haxe: compile-time dispatch for known types. Combined with macros, people could implement non-inline versions with custom code generation too.
Yes, I've always thought this would be a nice approach, but then @waneck tells me horror stories about the overload algorithm. That's why I'm interested in his opinion on this.
There is nothing really scary about method overloading. We already have a nice overload resolution algorithm for Java and C# (with the @:overload meta). The general idea rests on what is "more specific". For example, if you have class A extends Base and two functions that take either A or Base, passing an A makes the algorithm prefer the function that takes the A type. Sometimes some functions may be equally specific (e.g. one is preferred for one argument but the other is preferred for another argument). In this case, we display an error, and the user must explicitly type the arguments he/she wants to use. There are also some specificities wrt Dynamic: converting from Dynamic to any other type is always less preferred, so if you're passing a Dynamic, a function that takes a Dynamic is preferred. That makes sense if you think of Dynamic as being like a base Object type.
There is also a problem with relation to monomorphs. We don't want monomorphs to bind on the first overload check. If I'm not mistaken, when testing the overload resolution, monomorphs are currently treated as Dynamics. I think a better approach would be to mark all monomorphs bound, and then reset them for every overload try.
Also, the current algorithm does not deal with overload resolution for anonymous and function types. That's because it is mainly used to interact with native Java/C# code, which doesn't have these concepts. I don't think it's too difficult to come up with an algorithm for overload resolution in these cases (following the "prefer the most specific argument" convention), but we need to be careful when defining some of the rules. For example, I think we should treat anonymous types like we treat function arguments themselves, in order to deal with each field separately (how specific it is, whether there are missing optional arguments, or whether there are optional arguments that are not missing). I don't think it's too hard to account for that, but we need to think about what is preferred in each of these cases.
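The "more specific" rule can be sketched with today's extern @:overload syntax (the class and method names here are illustrative, not from any real API):

```haxe
class Base {}
class A extends Base {}

// Extern-style overloads, as currently supported for Java/C# interop.
extern class Printer {
  @:overload(function(v:Base):Void {})
  function print(v:A):Void;
}

// Resolution sketch:
// printer.print(new A());    -- prefers the A overload (more specific)
// printer.print(new Base()); -- only the Base overload applies
// passing a Dynamic value    -- Dynamic conversion is always least preferred
```

When two overloads are equally specific for a given call, the compiler reports an error and the user has to annotate the argument types explicitly.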
@nadako That particular example can be solved by using abstract (e.g. as in tink_core):
abstract Callback<T>(T -> Void) from T -> Void {
  public inline function new(f: T -> Void) {
    this = f;
  }

  public inline function invoke(data: T): Void {
    return this(data);
  }

  @:from public static inline function fromNiladic<A>(f: Void -> Void): Callback<A> {
    return new Callback(function (_) { return f(); });
  }
}
function request<T>(type: RequestType<T>, callback: Callback<T>);
When using the request method, you can pass in ordinary functions:
foo.request(type, function (t) { ... });
foo.request(type, function () { ... });
abstract gives a limited amount of overloading, but it requires all of the overloads to be converted into a base type or interface (in this case the base type is T -> Void).
Note: because it is an abstract and uses inline, the only performance penalty is when it has to convert from Void -> Void to T -> Void; otherwise the performance is exactly the same as with regular functions. In particular, using T -> Void has exactly the same performance as using plain functions.
There are a couple of other useful patterns I've found:
Using abstract with an interface:
interface IFoo {
function foo(): Void;
}
abstract Foo(IFoo) from IFoo to IFoo {
// define various @:from here
}
Using abstract with a structure type:
private typedef IFoo = {
function foo(): Void;
}
abstract Foo(IFoo) from IFoo to IFoo {
// define various @:from here
}
This allows you to define utility functions or implicit conversions which work with a wide variety of types.
This doesn't completely solve the problem of overloads, but it does give a lot of flexibility.
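For instance, the "// define various @:from here" slot from the interface version might be filled in like this (fromFunction and the FunctionFoo adapter class are hypothetical names of my own, just to show the shape of a @:from conversion):

```haxe
interface IFoo {
  function foo():Void;
}

abstract Foo(IFoo) from IFoo to IFoo {
  // Example @:from: implicitly adapt a plain Void->Void function into a Foo.
  @:from public static function fromFunction(f:Void -> Void):Foo {
    return new FunctionFoo(f);
  }
}

// Hypothetical adapter class used by the conversion above.
private class FunctionFoo implements IFoo {
  var f:Void -> Void;
  public function new(f:Void -> Void) this.f = f;
  public function foo():Void f();
}
```

With this in place, any function taking a Foo also accepts a bare Void -> Void, which is the same "limited overloading" trick as the Callback example above.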
@dmouton If this proposal is still relevant, then please update it with details on the questions raised in the comments.
We are going to look into overloading support for Haxe 4.1. We decided to close this proposal because it's too imprecise. I'll open a new one or deal with this in an issue.