Open stanch opened 11 years ago
What about

```scala
object M {
  class B
}

{
  @foo class X
  import M._
  object X extends B
}
```
What about it? In that case you'd need to move both the object and the import before the annotated class (provided that annotations can see their lexical scope).
Good point, but there are two problems with that: 1) relocation of imports might change the meaning of names; 2) shuffling definitions might confuse the forward reference checker.
For more details about 2, see the Scala Language Specification, page 45:

> The scope of a name introduced by a declaration or definition is the whole statement sequence containing the binding. However, there is a restriction on forward references in blocks...
I'm not sure why this is a problem. We merely impose restrictions on how macro annotations can be used, and then it's up to the programmer to arrange imports/definitions properly and resolve possible problems. Neither the macro nor the compiler should do that, of course.
Ah, by "you" you meant the programmer, not the compiler, right? If yes, then what should the compiler say about the aforementioned snippet? Give a compilation error?
Sorry for the confusion, by "you" I meant the programmer. My initial question was whether it is possible not to expand both companions simultaneously when one of them is macro-annotated, but rather to stick to moving the expansion point. For this snippet a compilation error might be too harsh, because the `@foo` macro might not affect `object X` at all, in which case the whole code is safe. But then again, we can't determine that in the general case, so a compilation error it is. The error message could say that the unannotated companion should precede the annotated one. This also implies that companions can't both be macro-annotated at the same time, but that is probably an extremely rare use case anyway.
Sorry, I’m late to the party, but here’s a random thought: we could make a distinction between “whitebox” and “blackbox” macro-annotations. “blackbox” ones should promise that they do not mess with the declaration names and do not add or remove declarations. Now, everything should be visible for the typechecker, however, referencing whitebox-annotated stuff should produce an error. The tricky thing is of course how to restrict blackbox behavior without having to expand annotations. To the problems:
- 1b seems fine, that's the price to pay for forward referencing;
- 1c is gone;
- 2d is now not that bad, since no other annotation will be able to reference `class X` or `object X`;
- 2c is tough though :)
Also I think forward-referencing inside blocks/templates is bad and those who do it should feel bad!
Well, open recursion is what you get when you sign up for OOP :)
Can blackbox annotations change signatures of the annottees without changing their names? Could you also give an example of a code snippet that "references whitebox-annotated stuff" and an error that it produces?
For `ClassDef`s and `ModuleDef`s, this would mean changing the type signatures of some of the members. The original type of the annottee, however, should not be available without performing the expansion. Thus, typechecking a blackbox-annotated definition could trigger a chain of expansions of fellow annottees, but they would all be of a blackbox kind (see 2), so what can go wrong :)

```scala
@black val x = 8 + "Y.z"
@white object Y { val z: Int = 4 }
```

where `@black` tries to eval `"Y.z"` and consequently typechecks `Y.z`. At this point, the `Context` should produce a compile-time error saying `Illegal reference to a whitebox-annotated member "Y" in current scope`.
What about a compiler plugin then? :) (I am no expert on these)
What about a compiler plugin?
@xeno-by I meant doing `scala-workflow` as one.
Without dirty hacks, compiler plugins can't change how the typer works (with dirty hacks, that's going to be roughly what an annotation-based solution is, I think). Analyzer plugins can do that, but they can only postprocess typechecking results, not change typechecking completely, like we need to do here.
Hey everyone,
I just wanted to ask whether there is some progress regarding the scoping of the `typeCheck` method. I ran into the following use case:

I want to analyse the type of a wrapped class (using the pimp-my-library pattern) in order to generate some methods.

```scala
implicit class FooWrapper(@myMacro self: Foo) {}
```

Currently I use the `c.typeCheck(q"(null.asInstanceOf[$tpt])").tpe` trick to get hold of the `Type` of the wrapped class, but this works neither when `Foo` is a type alias nor when it is type-parameterized.
@b-studios So far there hasn't been much progress unfortunately - this is quite a tricky issue. Could you elaborate on your use case (a standalone github project would be ideal), so that we could possibly come up with a workaround?
@xeno-by The use case in short: I use the `Type` of the annotated parameter (`Foo` in the above example) to search for Java-idiomatic getter and setter methods (`getBar`, `setBar`) in order to generate Scala-idiomatic ones (`bar`, `bar_=`).
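For readers skimming the thread, the intended effect of the annotation is roughly equivalent to writing the wrapper by hand. A minimal sketch (the `Foo`/`bar` names are illustrative, not from the actual project):

```scala
// A Java-idiomatic bean with getBar/setBar:
class Foo {
  private var _bar: Int = 0
  def getBar: Int = _bar
  def setBar(v: Int): Unit = { _bar = v }
}

// Hand-written equivalent of what the macro annotation aims to generate,
// i.e. Scala-idiomatic accessors delegating to the Java-style ones:
implicit class FooWrapper(val self: Foo) {
  def bar: Int = self.getBar
  def bar_=(v: Int): Unit = self.setBar(v)
}
```

With this in scope, `foo.bar = 3; foo.bar` reads like a plain Scala property while still going through `setBar`/`getBar`.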
I also published the code on github. In this test file line 81ff. are the examples which are currently not working. The type of the wrapped class is being resolved using Utils.toType.
Thanks a lot
@b-studios Thanks for taking time to publish the reproduction! I'll try to take a look today/tomorrow.
@xeno-by In order to save you some time I extracted the problem inside of a new branch. This way it is only two short files to take a look at: macrodefinition and usage
@b-studios The first problem is indeed https://github.com/scalamacros/paradise/issues/14, and you'll have to reformulate the affected code at the moment. How much of a blocker is that to you?
The second problem, I think, could be successfully handled within the current framework. How about crafting a more elaborate tree for `c.typeCheck`, something like `{ class Dummy[T] { type Dummy123 = $tpt }; () }`?
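A sketch of how such a probe tree might be built inside the macro with quasiquotes (assuming a macro `Context` named `c` and the annotated parameter's type tree `tpt` are in scope; not runnable outside a macro implementation):

```scala
import c.universe._

// Wrap $tpt in a local class so that the typechecker resolves it as a
// type member rather than as the type of the whole expression:
val probe = q"{ class Dummy[T] { type Dummy123 = $tpt }; () }"
val typed = c.typeCheck(probe)
// typed.tpe is Unit; the interesting type hangs off the inner Dummy123
// member, which has to be dug out of the typechecked tree (see below).
```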
Also Happy New Year for everyone checking out this thread right now :)
@xeno-by The type alias issue is not so much of a problem right now, since I currently have influence on the client code.
Regarding the type parameter problem: great idea to bind the missing parameter in the crafted type tree. I will try this approach, but I fear it is only applicable if the binding instance of `T` is known.
@b-studios Could you elaborate on potential problems with the proposed approach for handling type parameters?
@xeno-by In the example, the macro is applied in the context of the class definition where the type parameters are bound. This way one could query the type parameters and manually craft a tree, binding those type parameters in order to `typeCheck` the annotated type constructor (as you described above).

If the type arguments of the type constructor are bound outside of the annotated class definition, it might be difficult to find the binding occurrence (one would have to reimplement the lookup of types). Please correct me: I just started using Scala macros and I don't know how such a lookup could be implemented. I'm happy if I am just wrong with this assumption.
Since I perform the typecheck to retrieve the `Type` of a tree, the above solution does not work anyway because it always returns `Unit`. So I came up with a dirty solution to be evaluated in the next few days: if the type tree is an `AppliedTypeTree`, typecheck `null.isInstanceOf[$tpt[..$wildcards]]` with the appropriate number of wildcards. This way I prevent access to the type variables and the "not found: type T" typecheck error.
@b-studios Oh wow, this is something that I didn't expect:
```scala
class X[T <: String] {
  class test[T <: Int] {
    implicit class XWrapper(@accessors self: P[T]) {}
  }
}
```
This prints the members of `String`, not `Int`. Yes, that's a problem - I'll try to fix it.
@b-studios Yes, the type of the returned tree will be `Unit`, but if you inspect the types of its inner trees, then you'll be able to get to the type of `Dummy123`.
@b-studios I have fixed the problem with typecheck not seeing type parameters declared in enclosing classes. In the morning (in ~8-10 hours), if all the builds are green, I'll publish 2.0.0-M2 with the fix.
@b-studios 2.0.0-M2 is now published for 2.10.3 and 2.11.0-M7. Please let me know whether it solves the problem with type parameters.
@xeno-by Sorry for the late message. Thank you a lot! Your fix actually works perfectly for member annotations like
```scala
trait Foo[T] {
  @annotation def foo: T
}
```
I guess there is a reason for the constructor argument annotations not to work?
```scala
class Foo[T](@annotation foo: Bar[T])
// scala.reflect.macros.TypecheckException: not found: type T
```
> but if you inspect the types of its inner trees, then you'll be able to get to the type of Dummy123
I tried this, but extracting `Dummy123` from the results of `typeCheck` and asking for its type still yielded `null`. I have to investigate a little more, but sadly won't find the time in the next few days.
1) Annotations on type and value parameters of classes and methods expand the enclosing class/method, not just the member itself. Therefore `T` doesn't exist yet when the ctor param annotation expands. However, you could grab it from the class being expanded and wrap the dummy as before.
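A rough sketch of what "grab it from the class being expanded" might look like inside the macro implementation (assuming `c: Context`, the annottees list, and the parameter's type tree `tpt` are in scope; all helper names are illustrative):

```scala
import c.universe._

// Since a ctor-param annotation expands the whole class, the enclosing
// ClassDef is available among the annottees, including its type params:
annottees.map(_.tree) match {
  case (cd @ ClassDef(_, _, tparams, _)) :: _ =>
    // Re-bind the class's own type parameters around the probe, so that
    // a parameter type like Bar[T] typechecks inside the dummy:
    val probe = q"{ class Dummy[..$tparams] { type Dummy123 = $tpt }; () }"
    c.typeCheck(probe)
  case _ =>
    c.abort(c.enclosingPosition, "expected an annotated class parameter")
}
```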
2) Here's what I had in mind wrt Dummy123: https://gist.github.com/xeno-by/8255893
@xeno-by Ad 2) I was using `dummy123.tpe` on the typechecked tree, which always yielded `<notype>`. Switching to `dummy123.symbol.typeSignature` (as you did in your gist) works. Thanks a lot.
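For future readers, the working extraction might look roughly like this (a sketch following the approach discussed above, assuming `typed` is the typechecked probe tree; not the literal gist code):

```scala
import c.universe._

// Collect the inner TypeDef for Dummy123 from the typechecked block
// and read the type off its symbol, not off the tree (tree.tpe is
// <notype> here, but the symbol carries the resolved signature):
val dummy123 = typed.collect { case td: TypeDef => td }
  .find(_.name.toString == "Dummy123")
  .getOrElse(c.abort(c.enclosingPosition, "Dummy123 not found"))
val wrappedType = dummy123.symbol.typeSignature
```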
Hi,
Do you have any plans for migration from the now-deprecated untyped macros? I am still not sure if this is going to work, but maybe one could provide implicit views from `Monad[A]`/`Functor[A]` to `A` (e.g. with a `@compileTimeOnly` annotation), so that the code typechecks, and the macro would just disregard them, replacing them with `map`s or `bind`s. What do you think?

Nick