stanch opened this issue 11 years ago
Hi,
It would certainly be great to get rid of untyped macros, but I can't think of how to do it right now.
To make your solution work, we would need to have an implicit view for every instance of monad/functor separately, e.g. `Option[A] => A` (with the way things currently are) and somehow make it available only within the `context`/`workflow` block so it won't mess with other code. Composition of functors/idioms won't work anymore (`List[Option[A]] => A` can't be resolved from `List[A] => A` and `Option[A] => A`). Also, powerful implicits like those might interfere with other implicits. For instance, `Some("foo") + "bar"` is valid Scala because of the `Predef.StringAdd` implicit class, so `Option[String] => String` simply wouldn't be called.
I'll keep trying to come up with something and, of course, all the ideas are highly appreciated.
> To make your solution work, we would need to have implicit view for every instance of monad/functor separately, e.g. `Option[A] => A` (with the way things currently are) and somehow make it available only within context/workflow block so it won't mess with other code.
Well, providing the views for every functor is not that high a price to pay :) As for the messing, I believe `@compileTimeOnly` should keep them rather sane.
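To make the idea concrete, here is a minimal sketch of what such a stub view could look like. The object and method names are invented for illustration; only the `@compileTimeOnly` mechanism itself is real API (available via macro paradise and, from 2.11, in `scala.annotation`):

```scala
import scala.annotation.compileTimeOnly

object WorkflowViews {
  // Exists only so that code inside a workflow block typechecks;
  // the macro must rewrite away every call to it. If any call
  // survives expansion, @compileTimeOnly aborts compilation.
  @compileTimeOnly("`extract` can only be used inside a workflow block")
  implicit def extract[A](fa: Option[A]): A = sys.error("stub")
}
```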
> Composition of functors/idioms won't work anymore (`List[Option[A]] => A` can't be resolved from `List[A] => A` and `Option[A] => A`).
That’s true. Still can be done with an “explicit” implicit though, which is of course ugly.
> For instance, `Some("foo") + "bar"` is valid Scala
This is an interesting matter. How do you know anyway—with the current implementation—that it’s not `point(Some("foo").toString + "bar")`? However, if part of the code typechecks like this and then fails typechecking somewhere midway because of wrong inferred types, things will break.
By the way, I just realized that the new macro annotations expand before type-checking! So conceptually a ported version could look like this:
```scala
@workflow[Option[_]] val x = Some(3) + 5
```
However, I just tried to pass the type argument through and it didn’t work quite well. It’s also unclear how to pass normal arguments, if we want `@workflow(zipList)`. @xeno-by, any thoughts on this?
> How do you know anyway—with the current implementation—that it’s not `point(Some("foo").toString + "bar")`?
That's how rewriting works — we start with the most nested subexpression. Basically, the whole expression never gets to be typechecked as is (unless we walked through all of its subexpressions and left them untouched). In this case, `Some("foo")` is typechecked before `Some("foo") + "bar"`, and rewritten with a synthetic `String` argument to `map`.
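The innermost-first rewriting can be sketched by hand. The `Functor`/`option` definitions below are simplified stand-ins for scala-workflow's actual instances, not its real API:

```scala
// Simplified stand-in for scala-workflow's functor instance for Option
trait Functor[F[_]] {
  def map[A, B](f: A => B): F[A] => F[B]
}
val option = new Functor[Option] {
  def map[A, B](f: A => B) = _.map(f)
}

// workflow[Option] { Some("foo") + "bar" } conceptually becomes the
// expression below: Some("foo") is typechecked first and replaced by a
// synthetic String argument, so + resolves on String, not via StringAdd.
val expanded: Option[String] =
  option.map((x$1: String) => x$1 + "bar")(Some("foo"))
// expanded == Some("foobar")
```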
What exactly didn't work?
@xeno-by
```scala
class Workflow[A] extends StaticAnnotation {
  def macroTransform(annottees: Any*) = macro WorkflowMacros.macroTransform[A]
}

object WorkflowMacros {
  def macroTransform[A: c.WeakTypeTag](c: MacroContext)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    import c.universe._
    println(weakTypeOf[A])
    c.Expr[Any](Block(DefDef(Modifiers(), newTermName("x"), List(), List(), TypeTree(), Apply(Select(Literal(Constant(3)), newTermName("$plus")), List(Literal(Constant("asd"))))), Literal(Constant(()))))
  }
}
```
```scala
scala> import foo.Workflow
import foo.Workflow

scala> @Workflow[Int] val x = 4
x: Int = 4
```
So the annotation is not doing anything at all. Without the type arg it works fine:
```scala
class Workflow extends StaticAnnotation {
  def macroTransform(annottees: Any*) = macro WorkflowMacros.macroTransform
}

object WorkflowMacros {
  def macroTransform(c: MacroContext)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    import c.universe._
    //println(weakTypeOf[A])
    c.Expr[Any](Block(DefDef(Modifiers(), newTermName("x"), List(), List(), TypeTree(), Apply(Select(Literal(Constant(3)), newTermName("$plus")), List(Literal(Constant("asd"))))), Literal(Constant(()))))
  }
}
```
```scala
scala> import foo.Workflow
import foo.Workflow

scala> @Workflow val x = 4
x: String = 3asd
```
Another question is if we have a macro annotation with arguments (`class Workflow(a: Int) extends StaticAnnotation`), how do we access them in `impl`s?
Another issue is that we need a higher-kinded type as the argument. I seem to remember that `c.WeakTypeTag` didn't work for that.
> That's how rewriting works
I understand that, but what happens in this example?
```scala
workflow[Option] {
  Some(Success(42)) getOrElse "foobar"
}
```
is it `option.map(x$1 ⇒ x$1.getOrElse("foobar"))(Some(Success(42)))` or `option.point(Some(Success(42)).getOrElse("foobar"))`? Maybe there’s no issue and I just need to sleep for a while :)
The first one. Method invocation is represented somewhat like `Apply(Select('Some(Success(42))', 'getOrElse'), List("foobar"))`, so again, we start with the most nested subexpression: `42`, then `Success(42)`, then `Some(Success(42))`, which gets rewritten as a synthetic `x$1: Success[Int]`, and the `getOrElse` method will be taken from the `Try` class, not `Option`.
Whereas
```scala
workflow[Option] {
  Some(Failure(new Exception)) getOrElse "foobar"
}
```
will produce `Some("foobar")`.
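Hand-expanding both snippets under the same scheme makes this concrete. This is a sketch: the `Functor`/`option` definitions are simplified stand-ins for scala-workflow's instances, the synthetic argument is typed `Try[Int]` for uniformity (the real rewriter would infer `Success[Int]`/`Failure`), and `Try#getOrElse` widens the result to `Any` here:

```scala
import scala.util.{Try, Success, Failure}

trait Functor[F[_]] {
  def map[A, B](f: A => B): F[A] => F[B]
}
val option = new Functor[Option] {
  def map[A, B](f: A => B) = _.map(f)
}

// Some(Success(42)) is the most nested rewritable subexpression, so it
// becomes a synthetic x$1 and getOrElse is Try's, not Option's:
val a: Option[Any] =
  option.map((x$1: Try[Int]) => x$1.getOrElse("foobar"))(Some(Success(42)))
// a == Some(42)

// With Failure, Try's getOrElse picks the fallback:
val b: Option[Any] =
  option.map((x$1: Try[Int]) => x$1.getOrElse("foobar"))(Some(Failure(new Exception)))
// b == Some("foobar")
```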
@stanch
1) That's a bug: https://github.com/scalamacros/paradise/issues/2. Probably would be easy to fix, but I don't have much time for that. If you need this fix badly, I can carry it out tomorrow. Otherwise, let's keep it on-demand.
2) `c.prefix`
@aztek It makes sense! Thanks for the explanation.
@xeno-by re 2) Lol now I remember that you suggested the same when we submitted https://issues.scala-lang.org/browse/SI-7736 :) So I’ll also try to get the type args with it and see how it goes.
Upd: even if I just put `class Workflow[A] extends StaticAnnotation` and never mention `A` again, the macro still does not expand.
Check this out! https://github.com/stanch/scala-workflow/commit/506fcc75196406ce59ea5781630ea673768f8b50#L0R49 https://github.com/stanch/scala-workflow/blob/e0325cc641fd4ed952337a0aec797d2777a0247a/project/Build.scala
```scala
scala> @workflow(option) val x = Some("f") + Some("o")*2 + Some("bar")
x: Option[String] = Some(foobar)
```
However:
```
scala> @workflow(option) val x = Some(2) + 3
error: ambiguous reference to overloaded definition,
both method + in class Int of type (x: Char)Int
and method + in class Int of type (x: Short)Int
match expected type ?
<console>:10: error: type mismatch;
 found   : Int(3)
 required: String
       @workflow(option) val x = Some(2) + 3
                                           ^
```
I am curious if this is an issue specific to how the tree is parsed/checked, e.g. https://github.com/stanch/scala-workflow/blob/506fcc75196406ce59ea5781630ea673768f8b50/core/src/main/scala/scala/workflow/package.scala#L195
Oh, cool!
I'm not sure about the error. Looks like the whole expression is typechecked at some point (you'll get the same error message without the annotation). Generated code is correct, so rewriter/typechecker worked well.
I think the first error happens somewhere in `rewrite`. The second one is just due to the failed annotation expansion. I will look into this today. We can also try to revive the `context` and `$` duo in this fashion:
```scala
@context(option)
def foo(bar: Int) = {
  @$ val a = 4 // a = Some(4)
  a
}
```
Of course the annotations are not as good-looking as functions, and you can put them into fewer places. But on the other hand, I believe we can do
```scala
class async extends workflow(future)

@async val x = 5 // x = Future(5)
```
which is not bad!
Upd: The shown implementation of `@async` does not suffice because of how `c.prefix` works. I’m looking into it.
@xeno-by, after some investigation this looks very suspicious.
```
println(util.Try(c.typeCheck(q"4+")))
...
error: ambiguous reference to overloaded definition,
both method + in class Int of type (x: Char)Int
and method + in class Int of type (x: Short)Int
match expected type ?
Success(4.<$plus: error>)
```
The `Success` part tells us that no exception was thrown. Could you check this? `c.typeCheck(q"val x: String = 5")` throws just fine.
Most likely a bug. Would be great if you could submit it. As a workaround, after getting a success you could additionally check whether the resulting tree is erroneous.
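A sketch of that workaround against the 2.10-era macro API. The helper name is invented; `Tree.exists` and `Type.isErroneous` are real reflection API, but treat the exact predicate as an assumption:

```scala
import scala.reflect.macros.Context
import scala.util.{Try => UTry, Success => USuccess, Failure => UFailure}

// Returns Some(typedTree) only when typechecking truly succeeded:
// the typechecker sometimes reports an error yet still returns a tree,
// so we scan the result for error types before trusting it.
def safeTypeCheck(c: Context)(tree: c.Tree): Option[c.Tree] =
  UTry(c.typeCheck(tree)) match {
    case USuccess(t) if t.exists(x => x.tpe != null && x.tpe.isErroneous) => None
    case USuccess(t) => Some(t)
    case UFailure(_) => None
  }
```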
On Sunday, 1 September 2013, stanch notifications@github.com wrote:
> @xeno-by, after some investigation this looks very suspicious.
>
> ```
> println(s"typechecking ${scope wrapping tree.duplicate}")
> println(util.Try(c.typeCheck(scope wrapping tree.duplicate)))
> ...
> typechecking { val arg$1: Int = $qmark$qmark$qmark; arg$1.$plus }
> error: ambiguous reference to overloaded definition,
> both method + in class Int of type (x: Char)Int
> and method + in class Int of type (x: Short)Int
> match expected type ?
> Success({ private[this] val arg$1: Int = scala.this.Predef.???; $iw.this.arg$1.<$plus: error> })
> ```
>
> The `Success` part tells us that no exception was thrown. Could you check this? `c.typeCheck(q"val x: String = 5")` throws just fine.
Is that it https://issues.scala-lang.org/browse/SI-7461 ?
Yes, with a difference that compilation does not fail.
@xeno-by should I submit a new one, or we reopen this one?
Dunno yet. Let me take a look first
@stanch The example from SI-7461 works for me, so maybe there's a problem with the particular tree that you're emitting. Could you provide a minimized reproduction and reopen the bug with it?
I know you are both watching that bug, but just in case—the reproduction is here https://github.com/stanch/paradise-bug.
And now to something completely different. I realized this morning that by macro-annotating the enclosing method and defining `workflow`, `context` and `$` as stubs, we would probably be able to keep the original API untouched. The macro would collect all stubs and replace them with rewritten trees. @aztek what do you think? In general, do you plan to merge my port here or should we keep it as a separate repo with new docs?
@stanch Could you please put up some examples of macro-annotated workflows? Or, even better, revise some of the examples in the current specs.
I don't think anybody actually uses `scala-workflow`, so preserving the API (even as simple as this one) shouldn't be a concern.
I don't want to abandon the current implementation completely, so I'd rather have two branches here — one with the release version and another with the old untyped macros. If you get anything useful with annotations, it's fine with me to merge it here. There are parts of the code independent of the macro interface (like the instance and composition classes, which I'll push soon) that should be easy to sync.
@xeno-by Anyway, do you think macro annotations will make their way to a release? It's just that it looks like we would be swapping the untyped macros implementation for another untyped-at-heart implementation. Is it possible that macro annotations will eventually be discarded for the same reasons untyped macros were?
I will put the examples in `readme.md` in my fork as soon as [1]—and hopefully [2]—is fixed. As for the usage, well, besides the 90 stars that you already have, I am actually planning to use it in https://github.com/stanch/macroid and dependent projects. Sometimes I even think it should be a SIP! `workflow` is way ahead of `for`, `effectfully` and F# computation expressions :)
I’ll be happy to submit a pull request with something useful, probably having a separate branch is indeed the best solution.
[1] https://issues.scala-lang.org/browse/SI-7461
[2] https://github.com/scalamacros/paradise/issues/2
1 can be worked around, can't it?
@xeno-by Maybe it can. To maintain the code here [1], I need a way to get the error description from the erroneous tree. I did a quick search through auto-completion with no success...
Well, that's impossible from what I know. I'll try to see what can be done this weekend.
@stanch To clear things up about the `typeCheck(tree, scope)` function:

- `tree` is lifted to the current workflow (with bindings of local variables, forming a `scope`).
- The result is `Success(<type>)` (so later we decide if `<type>` corresponds to the type of the workflow), `Success(EmptyTree)` (when we know for a fact that `tree` is not lifted) or `Failure(e)` (when there's a type error inside the lifted code, e.g. `$(Some("foo") * Some("bar"))`; no error will be produced if the lifted code is correct).
- Typechecking can also fail for other reasons, e.g. partially applied methods (`"follow this method with _"`), overloaded functions (`"ambiguous reference"`), package names (`"package ... is not a value"`) and object constructors (`"missing arguments for constructor"`, `"too many arguments for constructor"`). In all those cases the perfect strategy would be to perform a deeper analysis of the code to get the actual type, but so far it is assumed that it won't correspond to the workflow type, so `EmptyTree` is returned.

@aztek Thanks! I kinda understood this, but your explanation makes it more clear. I am working on my idea with stubs and would like to start a discussion on the future API.
The current API is:

- `workflow`
- `context` + `$`
It seems that for the future API we have the following options:

1. Workflow annotations on `ValDef`s and possibly `DefDef`s:

```scala
@workflow(option) val x = Some(1) + Some(2) // val x = Some(3)
```

This one seems quite useful, especially if we provide shortcuts for common things such as Option, Try, etc. The meaning is rather explicit, which is good.
2. Method transformations that allow the use of the old API:

```scala
@enableWorkflow
def x(a: Int, b: Int) = {
  val y = context[Option] { $(Some(2) + Some(3)) }
  val z = workflow[Option] { Some(4) + Some(5) }
}
```

As I mentioned before, this is based on `@compileTimeOnly` stubs.
3. Same as above, but also providing a default context:

```scala
@workflowContext[Option]
def x(a: Int, b: Int) = {
  val y = $(Some(2) + Some(8)) // uses Option
  val z = workflow[Try] { 3 / 0 }
}
```
Personally I think keeping everything would be a mess. So I suggest selecting the most reasonable subset of the following (usage-wise):

- `@workflow`
- `@enableWorkflow` + `workflow`
- `@enableWorkflow` + `context` + `$`
- `@workflowContext` + `$`
- `@workflowContext` + `workflow` (i.e. `@workflowContext` also acts as `@enableWorkflow`)
- `@workflowContext` + `context` + `$` (ditto)

I am leaning towards `@workflow` (for both `ValDef`s and `DefDef`s) and `@workflowContext` + `$`. This is more of a gut feeling, but the reasons are probably 1) `@workflow` and `workflow` is too much; 2) I doubt there will be two different `context`s per average function—if there are, one can always use `@workflow`; 3) this keeps focus on annotations, since we have to deal with them anyway.
Please let me know what you guys think.
@xeno-by I just found a rather upsetting issue:
```scala
val x = Some(3)
@workflow(option) val y = "asd" + x
```

fails during annotation expansion with `not found: value x`. Do you have any ideas on how to overcome this?
It was a deliberate decision to restrict `c.typeCheck` in the macro annotation context to the outer scope. I was afraid that seeing yet unexpanded members during typechecking would create confusion. It looks like it's still going to bite us. Needs some serious pondering...
Damn... It started so nice... But what about
```scala
@workflowContext(option) def test = {
  val x = Some(3)
  $( "asd" + x )
}
```
?
This one doesn’t work either. Should I change the `Transformer` owner? https://github.com/stanch/scala-workflow/blob/master/core/src/main/scala/scala/workflow/package.scala#L92
> shortcuts for common things such as Option, Try, etc.
This is very neat. I wish I had those in the first place, so this would work:
```scala
val frp = context(frpWorkflow)

frp {
  ...
}
```
There're a few minor difficulties in the implementation of `@enableWorkflow` and `@workflowContext`, afaiu. We gotta traverse through the expression and find `context`/`$` applications; how do we reliably recognize that they're not locally-defined functions? Also, what if there're nested `$`/`context` applications or nested annotations? How should those be expanded? Should we keep a stack of workflow context definitions to resolve an appropriate workflow for each level of nesting? This is how it works right now.
```scala
scala> context(list) { context(option) { $(Some(10) + 2) }}
res0: Option[Int] = Some(12)
```
Not a major concern, certainly, but a thing to remember.
> As I mentioned before, this is based on `@compileTimeOnly` stubs.
I'm not sure what you mean by that. We don't need implicit conversions for macro-annotated implementation, do we?
A few general thoughts on the topic. Originally, the separation into `workflow` and `context`/`$` was introduced because of partial support of Scala features (right now it's just function applications and `val`s, basically). Since it is unlikely that the whole of Scala will be supported any time soon, I think a separation of some sort must be kept.
Speaking of the supported subset of Scala, I would really like to see `if`-expressions and pattern matching working, so the `workflow` macro would be more usable. With the presence of the `@workflow` annotation that would be even more relevant. Consider this example:
```scala
def eval: Expr ⇒ Env ⇒ Option[Int] =
  context(function[Env] $ option) {
    case Var(x)     ⇒ fetch(x)
    case Val(value) ⇒ $(value)
    case Add(x, y)  ⇒ $(eval(x) + eval(y))
  }
```
With supported pattern-matching that could be
```scala
def eval: Expr ⇒ Env ⇒ Option[Int] =
  workflow(function[Env] $ option) {
    case Var(x)     ⇒ fetch(x)
    case Val(value) ⇒ value
    case Add(x, y)  ⇒ eval(x) + eval(y)
  }
```
And even better with the annotation
```scala
@workflow(function[Env] $ option)
def eval: Expr ⇒ Env ⇒ Option[Int] = {
  case Var(x)     ⇒ fetch(x)
  case Val(value) ⇒ value
  case Add(x, y)  ⇒ eval(x) + eval(y)
}
```
I'm not sure how long it will take me to support it, but if we agree on this interface, that would certainly become a highly important ticket.
I support your choice of `@workflow` / `@workflowContext` + `$` (perhaps call it just `@context`?), but we should check with actual use-cases, starting, I guess, with the examples from the Readme.
> We gotta traverse through the expression and find context/$ applications, how to reliably recognize that they're not locally-defined functions? Also, what if there're nested $/context applications or nested annotations? How should those be expanded?
Take a look at my hack here. Basically we can check that the immediate parent of `$` is the current annotation. As for differentiating `$` from local functions, I believe there’s an API to typecheck and get a fully qualified term name; however, you are right in that `import workflow.{$ ⇒ dollar}` will not work. But I don’t think this problem is solved in either `scala-async` or `effectful`. So right now this boils down to the user being nice.
> I'm not sure what you mean by that. We don't need implicit conversions for macro-annotated implementation, do we?
See here. What `@compileTimeOnly` does is ensure that if any `$`s remain after macro expansion, compilation fails.
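For reference, the stubs could be declared roughly like this. This is a sketch with invented signatures; only the `@compileTimeOnly` mechanism itself is the point:

```scala
import scala.annotation.compileTimeOnly

object workflow {
  // These bodies never run: the enclosing macro annotation is expected
  // to rewrite every occurrence. Any survivor fails compilation with
  // the given message.
  @compileTimeOnly("`$` can only be used inside a workflow context")
  def $[A](expr: A): A = sys.error("stub")

  @compileTimeOnly("`context` can only be used inside an annotated method")
  def context[F[_]](code: Any): Any = sys.error("stub")
}
```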
The example you showed could be as well done as
```scala
@workflowContext(function[Env] $ option)
def eval: Expr ⇒ Env ⇒ Option[Int] = {
  case Var(x)     ⇒ fetch(x)
  case Val(value) ⇒ $(value)
  case Add(x, y)  ⇒ $(eval(x) + eval(y))
}
```
It actually illustrates my point that `context` is likely going to span the entire function, unless the function is really big and messy, but then again it’s the user who is to blame ;)
I agree about studying the use cases. In fact I have another one up my sleeve, something like https://github.com/raimohanska/ui-thread-as-monad, which will soon be incorporated in https://github.com/stanch/macroid. So a `@ui` annotation will be quite handy! I will have some `frp` there too.
I suggest that we make a public gist and start adding the examples there (will update this message when I create one).
Sorry, accidentally clicked "Close"...
So here is the gist: https://gist.github.com/stanch/6421641 I think everything looks reasonable :) However, right now the implementation is blocked by the following:
Limited typechecking scope in macro annotations:

> It was a deliberate decision to restrict c.typeCheck in macro annotation context to outer scope
@xeno-by, is there hope? :)
Thanks for sharing! Looks nice. Perhaps we can also support annotations of `object` declarations? `val x =` might look confusing in some cases, imo. I changed a few things: https://gist.github.com/aztek/6421797, what do you think?
Oh, sure! I was thinking of allowing annotation of classes (which would influence the code in their bodies), but somehow forgot the objects. Not sure why I had

```scala
@context(writer[Log])
val Writer(result, logEntries) = ${
  ...
```

either.
@aztek These tests pass: https://github.com/stanch/scala-workflow/blob/master/core/src/test/scala/scala/workflow/WorkflowContextSpec.scala#L6 Unfortunately I couldn’t enable more elaborate ones due to the points I had mentioned above.
@xeno-by My apologies, https://issues.scala-lang.org/browse/SI-7461 is of course solved in 2.10.3-RC1!
@stanch Would you like to take on https://github.com/scalamacros/paradise/issues/2? :)
I am sort of trying already, but can’t figure out where’s the annotation stuff there :) Upd: using the power of search... should be something starting from https://github.com/scalamacros/paradise/commit/a57ed89194331e1212645c28630ab37e8626e3fc, right?
@stanch Annotation expansion logic lives mostly in Namers.scala: https://github.com/scalamacros/paradise/blob/3410f7d0312dcfec84cc53405c26e33fc8815637/plugin/src/main/scala/org/scalalang/macroparadise/typechecker/Namers.scala.
I thought I would report my progress on this:
Current issues:
- [ ] Neither `@workflow` nor `$` can depend on adjacent declarations. That is,

  ```scala
  val x = Some(3)
  @workflow(option) val y = "asd" + x
  ```

  fails with `not found: value x`.
- [ ] Pattern definitions like `@opt Some((x, y)) = (4 + Some(8), 5)` are not supported. I need to study how they are desugared more closely.

@stanch Actually, if you're enthusiastic about the first issue, you could also take a look. The logic that causes problems resides over here: https://github.com/scalamacros/paradise/blob/3410f7d0312dcfec84cc53405c26e33fc8815637/plugin/src/main/scala/org/scalalang/macroparadise/typechecker/Namers.scala#L506.
@xeno-by thanks for the heads-up! I’ll take a look. Now I also understand why
```scala
@context(option) def test = {
  val x = Some(3)
  $( "asd" + x )
}
```
doesn’t work. I’m calling `c.typeCheck` on `$(...)` alone, where `x` is of course not known. The current idea is to replace all `$`s with `@workflow`-annotated synthetic `val`s and then rely on the above issue being fixed.
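Sketched by hand, the planned rewrite would look something like this (`workflow$1` is an invented name for the synthetic val; this is the idea, not actual generated output):

```scala
// Before:
@context(option) def test = {
  val x = Some(3)
  $( "asd" + x )
}

// After (conceptually): each $ becomes a @workflow-annotated synthetic
// val, so once @workflow can see adjacent declarations like `x`,
// typechecking the lifted body works out:
def test = {
  val x = Some(3)
  @workflow(option) val workflow$1 = "asd" + x
  workflow$1
}
```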
What's the current status of this?
@milessabin It seems that the remaining issue is whether we should always return `newTyper(context)` here, regardless of the scope. With this change, the new `scala-workflow` interface works as expected; however, 1) there’s some weird compiler crash when annotations are used in a specific way inside `FlatSpec`; 2) all tests in `paradise` pass, obviously except the one that checks typer scoping rules. So the task is essentially to verify that the `FlatSpec` crash is not due to the change I made in `paradise`, and also to try to fix it anyway. Besides this, we have to persuade @xeno-by to allow that change, but I don’t think he is that much against it :) So the problem is that right now I don’t have enough time to pursue this :(
Today I tried to implement scope-respecting typecheck for macro annotations and ran into a number of troubles, which require careful thinking. Would be interesting to hear your opinion.
1a) First of all, why did I originally decide to allow macro annotations to only see the enclosing lexical scope, not the current scope? Well, there's a simple reason. If we allow access to the current scope, we'll potentially be letting programmers see yet unexpanded trees, e.g. consider this snippet:
```scala
{
  @foo class X
  @bar class Y
}
```
1b) Imagine that both annotations expand a class into a class and a companion, and in the implementation of `@foo` we decide to take advantage of that and do `c.typeCheck(Ident(TermName("Y")))`. Unfortunately, by the time `@foo` expands, `@bar` hasn't yet been expanded, which means that the typecheck will fail. If we swap the order of declarations, everything will work fine. Awkward.
1c) There's something that's even worse. Imagine that `@bar` expands the underlying class into just a companion, destroying the original class. Then, if `@foo` does `c.typeCheck(q"???: Y").tpe` and uses its results to generate code, that might lead to all sorts of strange compilation errors, compiler crashes or bytecode corruptions (depending on your luck).
2a) Alright, so we have 1b, which is weird, but we can live with that, trading the awkwardness of `c.typeCheck` not seeing anything for the awkwardness of `c.typeCheck` not seeing stuff declared below; and we have 1c, which is outright wrong, and we shouldn't let it ever happen. What can we do here?
Well, we can embrace the top-down nature of processing definitions and say that macro annotations can see their lexical scope, but only up to the expansion point. Everything below won't be visible in `c.typeCheck`. So in our example above, `@foo` would only see `class X`, but not `class Y` (and definitely not `object Y`). That sounds like something we can work with, and that's what I tried to implement today. Unfortunately, it ended up being very tricky.
2b) Imagine for a second that our original example is not a block, but rather is a template, e.g. something like:
```scala
object M {
  @foo class X
  @bar class Y
}
```
Then our calls to `c.typeCheck` from within `@foo` will immediately hit an implementation restriction of scalac: cyclic reference errors.
The problem is that macro annotations on M's members are expanded when scalac figures out the signature of M, during so-called lazy completion of M's type. An interesting thing about completion is that it locks the completee while it runs. If someone touches the completee (e.g. requests its signature) during an ongoing completion, a cyclic reference error will be thrown. Unfortunately, touching M is exactly what `c.typeCheck` has to do, because in order to typecheck something in the lexical context of a class, we need to know the signature of that class (which is the list of members + the list of base classes).
2c) If we apply enough care, we can work around this restriction (I will omit the details in order not to bore down an already tired reader), allowing macro annotations expanding within templates to see partially constructed templates without tripping cyclic errors, but then we have another problem.
When allowing blocks to be observed in partially expanded state, we were more or less fine, because the only ones who could make such observations were macro annotations expanding within that block. However, with templates we open up a much bigger loophole.
If `@foo`, expanding in the second example, goes out to explore the outer world, it might look into other classes and modules and request the type signature of a not-yet-completed method. Doing that might trigger typechecking of that method (in case the return type of the method is not specified explicitly), which might launch other macros, which might want to ask for M's type signature.
Previously, such macros would trip a cyclic reference error, and rightfully so, because they would be trying to access partially constructed type of M. What will they see now? Right, they will see partially constructed M. Not good at all.
2d) We've been hit hard by cruel fate, but that's not all yet. After some time, I accidentally realized that the tradeoff proposed in 2a is more ugly than initially advertised. Consider the following snippet:
```scala
{
  @foo class X
  ...
  <a lot of definitions>
  ...
  object X
}
```
When one of the companions is affected by a macro annotation, both are expanded simultaneously. But then where does our expansion point lie that delimits what `@foo` can and cannot see?
Is it immediately after `class X`? No, that doesn't make much sense, because then we might not be able to even typecheck `object X`, as it might refer to the definitions in-between.
Is it immediately after `object X`? Okay, that's reasonable, but then what do we do if one of the in-between definitions is itself annotated? If we disallow that expansion, that might jeopardize `object X`. If we allow that expansion, then it will get to see unexpanded `class X`, which as we figured out in 1c is highly undesirable. Stalemate.
3) Is that the end? Not really. Even if the situation is grim, it's all about tradeoffs.
For example, for more than a year I didn't know how to implement macro annotations at all, but a couple months ago I figured out just the right amount of restrictions to put in place in order to make things work.
So now we shouldn't give up and should just try our best to find a better/smarter set of tradeoffs :)
Thanks for sharing! Regarding 2d, how crucial is the logic of simultaneous expansion of companions? If it followed the same rule of a moving expansion point, you'd just be forced in this example to move `object X` before `@foo class X`, which doesn't seem so bad.
Hi,
Do you have any plans for migration from the now-deprecated untyped macros? I am still not sure if this is going to work, but maybe one could provide `implicit` views from `Monad[A]`/`Functor[A]` to `A` (e.g. with a `@compileTimeOnly` annotation), so that the code typechecks, and the macro would just disregard them, replacing them with `map`s or `bind`s. What do you think?

Nick