soronpo opened 6 years ago
/cc @milessabin
It sounds like you'd want `fooInt(1)` to infer `fooInt[1.type](1)`. But what about `def fooInt2[T <: Int](t : T, u: T) : T = t`? Moreover, inferring different types for some type variables might break programs elsewhere. Given how Scalac's inference works, currying `fooInt2` seems enough: `def fooInt3[T <: Int](t : T)(u: T) : T = t`, but even with a smarter local inference you can have `def fooInt3[T <: Int](t : T): T => T` and use the result later. Since `T => T` is invariant in `T`, if you make `T` either wider or narrower, you can construct a program that breaks under the new semantics.
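A sketch of that breakage, assuming today's widening semantics (the ascriptions are compile-time checks; if `T` were instead inferred as the literal type `1`, the last line would stop compiling):

```scala
def fooInt3[T <: Int](t : T): T => T = identity

// Today T is widened to Int, so f : Int => Int.
val f = fooInt3(1)

// Compiles today. If T were inferred as the literal type 1, f would be
// 1 => 1, which is not a subtype of Int => Int (Function1 is
// contravariant in its argument), so this ascription would fail.
val g: Int => Int = f
```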
For a trickier case in Dotty, see https://github.com/lampepfl/dotty/issues/4032 and https://github.com/lampepfl/dotty/pull/4059, though that was reworked in https://github.com/lampepfl/dotty/pull/4080 — not sure how robust that is though.
> But what about `def fooInt2[T <: Int](t : T, u: T) : T = t`

How is it different from the following code?
```scala
trait Foo
object Bar extends Foo
object Baz extends Foo
def justFoo[T <: Foo](t : T, u : T) : T = t
justFoo(Bar, Baz)
```
I don't see why literal singleton widening should be semantically different from singleton object widening.
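A minimal sketch of that asymmetry (Scala 2.13+ assumed; the type ascriptions act as compile-time checks of what inference produces):

```scala
trait Foo
object Bar extends Foo

def id[T](t : T) : T = t

// The object's singleton type is preserved by inference:
// id(Bar) infers T = Bar.type.
val a: Bar.type = id(Bar)

// But the literal's singleton type is widened: id(1) infers T = Int,
// not the literal type 1.
val b: Int = id(1)
```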
Currently I have a macro that enforces this for singleton-ops. It checks the tree of an argument instead of the type, and the tree preserves the true narrowness. Why shouldn't this be a language feature?

The code looks like:

```scala
def foo[T <: Int](t : T)(implicit g : GetArg0) : g.Out = g.value
```
> I don't see why literal singleton widening should be semantically different from singleton object widening.
Did I say the semantics make sense? No. But I'm talking about compatible changes. Your macro is compatible because it doesn't modify existing behavior. If somebody changed it now, would anybody volunteer to fix any community-build breakage?
FWIW, historically, you needed objects' singleton types to use objects as modules. And even now, for a generic `object Foo`, the only supertype of `Foo.type` is `AnyRef` (and maybe `Serializable` and other stuff), so there isn't much to widen to.
That, for me, felt like something that should have been changed as part of the SIP. I'm not hung up on changing the current semantics; I can accept other alternatives. One that I suggested was a `@narrow` annotation for type arguments:

```scala
def foo[@narrow T <: Int](t : T) : T = t
```

@smarter suggested a `precise` keyword instead. I have no preference.

No objection to other keywords.
I think such annotations might be preferable to `Singleton` anyway: right now, in Dotty, if `A <: Singleton` and `B <: Singleton` then `A | B <: Singleton` by the usual rules, which seems to make little sense (even though it wasn't a problem in Scala 2). There's a difference between `@narrow` (don't widen) and a potential `@singleton` (require a singleton type). And I wondered elsewhere if having `@realizable` would be useful.
@alexknvl has pointed out problems with `Singleton` being used as an upper bound rather than as a property of the type: it makes the Scala type system not a lattice (at least if we want `a.type | b.type !<: Singleton`). I recall `.isInstanceOf[Singleton]` taking up quite some discussion during SIP-23 as well; I don't know of any use for it other than as an upper bound of type parameters. (Please correct me.)
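For concreteness, the SIP-23 `Singleton` upper-bound mechanism under discussion can be sketched like this (Scala 2.13+ assumed):

```scala
// Without a Singleton bound, the literal argument widens to Int.
def plain[T <: Int](t : T) : T = t

// With a Singleton bound, widening is suppressed and T is inferred
// as the literal type of the argument.
def exact[T <: Int with Singleton](t : T) : T = t

val w: Int = plain(5) // T = Int
val n: 5 = exact(5)   // T = 5
```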
When would `@narrow`/`@precise` differ from `@singleton`?
> When would `@narrow`/`@precise` differ from `@singleton`?
As I understand it:
- `@singleton`: only singleton objects / literals that match the upper bound are accepted.
- `@narrow`: any objects that match the upper bound are accepted, while the given type is kept as narrow as possible.

And while we're at it, I would also add `@literal`, which only accepts literals matching the upper bound.
@hrhino right, that's what I was hinting at; sorry, I had just forgotten to attribute it to @alexknvl.
> As I understand it:
Yeah, that was the idea; it was built for the use-case in the OP, since @soronpo's OP is essentially asking for something like:

```scala
def fooInt[@narrow T <: Int](t : T) : T = t
val x: Int = 3
fooInt(3) // infers fooInt[3](3)
fooInt(x) // infers fooInt[Int](x)
```
Given that this is just "don't widen the argument", it's not unreasonable. But I'm now wondering whether the annotation should be placed on the argument whose type you want inferred:
```scala
def fooInt[T <: Int](x : T @narrow) : T = x
// semantics here:
// given a call `fooInt(t)`, infer the type `T` of `t` using the typer
// without an expected type (that is, in "synthesis" mode) and without
// any widening, check that the inferred type respects the bounds for
// type variable `T`, and output `fooInt[T](t)`
```
But here's a trick question: what's the type inference semantics of `def bar[@narrow T <: Int](xs: List[T])`, and more generally for arbitrary polymorphic method types where `T` never appears naked? Should that be rejected? What's the answer for the current `@singleton`?
`List[T]` preserves its type, and `@narrow` does nothing for it. When creating a `List[T]` you are responsible for creating it as either `List[1]` or `List[Int]`.
The point is, I'm constructing some object somewhere which is either singleton, literal or otherwise. When I feed it as an argument to the definition, I want the narrowest type that matches the bounds to be preserved. That's it.
> But here's a trick question: what's the type inference semantics of `def bar[@narrow T <: Int](xs: List[T])`, and more generally for arbitrary polymorphic method types where `T` never appears naked?
That's a rough question indeed. In the absence of variance it's probably meaningless, because the type of `xs` will fix `T`. With variance it becomes interesting:
```scala
def bar[@narrow T <: Int](xs: List[T]): List[T] = xs
bar(1 :: Nil) // is this List[1] or List[Int]?

def baz[@narrow T <: Int](f: T => Unit): T => Unit = f
baz((_ : 1) => ()) // this would have to be 1 => Unit
// is there a case where contravariance makes a difference?
```
> ```scala
> def bar[@narrow T <: Int](xs: List[T]): List[T] = xs
> bar(1 :: Nil) // is this List[1] or List[Int]?
> ```
There is no conflict. `List`'s constructor `::` has no singleton/narrow annotation, therefore `1 :: Nil` is just a `List[Int]`, so `@narrow` has no effect here.
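Today's behavior can be sketched like this (Scala 2.13+ assumed; an explicit expected type is currently the way to keep the literal element type):

```scala
// The literal widens when the list is constructed, before any
// hypothetical @narrow on a parameter could see it:
val xs = 1 :: Nil // List[Int]

// An explicit expected type keeps the literal element type:
val ys: List[1] = 1 :: Nil // List[1]
```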
> The point is, I'm constructing some object somewhere which is either singleton, literal or otherwise. When I feed it as an argument to the definition, I want the narrowest type that matches the bounds to be preserved. That's it.
Well, in your example that's not "the narrowest type that matches the bounds": that nice declarative specification would produce `List[1]`, since it's narrower than `List[Int]`. "Typecheck the argument without an expected type, and then don't widen it" is a less declarative specification, but it seems closer to what you actually request in your examples. That's probably still oversimplified, but both Scalac and Dotty use a (variant of) bidirectional type inference anyway.
So maybe the correct word is `preserve` in the general context, and it should be applied to the argument value, as you suggested.
So

```scala
def foo[T](@preserve t : T) : T = t
foo(1)
foo(1 :: Nil)
```

is equivalent to:

```scala
def foo[T](t : T) : T = t // or def foo[T](t : T) : t.type = t ?
final val one = 1
final val oneL = 1 :: Nil
foo(one)
foo(oneL)
```
@soronpo I picked up this issue from https://github.com/lampepfl/dotty/issues/4944. There has been no progress for a long time, so I am thinking maybe your macro will solve my use case. Does it work for RC1 and later? Can I use it in my code (license)? If so, where can I access it?

TIA
The macro madness is currently only implemented for Scala 2. It relies on implicit conversions, which Odersky is a fan of eliminating, so I don't know what its future will be. I think the language should change to allow us to express 'exact' types without this hassle.
You can find the macro code here:
https://github.com/DFiantHDL/DFiant/blob/master/internals/src/main/scala/DFiant/internals/ExactType.scala
https://github.com/DFiantHDL/DFiant/blob/master/internals/src/test/scala/DFiant/internals/ExactTypeSpec.scala
Consider the following behavior, post-SIP-23 implementation.

The above forces the user to write two different functions if the narrow literal type information needs to be preserved when a literal is used.
Can this be considered a bug? To resolve this we need to either change the widening semantics or add some kind of language construct that forces the narrowed type.
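The snippet being referenced is not included above, but the duplication being complained about presumably looks something like the following (a sketch, Scala 2.13+ assumed; `fooWide` and `fooExact` are hypothetical names):

```scala
// Post SIP-23, a single definition still widens literal arguments:
def fooWide[T <: Int](t : T) : T = t

// To preserve the literal type, a second, Singleton-bounded variant is
// needed, and that variant rejects non-singleton arguments:
def fooExact[T <: Int with Singleton](t : T) : T = t

val a: Int = fooWide(1) // T = Int
val b: 1 = fooExact(1)  // T = 1
```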
also discussed briefly at: https://gitter.im/typelevel/scala?at=5ad895596bbe1d273902766d