lefou opened 8 months ago
FTR, here is the proposal I mentioned.
@lefou wrote on Sept 8, 2023:
I've been playing around with using directives for a while, mostly in the context of Mill build scripts, and have also had many discussions about them, including at the first Tooling Summit this year.
I suggest splitting the "using directives" (as known in their current form) into three parts: the comment marker, the context marker, and the directive, and discussing each part, its purpose, and its potential for standardization separately (see the annotated example after this list).

1. **The comment marker**, `//>`. There is broad agreement on this one. It's a comment, so it's transparent to the compiler, it's easy to recognize, and it survives most code formatting attempts. This is a nice candidate for a "standard".
2. **The context marker**, currently the `using` keyword. There was a lot of discussion about the `using` keyword, its name, and whether it's needed at all. I'd like to associate the context, i.e. the interpreting tool, with it. Although a marker like "scala-cli" would be clearer, "using" is by now associated with Scala CLI and will do. Other tools should use a different marker like "mill" or "ammonite"/"amm". That way, tool-specific configuration can coexist and a script can be compatible with multiple tools/runners.
3. **The directive**, typically some key-value-like, tool/context-specific configuration. Due to its tool-specific nature, a broader standard makes little sense and would make the evolution of the various tools harder.
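Put together, a single line splits up as follows (a hypothetical `mill` directive, purely for illustration):

```scala
//> mill version 0.12.0
//
// "//>"            -- the comment marker, transparent to the compiler
// "mill"           -- the context marker, addressing one specific tool
// "version 0.12.0" -- the directive, a tool-specific key-value pair
```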
Why do we need context-specific configuration?
Different tools have different features and properties. They also have different requirements, different defaults, and different usability concepts. It's best to leave control over the configuration of these details to the respective tools. Instead, let's make sure that tools can coexist and scripts can be used in various environments simultaneously. With my proposal to introduce a context marker (or to treat "using" as the marker for Scala CLI only), it is possible to apply different configurations for different tools in the same script.
Showcase
Take the `build.sc` for example. It is a Scala script file, and many different tools (Ammonite, Scala CLI, Mill, xxx worksheet) can edit, format, and execute it, but all of them will interpret it slightly differently. Let's say I want to use the `os-lib` dependency: in Ammonite or Mill I can just use it, as it is already provided, whereas in Scala CLI I need to import it first (with `//> using dep`). If said script requires a minimal version of Ammonite or Scala CLI to function, I could easily configure that thanks to the distinct context selectors (`//> using version` vs `//> ammonite version`).

Example: A script with configuration for Scala CLI and Ammonite
//> using dep "com.lihaoy::os-lib:0.9.1" //> ammonite version 3.0.0 println("Current path: ${os.pwd}")
At first glance, when opening this file, I can see that it is supposed to run with Scala CLI and/or Ammonite. It uses the os-lib dependency and requires Ammonite version 3.0.0.
See also my other post where I presented this idea: com-lihaoyi/Ammonite#1372 (comment)
I think support for a minimal required version would be nice. Mill should fail if it does not match.

```scala
//> using mill.minVersion 0.11.8
```
+1 for moving away from the `$ivy` imports. I tripped over this a few times with IntelliJ, where it decided to "clean up" imports and messed up the `$ivy` imports.
A tip: I recently discovered that if you place the `$`-import after the other regular imports, IntelliJ seems not to remove them when organizing imports. In my case, it was constantly removing the `import $meta._` from a `build.sc` whenever a new import was about to be added, and I could prevent that by placing the `import $meta._` after all other imports.
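For illustration, a `build.sc` header following that tip could look like this (the regular imports are just examples):

```scala
// Regular imports first ...
import mill._
import mill.scalalib._

// ... and the magic $-import last, so that IntelliJ's "organize imports"
// leaves it in place.
import $meta._
```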
I am also in favour of this - it would mean no custom parser is needed for the Scala 3 port (unless we keep the old way specifically for deprecation messages).
I'd like to preserve the magic imports for backwards compatibility, at least for the first Scala 3 release. It's fine to break stuff in a breaking release, but leaving the ivy imports in place for at least one or two releases should smooth the migration considerably.

Long term, I think we should consider alternatives to magic imports, but let's decouple that from the Scala 3 upgrade.
We could introduce using directives and deprecate imports in `0.12`, so that we could start clean in Mill `0.13` with Scala 3.
Possible, but I would like to spend a bit more time exploring the design space for this area.
Directives work great as a replacement for ivy imports, but are kind of awkward as a replacement for file imports. It's plausible we could find some solution, e.g. by making file imports go away through auto-discovery of files, but that may require other changes, e.g. changing the file extension from .sc to .mill to avoid picking up random scripts. And I think there's some interesting stuff we could do with a more structured JSON/YAML header format, which could precisely match our JSON serialization format and generalize to arbitrary configs in a way that directives don't; see the sketch below.
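Purely to illustrate that idea (a hypothetical format, not a concrete proposal), such a structured header might embed a YAML document in the comment block, mirroring the existing JSON config serialization:

```scala
//> ---
//> version: 0.12.0
//> deps:
//>   - com.lihaoyi::os-lib:0.9.1
//> ---
```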
That's not to say I think we should never do directives, but from a timeline perspective I think we should not couple this to the Scala 3 upgrade, as this area will take more time to work out in detail.
@lihaoyi Could you elaborate on the difficulties you see with `$file` imports? Since we already preprocess them and replace them with some dummy `import` statement, it should be even easier, as we no longer need to change the script at all. Their compiled binary representation is on the classpath, and the actual `import` bringing in the package or class should work without any change.
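A conceptual sketch of what is meant (illustrative only, not Mill's actual preprocessing output):

```scala
// What the script author writes today (Ammonite-style file import):
import $file.util

// Since util.sc is compiled separately and its compiled representation is
// already on the classpath, the preprocessor can conceptually swap in a
// plain import instead of rewriting the script:
import util._
```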
Regarding the more complex encodings like JSON/YAML: I think this was discussed for using directives many times, and it was finally dropped because of its complexity. There is still no good editor support for such file headers, and as a result it's harder to properly edit, format, and get the header right. In contrast, the current using directives are simple key-value pairs, and that's all. Everything else is parsing the value, which is analogous to how e.g. CLI args, sysprops, or env values work.
I also envision using a simple

```scala
//> using mill.version 0.12.0
```

and parsing it from `millw`, so that a `.mill-version` file is no longer needed. As long as the format stays that trivial, parsing it in `bash`, `cmd`, or PowerShell is easy.
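As a rough sketch of how little parsing that requires (illustrative Scala; `millw` itself would do the equivalent in its shell scripts):

```scala
import scala.io.Source
import scala.util.Using

// Scan a build file for a "//> using mill.version <version>" header line
// and return the version, if present. Illustrative sketch only.
def readMillVersion(buildFile: String): Option[String] =
  Using.resource(Source.fromFile(buildFile)) { src =>
    src.getLines()
      .map(_.trim)
      .collectFirst {
        case line if line.startsWith("//> using mill.version ") =>
          line.stripPrefix("//> using mill.version ").trim
      }
  }
```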
Also, it works for Scala CLI, which doesn't want to be a build tool, yet its rich build capabilities are completely controlled via this simple format. We pick users up with something they're already familiar with or that is easy to learn.
We inherited the various `$`-imports like `import $ivy` or `import $file` from Ammonite. It turns out the Scala compiler isn't very fond of us using the `$` symbol in object and class names, and newer versions starting from `3.4.0` enforce this more strictly.

See also:
I think a nice alternative to these magic imports could be something similar to the using directives known from Scala CLI.
From a compiler perspective, they are just comments, so we don't need to mangle scripts just for the sake of replacing the directives with something regular, which is something we had to do before with the `$`-imports.

My preferred usage would look like this (key name just for illustration):
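```scala
// (illustrative key; the point is the dedicated "mill" context marker)
//> mill version 0.12.0
```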
But since my proposal to make them tool/context-specific wasn't accepted, the "standard" way to support tool-specific key-value pairs is to encode the tool/context in the key. It would look like this:
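```scala
// the same configuration, with the tool/context encoded in the key
//> using mill.version 0.12.0
```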
The pro of following this standard is that IDEs, formatters, and syntax highlighters might handle it better. (Which is already the case at the time of writing: GitHub renders my first example just grey, but the second one nicely colored.)