Closed: MaximeKjaer closed this issue 1 month ago.
Thanks for reporting! Is Scala.js 1.1.0 not binary compatible with Scala.js 1.0? If they are binary compatible with each other, we should just upgrade to Scala.js 1.1.0.
I've just checked the release notes as they have the answer to my question. I think it is safe to just upgrade the Scala.js version in the 1.x bridge series.
At least it won't break anything that is not already broken today.
But the way bloop handles its dependency on the Scala.js linker has serious flaws already now:

- It should not depend statically on a version of `scalajs-linker`. It can statically depend on a version of `scalajs-linker-interface`, but it should load the version of `scalajs-linker` corresponding to the `scalaJSVersion` used by the project. Problems with the current scheme include the present issue.
- It should not outsmart the build tool's actual config of `scalaJSLinkerConfig` and `scalaJSModuleInitializers`; instead it should just take the value of those two settings from the build and roll with those. bloop currently reconstructs its own `StandardConfig` and `Seq[ModuleInitializer]` based on some arbitrary config that is specific to bloop, and is not in sync with what the original build tool does. Problems with the current scheme include any configuration of `scalaJSModuleInitializers` and `scalaJSLinkerConfig` done in users' builds and that is not handled by bloop.

In general, a Scala.js codebase is entirely defined by (per sbt triple `Project` / `Configuration` / (`fastOptJS` and `fullOptJS`)):

- the `fullClasspath`
- `scalaJSLinkerConfig` and `scalaJSModuleInitializers`
- `scalaJSVersion`, which determines the version of `scalajs-linker`

These are the specific things that bloop should take from the build tool. It should not try to reconstruct `scalaJSLinkerConfig`, `scalaJSModuleInitializers` and `scalaJSVersion` other than through those specific values in the build tool.
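As a sketch of what taking those values directly could look like on the exporter side, here is a `build.sbt` fragment; `exportScalaJSForBloop` is a hypothetical task name, not part of bloop or sbt-scalajs:

```scala
// build.sbt fragment (sbt-scalajs keys are auto-imported in a build that
// enables ScalaJSPlugin). Hypothetical exporter task that reads the three
// defining values straight from the build instead of reconstructing them
// from bloop-specific config.
val exportScalaJSForBloop = taskKey[Unit]("Export Scala.js settings for bloop")

exportScalaJSForBloop := {
  val classpath    = (Compile / fullClasspath).value             // 1. the fullClasspath
  val linkerConfig = (Compile / scalaJSLinkerConfig).value       // 2. linker config, verbatim,
  val initializers = (Compile / scalaJSModuleInitializers).value //    and module initializers
  val version      = scalaJSVersion                              // 3. fixes the scalajs-linker version
  // Serialize (classpath, linkerConfig, initializers, version) into the
  // bloop config file here, without reinterpreting any of them.
  streams.value.log.info(s"Exporting Scala.js $version settings")
}
```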
> It should not depend statically on a version of scalajs-linker. It can statically depend on a version of scalajs-linker-interface

Was this option possible in 0.6.x and the early 1.x RCs, or is this something new? Is `scalajs-linker-interface` missing any API/feature available in the `scalajs-linker` artifact? I'm totally fine doing this. I suspect we're using `scalajs-linker` in Scala.js 1.0 because we were not aware of the linker interface artifact.
All of the linking infrastructure we have in place assumes we work against a linker interface (that's what we do in the Scala Native integration) and then classload it dynamically based on the project's configuration. Based on what you say, it seems this is not true for 1.x though.
> It should not outsmart the build tool's actual config of scalaJSLinkerConfig and scalaJSModuleInitializers, instead it should just take the value of those two settings from the build and roll with those.

If I remember correctly, we didn't do this because it depends on `fullClasspath`, or it can trigger compilation while exporting the project, which is something we've actively tried to avoid.
> In general, a Scala.js codebase is entirely defined by (per sbt triple Project / Configuration / (fastOptJS and fullOptJS)): the fullClasspath

Which brings me to the elephant in the room, which is that we can't trigger `fullClasspath`, and that's why we use `bloopInternalDependencyClasspath`, which doesn't trigger compilation while getting the full classpath.

The current Scala.js and Scala Native integrations in `sbt-bloop` have to live with this for the moment. I think it would be useful if you used something similar to `bloopInternalDependencyClasspath` in your Scala.js sbt plugin so that we don't need to do all of the magic we do today. Another option is to put in the time to fully make sbt offload compilation to bloop; when that's ready, we can trigger as much downstream compilation as we want from within the exporter.
I feel your recommendations are important, but over-indexing on the "should's" and not the "why's" misses the big picture of why the current implementation works as it does.
> Was this option possible in 0.6.x and the early 1.x RC or is this something new? Is the scalajs-linker-interface missing any API/feature available in the scalajs-linker artifact? I'm totally fine doing this. I suspect we're using scalajs-linker in Scala.js 1.0 because we were not aware of the linker interface artifact.
No, that wasn't possible in 0.6.x. And in fact I have realized since yesterday that it is not possible in 1.x either (I've opened https://github.com/scala-js/scala-js/pull/4087 for discussion upstream). So in fact the only way to make this work is to classload `scalajs-linker` and all its transitive dependencies (including `scalajs-linker-interface`), and then reflectively call a few things from there.
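The classload-then-reflect approach can be sketched as below. Since the real `scalajs-linker` jars are not assumed to be available here, the snippet demonstrates the pattern (an isolated `URLClassLoader` plus reflective invocation) against a JDK class standing in for the linker's entry point; in the real scenario the jar URLs and class name would come from the artifacts resolved for the project's `scalaJSVersion`:

```scala
import java.net.URLClassLoader

// Sketch: load a tool (here a stand-in for scalajs-linker) in its own
// classloader and drive it reflectively, so the host never links against
// a fixed version of it at compile time.
object ReflectiveLinkerSketch {
  // `jars` would be the scalajs-linker artifacts resolved for the project's
  // scalaJSVersion; here it is empty, so the parent loader supplies the class.
  def loadAndRun(jars: Array[java.net.URL], className: String, input: String): String = {
    val loader   = new URLClassLoader(jars, getClass.getClassLoader)
    val clazz    = loader.loadClass(className)
    val instance = clazz.getDeclaredConstructor().newInstance()
    // Reflectively call a method; with the real linker this would be the
    // linking entry point described by scalajs-linker-interface.
    val method = clazz.getMethod("append", classOf[String])
    method.invoke(instance, input).toString
  }

  def main(args: Array[String]): Unit = {
    // java.lang.StringBuilder stands in for the dynamically loaded tool.
    val out = loadAndRun(Array.empty, "java.lang.StringBuilder", "linked!")
    println(out) // prints "linked!"
  }
}
```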
> All of the linking infrastructure we have in place assumes we work against a linker interface (that's what we do in the Scala Native integration) and then classload it dynamically based on the project's configuration. Based on what you say, it seems this is not true for 1.x though.
Hum, I'm confused now. I must be misunderstanding what you're saying. If you are already classloading the linker artifact dynamically based on the project's configuration, then the present issue shouldn't have happened in the first place. Perhaps you mean that you have your own linker interface for Scala.js and Scala Native that you classload? But if that one is statically compiled with one version of the linker, and it's always that linker that is loaded together with it, then we are back to square one.
Perhaps if you could point me to where you are taking the Scala.js version of the project into account, it would be easier for me to understand what bloop does now and how best to fix it so that the present issue does not come back with every Scala.js version.
> It should not outsmart the build tool's actual config of scalaJSLinkerConfig and scalaJSModuleInitializers, instead it should just take the value of those two settings from the build and roll with those.
> If I remember correctly, we didn't do this because it depends on fullClasspath or it can trigger compilation while exporting the project, which is something we've actively tried to avoid.
I thought exporting the projects systematically compiled everything to get the SemanticDB anyway? At least that's my experience with Metals (which is the only way I interact with bloop), where my observation is that when importing the project, it will trigger compilation of everything. Perhaps this is just because I tend to work in complex builds with source and resource generators?
If this is truly a concern, I understand the difficulty of dealing with `scalaJSModuleInitializers`. However, at least it shouldn't be an issue for the two other major concerns, namely `scalaJSLinkerConfig` (which is a Setting) and the version of `scalajs-linker`.
> In general, a Scala.js codebase is entirely defined by (per sbt triple Project / Configuration / (fastOptJS and fullOptJS)): the fullClasspath
> Which brings me to the elephant in the room, which is that we can't trigger fullClasspath and that's why we use bloopInternalDependencyClasspath which doesn't trigger compilation while getting the full classpath.
`fullClasspath` is not an issue here. I understand why bloop does what it does with the classpaths. This is fine. Or at least it's no different than for JVM projects, for which I assume that customizing `fullClasspath` in sbt would also cause different results with bloop than with sbt.

I tried making a clear separation between a) all the things that define a Scala.js codebase (which includes `fullClasspath`) and b) the things I think bloop should take directly from the build without trying to reconstruct them itself (which does not include `fullClasspath`). I'm sorry if that wasn't clear enough.
> The current Scala.js and Scala Native integrations in sbt-bloop have to live with this for the moment. I think it would be useful that you use a similar thing to bloopInternalDependencyClasspath in your scalajs sbt plugin so that we don't need to do all of the magic we do today.
The philosophy of sbt-scalajs is to mimic to the greatest extent possible the semantics of the sbt task graph, including all its zillions of classpath settings and tasks. We achieve that by loading the IR from `fullClasspath`, and constructing `scalaJSModuleInitializers` from `mainClass`. I don't know what `bloopInternalDependencyClasspath` does differently than sbt's `internalDependencyClasspath`. If you can point me to the difference, I can see whether we can do something about it. But our main priority will be parity with sbt's JVM handling of classpaths and main class.
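That `mainClass`-to-initializers derivation can be sketched as a `build.sbt` fragment; the exact wiring shown is illustrative (sbt-scalajs gates its own version behind `scalaJSUseMainModuleInitializer`), but `ModuleInitializer.mainMethodWithArgs` is the real entry point in `scalajs-linker-interface`:

```scala
import org.scalajs.linker.interface.ModuleInitializer

// Illustrative: derive module initializers from the project's mainClass,
// in the spirit of what sbt-scalajs does when
// scalaJSUseMainModuleInitializer := true.
Compile / scalaJSModuleInitializers ++= {
  (Compile / mainClass).value.toSeq.map { main =>
    // Registers `main.main(args)` as an entry point of the linked module.
    ModuleInitializer.mainMethodWithArgs(main, "main")
  }
}
```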
> I feel your recommendations are important, but over-indexing on the "should's" and not the "why's" misses the big picture of why the current implementation works as it does.
I've tried to explain the "why's" of my "should's". Perhaps you'd have preferred if I had started my bullets with the problems (the "why's") followed by the solutions (the "should's"), rather than the other way around?
> Hum, I'm confused now. I must be misunderstanding what you're saying. If you are already classloading the linker artifact dynamically based on the project's configuration, then the present issue shouldn't have happened in the first place. Perhaps you mean that you have your own linker interface for Scala.js and Scala Native that you classload? But if that one is statically compiled with one version of the linker, and it's always that linker that is loaded together with it, then we are back to square one.
Yeah, agreed :+1: I tried to say that our infra allows us to do this, and for some reason I thought that's what we were doing for Scala Native, but after double-checking, it isn't the case: we depend on a static version of the Scala Native and Scala.js linkers. I would love it if this changed. Ideally, we would avoid resolving the Scala.js linker artifacts ourselves and get them from the build tool.
> I thought exporting the projects systematically compiled everything to get the SemanticDB anyway? At least that's my experience with Metals (which is the only way I interact with bloop), where my observation is that when importing the project, it will trigger compilation of everything. Perhaps this is just because I tend to work in complex builds with source and resource generators?
This is very infrequent in most builds; it only happens when people work on complex builds that have lots of sophisticated dependencies (for example, if the compilation of a project depends on running another one). Because the exporter avoids `fullClasspath` and other sensitive task options that trigger compilation, most builds do not require compilation to be exported.
> a) all the things that define a Scala.js codebase (which includes fullClasspath) and b) the things I think bloop should take directly from the build without trying to reconstruct them itself (which does not include fullClasspath)
Right, but just to make it clear: depending on those tasks/settings you mentioned would trigger `fullClasspath` under the hood, because you're using it there.
> and constructing scalaJSModuleInitializers from mainClass.
IIRC, `mainClass` can also trigger compilation, but I'm not 100% sure about this.
> I don't know what bloopInternalDependencyClasspath does differently than sbt's internalDependencyClasspath. If you can point me to the difference, I can see whether we can do something about it. But our main priority will be focused on parity with sbt's JVM handling of classpaths and main class.
`bloopInternalDependencyClasspath` is a close copy of sbt's `internalDependencyClasspath`, with the difference that we do not call `productDirectories` but `bloopProductDirectories`, as the former does trigger compilation. This was made to simulate sbt semantics completely.

I believe we could stop using our own task if sbt defined a way to get `fullClasspath` declaratively without forcing compilation.
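A rough sketch of the idea behind such a task (the key name is hypothetical, and this is not bloop's actual implementation): mirror `internalDependencyClasspath`, but read the would-be output directories instead of demanding compiled products.

```scala
// build.sbt fragment. `classpathNoCompile` is an illustrative key, not
// bloop's real task name.
val classpathNoCompile = taskKey[Seq[File]](
  "Internal dependency classpath that does not trigger compilation"
)

classpathNoCompile := {
  // `classDirectory` is a Setting: reading it never forces `compile`,
  // unlike `productDirectories`/`fullClasspath`, which (per the discussion
  // above) do trigger compilation when evaluated.
  val self = (Compile / classDirectory).value
  // A real implementation would also walk the internal project dependencies
  // (as internalDependencyClasspath does) and collect their directories.
  Seq(self)
}
```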
> I've tried to explain the "why's" of my "should's". Perhaps you'd have preferred if I had started my bullets with the problems (the "why's") followed by the solutions (the "should's"), rather than the other way around?
Thanks for elaborating. What I meant is that sbt-bloop needs to work around some big limitations in sbt to do what we do, and that constrains how much information we can reuse from the Scala.js plugin. The fact that some settings/tasks return custom types, and that we're calling these tasks reflectively from within sbt-bloop, is another important limitation.
The underlying problem described in the conversation between @jvican and @sjrd is still there.
However, the particular issue with Scala.js `1.x` where `x != 0` was solved here: https://github.com/scalacenter/bloop/pull/1423
Maybe this issue should change title and description? Or should we create a new one to fix the problem of the fixed Scala.js version?
@lolgab let's maybe open another issue and point to this one? Do you know the details well enough to create an issue with a description?
I am getting a strange error, where my version is not even listed:

```
java.lang.RuntimeException: Expected compatible Scala.js version [0.6, 1], given
```

I am using Scala.js 1.13.1.
```
2023.08.27 16:04:14 INFO {
  "jsonrpc": "2.0",
  "id": "8",
  "error": {
    "code": -32603,
    "message": "java.lang.RuntimeException: Expected compatible Scala.js version [0.6, 1], given\n\tat scala.sys.package$.error(package.scala:30)\n\tat bloop.engine.tasks.toolchains.ScalaJsToolchain$.artifactNameFrom(ScalaJsToolchain.scala:131)\n\tat bloop.engine.tasks.TestTask$.discoverTestFrameworks(TestTask.scala:245)\n\tat bloop.engine.tasks.TestTask$.findTestNamesWithFramework(TestTask.scala:380)\n\tat bloop.bsp.BloopBspServices.$anonfun$scalaTestClasses$3(BloopBspServices.scala:580)\n\tat scala.collection.immutable.List.map(List.scala:297)\n\tat bloop.bsp.BloopBspServices.$anonfun$scalaTestClasses$1(BloopBspServices.scala:578)\n\tat bloop.bsp.BloopBspServices.$anonfun$ifInitialized$4(BloopBspServices.scala:369)\n\tat bloop.task.Task.$anonfun$runAsync$8(Task.scala:268)\n\tat monix.eval.internal.TaskRunLoop$.startFull(TaskRunLoop.scala:170)\n\tat monix.eval.internal.TaskRestartCallback.syncOnSuccess(TaskRestartCallback.scala:101)\n\tat monix.eval.internal.TaskRestartCallback.onSuccess(TaskRestartCallback.scala:74)\n\tat monix.eval.internal.TaskCreate$CallbackForCreate.run(TaskCreate.scala:252)\n\tat monix.execution.internal.Trampoline.monix$execution$internal$Trampoline$$immediateLoop(Trampoline.scala:66)\n\tat monix.execution.internal.Trampoline.startLoop(Trampoline.scala:32)\n\tat monix.execution.schedulers.TrampolineExecutionContext$JVMOptimalTrampoline.startLoop(TrampolineExecutionContext.scala:132)\n\tat monix.execution.internal.Trampoline.execute(Trampoline.scala:40)\n\tat monix.execution.schedulers.TrampolineExecutionContext.execute(TrampolineExecutionContext.scala:57)\n\tat monix.execution.schedulers.BatchingScheduler.execute(BatchingScheduler.scala:50)\n\tat monix.execution.schedulers.BatchingScheduler.execute$(BatchingScheduler.scala:47)\n\tat monix.execution.schedulers.AsyncScheduler.execute(AsyncScheduler.scala:31)\n\tat monix.execution.schedulers.StartAsyncBatchRunnable.run(StartAsyncBatchRunnable.scala:42)\n\tat java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1395)\n\tat java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)\n\tat java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)\n\tat java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)\n\tat java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)\n\tat java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)\n"
  }
}
```
That looks like the Scala.js version is not defined in your Bloop configuration files, which is why this seems to be happening. Any idea why the export would skip that? Anything complex about your build?
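For context, here is a simplified, hypothetical sketch of the kind of version check that produces that message (this is not bloop's actual `ScalaJsToolchain.artifactNameFrom`, and the bridge names are made up): an empty `version` string in the config file matches neither supported series, which is why the error message ends abruptly after "given".

```scala
// Simplified, hypothetical version of the check behind the error above:
// pick a toolchain artifact from the Scala.js version in the bloop config.
object ScalaJsVersionCheck {
  def artifactNameFrom(version: String): String =
    if (version.startsWith("0.6")) "bloop-js-bridge-0-6" // illustrative names
    else if (version.startsWith("1")) "bloop-js-bridge-1"
    else sys.error(s"Expected compatible Scala.js version [0.6, 1], given $version")

  def main(args: Array[String]): Unit = {
    println(artifactNameFrom("1.13.1"))
    // An empty version string (as in the config below) falls through to the
    // error branch, reproducing the reported message.
  }
}
```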
I have the exact same issue; in the bloop JSON file I have:

```json
"platform": {
  "name": "js",
  "config": {
    "version": "",
    "mode": "debug",
    "kind": "none",
    "emitSourceMaps": false,
    "jsdom": false,
    "toolchain": []
  },
  "mainClass": []
},
```
Any idea why the configuration was produced this way? What build tool are you using in your project?
I am using sbt 1.9.3 with these plugins:

```scala
addSbtPlugin("com.disneystreaming.smithy4s" % "smithy4s-sbt-codegen" % "0.17.13")
addSbtPlugin("com.github.sbt" % "sbt-native-packager" % "1.9.16")
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "2.1.1")
addSbtPlugin("com.eed3si9n" % "sbt-buildinfo" % "0.11.0")
addSbtPlugin("io.github.davidgregory084" % "sbt-tpolecat" % "0.4.4")
addSbtPlugin("org.scalameta" % "sbt-scalafmt" % "2.5.0")
addSbtPlugin("ch.epfl.scala" % "sbt-scalafix" % "0.11.0")
addSbtPlugin("com.colisweb" % "sbt-datadog" % "2.2.1")
addSbtPlugin("org.scala-js" % "sbt-scalajs" % "1.13.2")
addSbtPlugin("org.portable-scala" % "sbt-scalajs-crossproject" % "1.3.2")
addSbtPlugin("org.typelevel" %% "sbt-feral-lambda" % "0.2.3")
addSbtPlugin("ch.epfl.scala" % "sbt-scalajs-bundler" % "0.21.1")
```

And VS Code Metals:

```scala
addSbtPlugin("ch.epfl.scala" % "sbt-bloop" % "1.5.11")
```
It is a multi-module project where only one module targets Scala.js:

```scala
lazy val foo = (project in file("modules/foo"))
  .enablePlugins(BuildInfoPlugin, ScalaJSPlugin, LambdaJSPlugin, Smithy4sCodegenPlugin)
  .settings(
    name := "foo",
    libraryDependencies ++= List(
      "org.typelevel" %%% "feral-lambda" % "0.2.3",
      "org.typelevel" %%% "log4cats-core" % Versions.log4Cats,
      "org.typelevel" %%% "log4cats-js-console" % Versions.log4Cats
    ),
    Compile / npmPackageDependencies ++= Seq(
      // Some stuff here
    )
  )
```

Worth noting that there isn't a root module that aggregates the others.
Experiencing same issue in multimodule project with one Scala.js module.
This should be fixed by https://github.com/scalacenter/bloop/pull/2161
Bloop is currently able to compile and link Scala.js 0.6.x and 1.0.x, but not 1.1.0. It fails with the following error:
Based on the release notes of 1.1.0, I think Scala.js 1.1.0 could largely reuse the Scala.js 1.0 bridge.