> This works well with Bazel where the rule is to run one test class at a time, each in its own small library, but doesn't quite fit other "project-oriented" build tools.
I didn't understand this. JUnit 4 only supports tests and test suites, and we are doing the same?
What I mean is that each `java_test` has only one `test_class` (which could be a JUnit suite), and you usually use macros to generate many `java_test` rules in the same package with a supporting `java_library`, whereas other, more project-oriented build tools (like Maven and Gradle, but even Ant too) compile all of their `src/test/java` at once and run all those tests at once as well.

It's the same with `j2cl_test`: a single `test_class`, leading to a single generated `@J2clTestInput`-annotated class, the two being compiled together in isolation from the other tests in the same package and from the supporting library. That `Iterables.getOnlyElement` works well in this case, but not when you compile a whole `src/test/java` at once (unless you require users to have only a single `@J2clTestInput`-annotated class for the whole project, but that wouldn't play well with Gradle's incremental annotation processing --we'll come to it eventually--, as it would force recompilation of the whole `src/test/java` each time a single file changes).
Are you saying the Gradle plugin needs to see/process the whole `src/test/java` in a single APT run and generate all the test code in a single pass? (vs. doing that per package or per suite, etc.)

And could you share a little bit about how the whole mechanism in Gradle normally works for testing, and what your plan is in the J2CL case? For example, how does Gradle choose which tests to run, in which grouping do they run, do they share a single VM instance, etc.?

I'm trying to understand what should fold into plugin space vs. compiler space, and why our instance per "user-defined" test target doesn't fit here.

For clarification, the change is not a big deal; it just doesn't fit into our mental model. My general stance has been that we should provide tools that are as simple/minimal as possible. This being an APT is just because it was simple to implement it as such; I would rather hide that fact from the outside.

I think I may have found a better way to ask the question:

If I had given you an executable that generated "JavaScript" test files from "JUnit 4" test source (let's say it was a Graal image), wouldn't that have worked fine? Or are we just trying to exploit the fact that this is implemented as an APT in order to gain much better performance in Gradle?
For a standard JVM project, there's a single task to compile `src/test/java` (then a single task to process `src/test/resources`), then a single task to run all the tests found. Finding the tests to run is done through several heuristics (detecting the testing framework among JUnit 3/4, JUnit Platform, or TestNG, then the tests, depending on the framework: see https://docs.gradle.org/6.5/userguide/java_testing.html#sec:test_detection), which can be overridden and/or fine-tuned through explicit configuration. Tests are then run in one or several test runner processes (each running a given number of test classes), with configurable parallelism (see https://docs.gradle.org/6.5/userguide/java_testing.html#sec:test_execution).
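For illustration, a minimal sketch of how those knobs surface on Gradle's built-in `Test` task when configured from plugin code (the plugin class name and the specific values are just examples):

```java
import java.util.List;
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.tasks.testing.Test;

public class TestTuningSketch implements Plugin<Project> {
  @Override
  public void apply(Project project) {
    project.getTasks().withType(Test.class).configureEach(test -> {
      test.useJUnit();                    // pin the framework instead of detecting it
      test.setScanForTestClasses(false);  // turn off bytecode-based test detection...
      test.setIncludes(List.of("**/*Test.class")); // ...and select tests by pattern
      test.setMaxParallelForks(2);        // run test classes in two runner JVMs
      test.setForkEvery(100L);            // recycle each runner JVM after 100 classes
    });
  }
}
```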
I haven't yet wrapped my head around how to integrate J2CL testing with Gradle, but in any case this won't be based on any existing mechanism used for JVM tests.
The J2CL Maven plugin reimplements the whole build pipeline itself (similarly to GWT) due to limitations of Maven, which conversely gives it more flexibility. It currently expects users to annotate their test classes with `@J2clTestInput` (generally pointing at the same class, e.g. `@J2clTestInput(FooTest.class) public class FooTest {}`), implying that they have an explicit dependency on a JAR providing that annotation; it runs `GwtIncompatibleStripper` on the sources, then `javac`s them, explicitly adding the `J2clTestingProcessor` to the classpath –using a forked version that specifically allows it to run with more than one `@J2clTestInput`-annotated class–, then processes the `test_summary.json` to run the Closure Compiler for each test and then run it with WebDriver.
For Gradle, I have a task that runs `GwtIncompatibleStripper`, then a `JavaCompile` task (standard, from Gradle) that takes the output of the stripper as input (and where I initially thought about the user explicitly adding the `J2clTestingProcessor` to the processor path), then a task that runs J2CL on both the stripper output and the annotation-processor-generated sources, and finally I was thinking about a (single) test task processing the `test_summary.json` (or possibly using a naming convention based on detected test classes, by pattern-matching on `*.class` files).
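A rough sketch of that wiring as plugin code, where `StripGwtIncompatible` and `TranspileJ2cl` are hypothetical task types standing in for my actual ones:

```java
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.tasks.TaskProvider;
import org.gradle.api.tasks.compile.JavaCompile;

public class J2clTestWiringSketch implements Plugin<Project> {
  @Override
  public void apply(Project project) {
    TaskProvider<StripGwtIncompatible> strip = project.getTasks()
        .register("stripTestGwtIncompatible", StripGwtIncompatible.class);

    // The standard compileTestJava consumes the stripper output as its sources.
    TaskProvider<JavaCompile> compile = project.getTasks()
        .named("compileTestJava", JavaCompile.class);
    compile.configure(c ->
        c.setSource(strip.flatMap(StripGwtIncompatible::getStrippedSources)));

    // J2CL sees both the stripped sources and the processor-generated ones.
    project.getTasks().register("transpileTestJ2cl", TranspileJ2cl.class, t -> {
      t.getSources().from(strip.flatMap(StripGwtIncompatible::getStrippedSources));
      t.getSources().from(compile.flatMap(c ->
          c.getOptions().getGeneratedSourceOutputDirectory()));
    });
  }
}
```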
The big unknown is how to do that "test" task, and where to put that `J2clTestingProcessor`, given what it generates (a `test_summary.json`, plus a `.testsuite` file and a `…_Adapter.java` class per test).

Putting the `@J2clTestInput` in the `src/test/**/*.java` sources, and the `J2clTestingProcessor` in the `compileTestJava`, makes this simpler, as we directly give the output to J2CL, like for non-test code.
Now I fully understand that in Bazel the annotation processor, and even the `@J2clTestInput` annotation, are an implementation detail of the `j2cl_test` rule. We could possibly keep it that way in Gradle by having the test task generate a `@J2clTestInput`-annotated class, like in `j2cl_test` (after pattern-matching to determine which classes are test classes, similar to Bazel's `gen_j2cl_tests`), then run a `javax.tools.JavaCompiler.CompilationTask` (with `-proc:only` as we don't actually need to compile it) to process the generated class, then run J2CL; then finally process the `test_summary.json` and run the Closure Compiler and then the tests with WebDriver.
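The `-proc:only` step could look roughly like this (a sketch; the paths, classpath value, and error handling are placeholders):

```java
import java.nio.file.Path;
import java.util.List;
import javax.tools.JavaCompiler;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;

public class ProcessGeneratedSuite {
  public static void run(Path generatedSuite, String testClasspath) {
    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
    StandardJavaFileManager fileManager =
        compiler.getStandardFileManager(null, null, null);
    JavaCompiler.CompilationTask task = compiler.getTask(
        null, // default writer for diagnostics
        fileManager,
        null, // default diagnostic listener
        List.of("-proc:only", "-classpath", testClasspath),
        null, // no class names for an explicit processing-only round
        fileManager.getJavaFileObjects(generatedSuite.toFile()));
    // Run J2clTestingProcessor explicitly rather than via classpath discovery.
    task.setProcessors(
        List.of(new com.google.j2cl.junit.apt.J2clTestingProcessor()));
    if (!Boolean.TRUE.equals(task.call())) {
      throw new IllegalStateException("Annotation processing failed");
    }
  }
}
```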
Note that caching in Gradle works at the task level (inputs → outputs), and you cannot really generate tasks "on the fly". The above process could be split up into phases (at a minimum, generating the Closure-compiled tests and then running them), and splitting further means going back to the above graph, possibly enriched further:

(here, `generateTests` would be a task that generates `@J2clTestInput`-annotated classes and runs the `J2clTestingProcessor` on them, encapsulating the process and moving it out of `compileTestJava`; and I've split the `test` task into compiling and running the tests; that last step however means we cannot parallelize Closure compilation of one test with execution of another one)
> We could possibly keep it that way in Gradle by having the test task generate a `@J2clTestInput`-annotated class, like in `j2cl_test` (after pattern-matching to determine which classes are test classes, similar to Bazel's `gen_j2cl_tests`)
Yes, that was the intention! The idea is that you create a `J2clTestInput` for every executable test unit that is intended to be run together (the annotation happens to be how we communicate that currently), and we provide you a summary for that execution unit.

Currently, externally, we are lacking the "open-source tool" (potentially based on Karma) that takes the code/summary and executes the tests based on that (which is essentially a replacement for the JVM-based JUnit runner). Since such a tool is a shared piece, we should put it in a centralized place. But I don't have the bandwidth for that.

One problem to note; IIUC, since there is no granularity in Gradle, all tests will depend on all sources, or maybe even on all tests. That will very likely slow down test execution times even if the build steps are optimized here.
> One problem to note; IIUC, since there is no granularity in Gradle, all tests will depend on all sources, or maybe even on all tests. That will very likely slow down test execution times even if the build steps are optimized here.
Except, Gradle exposes an API to get incremental changes in source files so a task can adapt its actual action :wink: This is how they do incremental compilation (only recompiling the Java file that changed, and the ones that depend on it, and deleting class files when a Java file is deleted/renamed), what I'm already doing with `GwtIncompatibleStripper`, and what I was intending to do with J2CL but am lacking file dependency information for (#99). If we get enough dependency information out of J2CL and Closure, we could probably only re-run the tests that have changed.
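For illustration, this is roughly how such an incremental task is declared with Gradle 6.x's `InputChanges` API (the task name and the per-file actions are placeholders):

```java
import org.gradle.api.DefaultTask;
import org.gradle.api.file.DirectoryProperty;
import org.gradle.api.tasks.InputDirectory;
import org.gradle.api.tasks.OutputDirectory;
import org.gradle.api.tasks.TaskAction;
import org.gradle.work.ChangeType;
import org.gradle.work.FileChange;
import org.gradle.work.Incremental;
import org.gradle.work.InputChanges;

public abstract class StripGwtIncompatibleTask extends DefaultTask {
  // @Incremental opts this input in to receiving per-file change information.
  @Incremental
  @InputDirectory
  public abstract DirectoryProperty getSources();

  @OutputDirectory
  public abstract DirectoryProperty getStrippedSources();

  @TaskAction
  public void strip(InputChanges changes) {
    for (FileChange change : changes.getFileChanges(getSources())) {
      if (change.getChangeType() == ChangeType.REMOVED) {
        // Delete the previously stripped counterpart of the removed source.
      } else {
        // Re-run GwtIncompatibleStripper on just this added/modified file.
      }
    }
  }
}
```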
E.g. let's say you change a test case (`FooTest.java`): the `stripTestGwtIncompatible` task will only reprocess that file. Because of that, only the stripped `FooTest.java` file will have changed (actually, because `GwtIncompatibleStripper` is so fast, and Gradle uses file checksums rather than mtime anyway, we could just reprocess all source files), so `compileTestJava` will only recompile that file, leading to (in this case) only `FooTest.class` having changed (and possibly inner classes `FooTest$Bar.class` and `FooTest$1.class`). `generateTests` can then re-generate only one test (one `test_summary.json`, the `FooTest.testsuite`, and `javatests/…/FooTest_Adapter.java`), `transpileTestJ2cl` would only reprocess `FooTest.java` and `FooTest_Adapter.java`, then `compileTestClosure` would only recompile one test, and `test` could possibly re-run only that test.

All tasks would have run, but they'd have decided to do only minimal work.
This is indeed very different from Bazel's more basic (but very efficient!) approach of only deciding whether to run targets or not. Put differently, because Bazel has such fine-grained targets, an "incremental build" alone (deciding whether to run a target or not) is enough, and you optimize your build time by splitting targets into smaller ones. In Gradle, an "incremental build" is not enough, because tasks can be so big; so Gradle gives tasks access to more fine-grained information about changes to their inputs, allowing "incremental tasks".
Fwiw, I have that `generateTests` task almost working. Currently my input source file is still annotated with `@J2clTestInput` and I'm only calling the `J2clTestingProcessor` on the compiled class, but generating a Java file instead, like in `j2cl_test`, should be easy!
> Currently, externally, we are lacking the "open-source tool" (potentially based on Karma) that takes the code/summary and executes the tests based on that (which is essentially a replacement for the JVM-based JUnit runner). Since such a tool is a shared piece, we should put it in a centralized place. But I don't have the bandwidth for that.
Fwiw, we (the Maven and Gradle plugins) would probably rather rely on calling Selenium WebDriver directly from Java anyway. The Maven plugin already does that (limited to ChromeDriver and HtmlUnit). I'll reimplement some of that on my side in Gradle, but eventually we'll possibly share some code (if it's worth it; for now my Gradle tasks are really small, and do not rely on any change from the Vertispan forks of either J2CL or Closure Compiler, yet should work equally well with either).
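For illustration, a bare-bones sketch of driving a compiled test page with WebDriver (the `G_testRunner` globals polled here come from Closure's jsunit runner; the page URL, timeout, and polling details are just assumptions):

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class WebDriverTestRunnerSketch {
  /** Loads the compiled test page and polls the jsunit runner for completion. */
  public static boolean runTest(String testPageUrl) throws InterruptedException {
    WebDriver driver = new ChromeDriver();
    try {
      driver.get(testPageUrl);
      JavascriptExecutor js = (JavascriptExecutor) driver;
      long deadline = System.currentTimeMillis() + 60_000;
      while (System.currentTimeMillis() < deadline) {
        Boolean finished = (Boolean) js.executeScript(
            "return window.top.G_testRunner && window.top.G_testRunner.isFinished();");
        if (Boolean.TRUE.equals(finished)) {
          return (Boolean) js.executeScript(
              "return window.top.G_testRunner.isSuccess();");
        }
        Thread.sleep(100); // poll until the suite reports completion
      }
      return false; // timed out
    } finally {
      driver.quit();
    }
  }
}
```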
Wrt the slowness I was referring to: at the end you will include all the JavaScript sources when bundling the test. Or am I missing something?
We can have the same granularity as `jsunit_test`/`j2cl_generate_jsunit_suite`; it will just be a single Gradle task deciding what to recompile depending on what has changed (rather than Bazel's "one target per test class").

BTW, am I right (from how the Maven plugin works) that you do one compilation per JS file listed in the `test_summary.json`? (rather than a single compilation where all those files are used as entry points)
> BTW, am I right (from how the Maven plugin works) that you do one compilation per JS file listed in the `test_summary.json`?

It is one compilation per JS file.
> We can have the same granularity as `jsunit_test`/`j2cl_generate_jsunit_suite`; it will just be a single Gradle task deciding what to recompile depending on what has changed (rather than Bazel's "one target per test class")

I think there is still a misunderstanding. I'm not asking about Java recompilation; I'm talking about the bundled code for test execution, or Closure compilation.

For example, given Guava:

- `src/javatests/com/google/common/collect/SomeTests.java`
- `src/javatests/com/google/common/cache/SomeTests.java`
- `src/javatests/com/google/common/graph/SomeTests.java`
- `src/javatests/com/google/common/math/SomeTests.java`

IIUC, these tests will all be defined in the same project, and `collect/SomeTests.java` will need to bundle all of `src/**` into its bundle, right? In the Bazel case, the target granularity provides you that information by giving you independent dep graphs. Without that information, you need to supply all the project input to the development bundler or compiler to process.
The Maven plugin presently runs the annotation processor first, so it will get the `test_summary.json` for all four files, along with each individual `.testsuite` file. Each of the four `.testsuite` files will have Closure run on it once; no `.testsuite` will be present in the bundled or compiled output of any other `.testsuite`, though all of the `.java` files will end up in each bundle.

At least for Maven/Gradle (unlike Bazel which, as I understand it, assembles the classpath in a much more fine-grained way), this is how traditional JVM tests would operate as well. In Maven/Gradle it is generally assumed that anything in a given java or java test directory can reference anything else in there.
One further note on the current state of J2CL in Maven: while you can produce a single large BUNDLE JS output per `.testsuite`, this is not actually the default (nor do we have any projects doing this, to my knowledge). Instead, the default for tests and "dev mode" is to produce a BUNDLE per Java source directory, and `.testsuite` files merit their own "source directory" so that each ends up in a BUNDLE of exactly one file. This means that in the example above (ignoring any dependencies other than `src/java`, `src/javatests`, and the generated code), we end up with 6 bundles:

- `src/java` transpiled sources
- `src/javatests` transpiled sources, which depends on the `src/java` BUNDLE
- `collect/SomeTests.testsuite`, which depends on the `src/javatests` BUNDLE
- `cache/SomeTests.testsuite` (etc.)
- `graph/SomeTests.testsuite`
- `math/SomeTests.testsuite`
It appears this effect could also be achieved through the `--chunk` argument to closure-compiler, but implementing it this way allows us to change a test and not need to re-BUNDLE any "upstream" dependencies (as even BUNDLEing is surprisingly expensive, especially from a cold start). This comes with the obvious downside that closure-compiler must be set to `dependencyMode=SORT_ONLY` rather than `PRUNE`, so the actual runtime JS is somewhat larger than it needs to be; but for projects with many tests (gwt-dom, gwt-widget as examples), the built output is wildly smaller, on the order of tens of GB.
Your example does something similar to what I was referring to: you need to run your test with all the source files, even though you divided them into 3 chunks per test (suite + all sources in javatests + all sources in java).

Bazel follows your dep graph from the test, since the test needs to define its own dependencies instead of assuming everything in the project is a dependency (so it will only collect the sources of collections and the collections tests). If we had followed a similar approach, Guava testing would have taken quite a bit longer (and probably wouldn't even have worked?).

The part I didn't understand is how you reduced the output by on the order of tens of GBs with the setup you described. You included all the sources in the end?
> I think there is still a misunderstanding. I'm not asking about Java recompilation; I'm talking about the bundled code for test execution, or Closure compilation.

Yes, I was talking about Closure compilation, and yes, unless we have some information about file dependencies (e.g. #99, or something based on Closure internals to also include non-J2CL sources), we'll have to feed everything (suite + all sources in javatests + all sources in java) to the Closure Compiler and let it prune unneeded dependencies.
Generating a `@J2clTestInput`-annotated JUnit suite referencing all the tests we want to process, and then processing it with `J2clTestingProcessor`, all from within a `generateTests` task, works great.

(transpiling the generated tests works too; I'm still working on bundling and running those tests)
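For illustration, such a generated suite looks roughly like this (the `AllTestsSuite`, `FooTest`, and `BarTest` names are made up; the real member list comes from the pattern-matched test classes):

```java
import com.google.j2cl.junit.apt.J2clTestInput;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// A single generated entry point referencing every detected test class,
// giving the processor exactly one @J2clTestInput-annotated class to handle.
@J2clTestInput(AllTestsSuite.class)
@RunWith(Suite.class)
@Suite.SuiteClasses({FooTest.class, BarTest.class})
public class AllTestsSuite {}
```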
Filing this as a feature request as it's related to using J2CL outside of the Bazel tooling (testing isn't supported with Bazel either anyway).

**Is your feature request related to a problem? Please describe.**
`J2clTestingProcessor` (from `//junit/generator/java/com/google/j2cl/junit/apt:junit_processor`) expects a single `@J2clTestInput`-annotated class per round: https://github.com/google/j2cl/blob/f4420bf330be0c475dc66836c38393ceede9d143/junit/generator/java/com/google/j2cl/junit/apt/J2clTestingProcessingStep.java#L66
This works well with Bazel, where the rule is to run one test class at a time, each in its own small library, but doesn't quite fit other "project-oriented" build tools.

**Describe the solution you'd like**
Process all annotated classes, not just one; as in https://github.com/Vertispan/j2cl/commit/faa85050312291ff5ee9abc0a00ebb4265003032
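A hedged sketch of what that change amounts to inside `J2clTestingProcessingStep` (shapes and helper names are approximate; see the linked commit for the real diff):

```java
// Approximate shape of the processing step; handleClass stands in for the
// step's existing per-class generation logic.
@Override
public Set<Element> process(
    SetMultimap<Class<? extends Annotation>, Element> elementsByAnnotation) {
  // Instead of Iterables.getOnlyElement(...), handle every annotated class:
  for (Element element : elementsByAnnotation.get(J2clTestInput.class)) {
    handleClass(MoreElements.asType(element));
  }
  return ImmutableSet.of();
}
```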
**Describe alternatives you've considered**
Technically, Maven- or Gradle-based (or Sbt, Buildr, whatever) tooling (another annotation processor based on JUnit 4 annotations, and/or matching all classes to find JUnit 3 tests?) could generate a JUnit 4 suite class as the sole `@J2clTestInput`-annotated class, so the processor doesn't error out. Or require that only one class is annotated, expecting that it is a JUnit 4 suite in most cases.