Closed: jnicklas closed this issue 4 years ago
Similar to the agent code, it feels like the test files are a separate unit of assembly that will need to be watched and reloaded as a developer is working on the test suite. Unlike the agent though, we won't know the full content of the test files when we publish the server.
Not only will users have many unique files that have to be bundled, but those files might have dependencies that cannot be known ahead of time. Finally, what about the compilation options that a particular project may want to use? Some will want TypeScript, others JavaScript. Some will use different babel configurations, and still others will use various linting options. How do we have BigTest build the test files, which are owned not by BigTest but by the project using BigTest, while still respecting the project's compilation standards?
Once an assembly of the test files is created, how will it be presented to the agent, and in what JavaScript contexts (server, agent, or harness) will it need to be evaluated?
Given these questions, let's approach the problem by breaking it down into two pieces: assembly and delivery.
I think that we can use parcel to build the test files in the project, and that it will take care of most of the questions I laid out above.
We could put a separate `package.json` inside the `bigtest/` directory, distinct from the application's, so that the test files would have a completely isolated set of dependencies from the main application.

As for delivering the assembled bundle to the agent, our options are either a string or a script tag.
| Aspect | Tag | String |
|---|---|---|
| transport | Can be easily transported via normal browser network requests, and can avail itself of all compression and browser optimizations. | Must be sent over the messaging protocol to the agent and evaluated with `eval()`. Potentially costly in terms of memory and network payload, or complicated if implementing compression by hand. |
| evaluation | Difficult to control the evaluation environment, because the only way to load the code is to pull it in via a script tag. | Easy to control the evaluation environment, because the bundle is just a string and evaluation is controlled by hand. |
| debugging/sourcemapping | Handled automatically by the browser, since the source is loaded and evaluated by a script tag from a file server. | Can be tricky, since we may need to provide source map data URLs manually. Need to do more research here. |
| server | Requires a static file server running inside `bigtestd` to serve the test files. | No server required. |
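To make the "string" row concrete, here is a minimal sketch of what hand evaluation might look like on the agent side. This is an illustration, not BigTest's actual code: the function name and the idea of appending a `//# sourceURL` comment (so the evaluated code at least gets a name in the debugger, partially mitigating the sourcemapping downside) are assumptions.

```typescript
// Hypothetical sketch of the "string" delivery option: the agent receives the
// bundled source over the messaging protocol and evaluates it by hand.
function evaluateBundle(source: string, name: string): unknown {
  // (0, eval) is an indirect eval, so the bundle runs in global scope rather
  // than closing over this function's locals. The sourceURL comment gives the
  // evaluated code a name in browser devtools.
  return (0, eval)(`${source}\n//# sourceURL=${name}`);
}

// A trivial "bundle" whose completion value is an object:
const result = evaluateBundle(`({ suite: "example" })`, "bigtest-bundle.js");
```

The upside this sketch demonstrates is exactly the "evaluation" row of the table: we fully control where and how the code runs.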
I think using parcel for the assembly is very reasonable. There is a question as to how we are going to specify the entry points though. I see three possible options:
1. **A bundle for each test file.** This is relatively easy to implement, and I think there shouldn't be any huge issues with it. The downside is probably performance. Also, will parcel get angry if we run a hundred instances of it in a single process?
2. **One bundle with each test file being an entry point.** We'd need to dynamically add/remove entry points as the user adds/removes test files, and that doesn't seem to be supported by parcel without reaching into its internals in rather ugly ways.
3. **One bundle with one entry point.** Here we'd need to dynamically generate the entry point somehow. This could be pretty good from a performance perspective, but it also sounds pretty complicated.
Personally I think we should try (1). It seems like the most straightforward implementation.
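The bookkeeping for option (1) could be sketched as follows. The `Bundler` interface here is a stand-in for whatever parcel actually exposes (it is an assumption, not parcel's API); the point is just the shape of the state: one bundler per test file, keyed by path, added and removed as the watcher reports files appearing and disappearing.

```typescript
// Stand-in for a per-file parcel bundler handle (hypothetical interface).
interface Bundler {
  stop(): void;
}

class TestBundlers {
  private bundlers = new Map<string, Bundler>();

  // The factory is injected so this sketch stays independent of parcel itself.
  constructor(private create: (entry: string) => Bundler) {}

  fileAdded(path: string): void {
    // idempotent: a file we already bundle doesn't get a second bundler
    if (!this.bundlers.has(path)) {
      this.bundlers.set(path, this.create(path));
    }
  }

  fileRemoved(path: string): void {
    this.bundlers.get(path)?.stop();
    this.bundlers.delete(path);
  }

  get size(): number {
    return this.bundlers.size;
  }
}
```

Whether a hundred such bundlers coexist happily in one process is exactly the open performance question above.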
I think a long-term goal would be to extend parcel to support (2) better. This doesn't seem terribly difficult: we'd just need an `addEntryPoint`/`removeEntryPoint` API, which would make sure all of the internals are set up correctly and trigger a rebuild. Looking at the source, I think this is quite doable, and if we explain why we need it, maybe we can get the parcel project to accept it. I think this would probably be more performant and less hackish than (1).
Regarding dependencies, I'm not really sure we need a boundary between those? Can't we just use the same package.json as the main app?
I think evaluation is the real sticking point here. As far as I can see, there is no option in Parcel to have a bundle eval to something. The only way for a bundle to interact with the outside world, so to speak, is by using a UMD bundle and setting a global. Given that, there is no real advantage to using the string method, because we cannot take advantage of the superior evaluation control anyway. Not having to run a server doesn't seem like enough of an upside to justify the downsides.
In other words, I think we should prefer the script tag solution.
Another option is to pursue (2) for assembly, as a hacky solution until we can solve it upstream. This would certainly make it easier to set up a server for the files.
I agree with the idea to go with script tags. While we could fight the platform for some hypothetical upside of sandboxing the tests, I think every other magnetic line pulls us towards using a script tag.
Here's a thought: option (3), "one bundle with one entry point", might be the simplest thing if we think of the harness as the primary actor in the system, and the test suite itself as simply data which the harness can use to go about its business.
`index.ts`

```typescript
import createSuite from './all'; // <- single entry point is a function that returns data.

window['@@bigtest/harness@@'].setSuiteConstructor(createSuite);
```
That way, once the harness has the entire suite, it can slice it and dice it however it wants based on communication from the agent. If it needs to run a single test, or a suite of tests, then it can do so based on the entire data structure. If it needs to report the status of a single test, then it can do that too.
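The "slice and dice" idea can be illustrated with a small sketch. The `Test` shape below is an assumption for illustration, not the actual `@bigtest/suite` types: if the suite is just a tree of plain data, then running or reporting on a single test is a lookup by path, no re-bundling required.

```typescript
// Hypothetical suite-as-data shape; the real @bigtest/suite types may differ.
interface Test {
  description: string;
  children: Test[];
}

// Walk the tree and return the subtree matching a path of descriptions.
// An empty path returns the node itself; a miss returns undefined.
function findTest(suite: Test, path: string[]): Test | undefined {
  if (path.length === 0) return suite;
  const child = suite.children.find(c => c.description === path[0]);
  return child ? findTest(child, path.slice(1)) : undefined;
}

const suite: Test = {
  description: 'root',
  children: [
    { description: 'login', children: [{ description: 'shows an error', children: [] }] },
  ],
};
```

With this shape, "run a single test" is `findTest(suite, ['login', 'shows an error'])` followed by executing just that subtree.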
Parcel appears to support [glob paths](https://parceljs.org/module_resolution.html#glob-file-paths), which could be used to create a module that dynamically imports every test file:
```typescript
import { Suite } from '@bigtest/suite';
import * as testModules from '~bigtest/**/*.test.{js,ts}';

export default function createSuite(...args) {
  // invoke the createSuite default export of each module.
  let suites = Object.values(testModules).map(module => module.default(...args));
  return suites.reduce((all, suite) => all.concat(suite), Suite.empty());
}
```
Hmm, globs look promising, but looking at the source I'm not too sure it supports the result of those globs changing, which is what we really need. I'll run some experiments tomorrow to verify.
Suspicion confirmed. Unfortunately, globs are resolved when the bundle starts but are not recalculated later, so adding new files does not add them to the bundle.
I actually just came up with another option, though, which could be interesting: we could use our own file watching and restart the parcel process whenever we see a newly created file. Given Parcel's caching, this should be pretty fast.
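The restart-on-new-files idea boils down to a small piece of bookkeeping, sketched below. The class name and the injected `restart` callback are assumptions for illustration; in practice the callback would kill and re-fork the parcel process, relying on parcel's cache to make the rebuild cheap.

```typescript
// Sketch: track the set of test files we already know about; any previously
// unseen file means the glob result has changed, so the parcel process must
// be restarted to pick it up.
class RestartOnNewFiles {
  private known = new Set<string>();

  constructor(initial: string[], private restart: () => void) {
    initial.forEach(file => this.known.add(file));
  }

  // Called by our own file watcher for every file event.
  fileSeen(path: string): void {
    if (!this.known.has(path)) {
      this.known.add(path);
      this.restart(); // new file: rebuild from scratch (cache keeps it fast)
    }
  }
}
```

Changes to already-known files would be handled by parcel's own watcher; only genuinely new files trigger the restart.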
thoughts on building the test files with Parcel
A further idea that occurred to me:
What if we take your suggestion to run parcel in another process, but that process is our own worker script, which starts the server and uses IPC to send a message from the child to the parent once the server is ready?

We now have a pretty clear idea of how to do this, and an initial implementation.
We know that the test files, which live on disk somewhere, will need to be executed inside the agent, and the results communicated back to the orchestrator/server. Those test files will presumably be sent to the agent somehow. The question is how this is going to work.

Presumably there needs to be some kind of transpilation of these files so that they can run in the browser? How do we make that work?