Closed delucis closed 1 year ago
> it may be better to start by building for example the `examples/basics` project and asserting it looks like we expect.
Yeah, this is what I do most of the time for my various integrations: I build an example and use it during tests to assert various things. Although, it does not scale very well IMO, e.g. when you quickly want to test a very specific configuration and need to add a new example just for it.
While building my last integration, I used it as an experiment to find a different approach, closer to the fixtures setup that can be found in the Astro repo. I have a `fixtures` directory containing directories with only the files required for a test, e.g. a few Markdown pages. I run the tests using Vitest, and each test can use a mechanism to load a fixture by its name. It creates a (gitignored) directory containing all the files of the current fixture and some other files that are common to all tests. The main difference comes here: contrary to the Astro repo, I do not have access to the `astro/core/build` utilities, which are not exported, so instead I use the Astro CLI by calling `npx astro build` programmatically to build the fixture, and it seems to work relatively well.
At this point, depending on the type of tests needed, we can choose different approaches: checking the files generated on disk, checking the generated HTML, serving the output and using Playwright to check it in a browser, etc.
An example of the code I used can be found here. It is a very naive implementation, as I only needed to check whether the build was successful in this case, but it could be improved with a better API to either build a fixture (`fixture.build()`), build and preview (`fixture.buildAndPreview()`), add support for a custom Astro config (which I did not need in this case), maybe implement a caching mechanism to avoid rebuilding the same fixture multiple times, etc.
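The caching idea could be as simple as memoizing the in-flight build promise per fixture name, so that several tests sharing a fixture only trigger one build. A sketch, with hypothetical names:

```typescript
// Hypothetical caching layer: each fixture is built at most once per
// test run by storing the build promise keyed by fixture name.
const built = new Map<string, Promise<void>>()

export function buildOnce(
  name: string,
  build: (name: string) => Promise<void>
): Promise<void> {
  let pending = built.get(name)
  if (!pending) {
    pending = build(name)
    built.set(name, pending)
  }
  // Concurrent callers await the same promise instead of rebuilding.
  return pending
}
```

Storing the promise (rather than a "done" flag) also deduplicates builds that start concurrently.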
Any thoughts on this kind of approach?
> Testing navigation and routing logic feels high priority to me as there are quite a few variable inputs and it can be hard in manual testing to remember to test them in all their combinations.
I totally agree regarding the priority of this. While working on #237 and reading some parts of the code, I realized that I was not testing some combinations, and having all this logic tested would be a great improvement.
We haven’t added tests for any of Starlight’s functionality just yet, but this would be extremely helpful to ensure we don’t accidentally break features as we work.
Let’s use this issue to discuss approaches!
Some quick thoughts:
- Astro components themselves are a little tricky to test. There are older tools like https://github.com/Princesseuh/astro-component-tester, but it may be better to start by building, for example, the `examples/basics` project and asserting it looks like we expect. We also had the suggestion at one point of using snapshot testing for things like this as a quick way to track general build output stability.
- A lot of Starlight’s code depends on loading virtual modules like `astro:content`, `astro:assets`, and Starlight’s own `virtual:*` modules for user config etc. Can we make these available in a testing environment for unit testing code? Or do we need to refactor things like the navigation/routing logic to be able to pass the data that depends on virtual modules into those functions?

Testing navigation and routing logic feels high priority to me as there are quite a few variable inputs and it can be hard in manual testing to remember to test them in all their combinations.
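To illustrate the refactoring option: navigation logic written as a pure function that receives the data normally loaded from `astro:content` becomes unit-testable without any virtual modules. The function below (`getPrevNextLinks`, the `Doc` shape) is a hypothetical sketch, not Starlight's actual API:

```typescript
// Minimal stand-in for an entry from the docs content collection.
interface Doc {
  slug: string
}

// Pure routing helper: given the ordered docs list (injected by the
// caller instead of imported from `astro:content`), compute the
// previous/next links for the current page.
export function getPrevNextLinks(docs: Doc[], currentSlug: string) {
  const i = docs.findIndex((doc) => doc.slug === currentSlug)
  return {
    prev: i > 0 ? docs[i - 1] : undefined,
    next: i >= 0 && i < docs.length - 1 ? docs[i + 1] : undefined,
  }
}
```

The alternative is stubbing the virtual modules in the test environment (e.g. with Vitest's module mocking), but dependency injection like this keeps the combinatorial routing logic trivially testable with plain data.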