Closed: jamiebuilds closed this issue 1 year ago.
The `.snap.md` Markdown files that AVA generates are write-only. We could support an API for parsing the binary `.snap` files. One limitation is that we currently store fixed-width hashes of test titles, so you can't get the titles back out. Tests may have multiple snapshots as well.
Values are concordance serializations, not strings. Deserializing requires you to provide the concordance plugins that were used in the serialization. That's not a problem at the moment, because we don't let you customize plugins, but it will be in the future. I'm not clear how you'd render these deserialized values in the browser for visual diffing, though.
The deserializing-using-plugins problem is surmountable, since it's one of configuration. The snapshot-name extraction problem requires a change to the `.snap` format. We could either add a reverse lookup or store the test titles directly, though that makes parsing more difficult.
In light of #1769, do you see this as an entirely separate process, or as a complement to an AVA run? In the latter case we could surface the failing snapshots, you could render them in the browser, then rerun a specific test and let AVA update the snapshot. Then we wouldn't need to change the `.snap` format either.
> The snapshot name extraction problem requires a change to the `.snap` format. We could either add a reverse lookup or store the test titles directly, though that makes parsing more difficult.
It would be really useful to have the name back, otherwise snapshots can very easily get mixed up.
> Do you see this as an entirely separate process or as a complement to an AVA run? In which case we could surface the failing snapshots, you could render in the browser, then rerun a specific test and let AVA update the snapshot.
I want it to be able to run in two modes:
Although if I had the "complete" event in #1769, I suppose it could be implemented as:
```js
type Data = {
  // ...
  failedSnapshots: Array<{
    name: string,
    expected: T,
    actual: U,
  }>,
};

ava.on('complete', (data: Data) => {
  browserWs.send(JSON.stringify(data.failedSnapshots));
});
```
Maybe I could even do it as individual test results come in with a separate event.
> It would be really useful to have the name back, otherwise snapshots can very easily get mixed up.
Yes. Roughly, the binary format is:
```
AVA Snapshot v1\n
(fixed-width hash of test title)(concordance serialization)(concordance serialization)…
```

I think we could add a trailer section.
We wouldn't need to parse this while running tests, but once we've parsed the header it's easy to know where the trailers start. We'd have to bump the snapshot file version number but we could still read v1 snapshots since none of that encoding has changed.
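To illustrate the trailer idea, here's a minimal sketch of a versioned file where the trailer's byte offset is recorded right after the header, so a reader can jump straight to the trailer without parsing the body. The layout details (a 4-byte offset field, a JSON trailer of titles) are assumptions for illustration, not AVA's actual format:

```javascript
// Hypothetical v2 layout: header, trailer offset, body, trailer.
const HEADER = 'AVA Snapshot v2\n';

function writeSnapshot(body, titles) {
  const trailer = Buffer.from(JSON.stringify(titles), 'utf8');
  const trailerOffset = Buffer.alloc(4);
  // Record where the trailer starts so readers can skip the body entirely.
  trailerOffset.writeUInt32BE(HEADER.length + 4 + body.length);
  return Buffer.concat([Buffer.from(HEADER, 'utf8'), trailerOffset, body, trailer]);
}

function readTitles(buf) {
  // After validating the header, the offset field tells us where to seek.
  const offset = buf.readUInt32BE(HEADER.length);
  return JSON.parse(buf.subarray(offset).toString('utf8'));
}

const buf = writeSnapshot(Buffer.from('serialized values'), ['test one', 'test two']);
console.log(readTitles(buf)); // [ 'test one', 'test two' ]
```

A v1 reader would simply stop before the trailer, which is why the older encoding could still be read after a version bump.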
Could you elaborate on what kind of values you're snapshotting? Only primitives (except for symbols) can be safely round-tripped through the serialization. Object values are described, so if you're storing `React.createElement()` results it'll be hard to revive them.
I was thinking it would do this:
```js
t.snapshot(visual(<Comp/>));

// effectively:
t.snapshot({
  isVisualSnapshot: true,
  value: "<raw><html><string/>..."
});
```
I'm okay with making it just a plain string, though.
If we're adding trailers anyway, we could add an "annotation" type trailer:
```js
t.snapshot(visual(<Comp/>), {annotations: {visual: true}});
```

(You'd probably want to wrap that as `visual(t, <Comp/>)`.)
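A sketch of that wrapper, under the assumptions in this thread: the `annotations` option is the proposal above (not an existing AVA API), and the renderer is stubbed out where a real version would use something like react-dom's static markup rendering:

```javascript
// Sketch of the proposed visual(t, element) wrapper.
// `annotations` is the option proposed above, not a shipping AVA API.
function visual(t, element) {
  // Stand-in renderer: a real implementation would render the React
  // element to a static HTML string here.
  const html = String(element);
  t.snapshot(html, { annotations: { visual: true } });
}

// Minimal fake `t` just to show the call shape:
const calls = [];
const t = { snapshot: (value, options) => calls.push({ value, options }) };

visual(t, '<div>hello</div>');
console.log(calls[0].options.annotations.visual); // true
```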
The nice thing about the `.snap` files is that they can contain more data than is shown in the report files.
Description
I'm trying to build some tools on top of AVA's snapshots, but there's a lot of work involved in reading and updating them correctly. I was wondering if AVA could expose some methods for working with snapshots outside of the testing process.
I need two things:
I'm trying to build a tool that finds all of the React-related snapshots and renders the stringified HTML in a browser so the visual differences can be compared. If there is no visual difference, it can update them all automatically.