Closed matklad closed 4 years ago
I am warming up to this idea, here's the API I came up with:
use std::{
    env,
    path::{Path, PathBuf},
};

#[derive(Debug)]
pub struct Expectation {
    pub file: &'static str,
    pub line: u32,
    pub data: &'static str,
}

impl Expectation {
    pub fn assert_eq(&self, actual: &str) {
        if self.data != actual {
            let path = dbg!(project_root()).join(self.file);
            let text = std::fs::read_to_string(&path).unwrap();
            let expect_start = self.line as usize;
            let expect_len = text
                .lines()
                .skip(expect_start + 1)
                .take_while(|it| !it.contains(r##""#]"##))
                .count();
            for l in text.lines().skip(expect_start).take(expect_len) {
                println!("{}", l)
            }
        }
    }
}

fn project_root() -> PathBuf {
    Path::new(
        &env::var("CARGO_MANIFEST_DIR").unwrap_or_else(|_| env!("CARGO_MANIFEST_DIR").to_owned()),
    )
    .ancestors()
    .nth(2)
    .unwrap()
    .to_path_buf()
}

#[macro_export]
macro_rules! expect {
    [$(,)?$data:literal] => {
        $crate::Expectation {
            file: $crate::__rt::file!(), line: $crate::__rt::line!(), data: $data
        }
    };
}

#[test]
fn foo() {
    let expectation = expect![,r#"
        64..84 '{ ...1 }; }': ()
        70..81 'SS { x: 1 }': S<u32>
        78..79 '1': u32
    "#];
    eprintln!("{:?}", expectation);
}

#[doc(hidden)]
pub mod __rt {
    pub use std::{file, line};
}
#[test]
fn nested_module_resolution() {
    check(
        r"
        //- /lib.rs
        mod n1;
        //- /n1.rs
        mod n2;
        //- /n1/n2.rs
        struct X;
        ",
        expect![,r#"
            crate
            n1: t
            crate::n1
            n2: t
            crate::n1::n2
            X: t v
        "#],
    );
}
That api will never mark a test as failed.
Yeah, there should be an assertion failure there as well :)
The bigger missing piece is that this doesn't (yet) show a :bulb: in the editor which changes the data in-place (one of my main motivations for this).
Not sure how to approach that though -- my current plan is to dump expect failures to target/expect_db.json (intentionally limiting to only one test binary at a time) and then just read that file in our code-action handler.
It would be sweet if I could just print something to stdout which would be applicable in VS Code, but I don't think that is possible?
Can you use a problem matcher to match the test run output?
I think that's not possible unfortunately :-(
Wouldn't it be nicer to implement that support for insta, so other users can also profit from it?
In general, in this case I don't really understand the motivation to create a replacement; I think it'd be better to improve insta instead.
In general, in this case I don't really understand the motivation to create a replacement;
So, the super-big picture here is that we are going to have an infinite number of tests eventually, and so something tailored for our use-case would be better economically. If you are going to use a library for n use-cases, and n is large enough, it's less friction to write a specialized library than to use a generic one.
Medium level picture is that I think there's a different set of tradeoffs we can try, in particular:
- a snapshot!() macro which returns a data structure, leaving the actual asserts to the caller
- experimenting with tighter integration with the editor
- not requiring an external tool
- cutting the number of features/dependencies (insta is a pretty big chunk in our Cargo.lock)
These feel sufficiently different from insta that I think it makes sense to try this out separately, and then do a synthesis, rather than to try to change insta in-place.
experimenting with tighter integration with the editor
In particular, I really want this, but I don't see how this can be implemented in a nice way without hard-coding it into rust-analyzer, and, for hard-coding, an NIH thing seems easier :)
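The "macro returns a data structure, the caller asserts" idea can be sketched in a few lines. This is a minimal illustration, not the final API: `check_types` and the `inferred(...)` stand-in are made up here, and the `Expectation` type mirrors the one from the snippet above.

```rust
// Sketch: the macro only captures location + expected data; the assert
// lives in plain functions that callers can freely abstract over.
#[derive(Debug)]
pub struct Expectation {
    pub file: &'static str,
    pub line: u32,
    pub data: &'static str,
}

impl Expectation {
    pub fn assert_eq(&self, actual: &str) {
        assert_eq!(self.data, actual, "mismatch at {}:{}", self.file, self.line);
    }
}

macro_rules! expect {
    [$data:literal] => {
        Expectation { file: file!(), line: line!(), data: $data }
    };
}

// The testing routine is an ordinary function: it computes `actual` however
// it likes and hands the comparison off to the Expectation it was given.
// The "inference" below is a fake stand-in for illustration.
fn check_types(input: &str, expectation: Expectation) {
    let actual = format!("inferred({input})");
    expectation.assert_eq(&actual);
}

fn main() {
    check_types("Self { x: 1 }", expect!["inferred(Self { x: 1 })"]);
    println!("ok");
}
```

Because `check_types` is just a function, any number of `check_*` helpers can wrap the assertion without each needing to be a macro.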
experimenting with tighter integration with the editor
That would be useful for other users of insta too.
not requiring an external tool
If the insta crate exported a function that would behave as if cargo insta was called, you could add an xtask that calls that function. Then using something like cargo xtask insta would work instead of having to install cargo-insta yourself.
cutting the number of features/dependencies (insta is a pretty big chunk in our Cargo.lock)
Feature flags? Could be useful for other users too.
If the insta crate exported a function that would behave as if cargo insta was called, you could add an xtask that calls that function.
Yeah, but cargo-insta is significantly more heavy-weight than the insta library (which isn't a feather either), and xtask is one of the things which benefits from being lean.
I think we could do without the diff and colors if we update the snapshots on disk (because you can run git diff instead).
I think it'd be pretty annoying to have to update snapshots to compare them, especially if the failure happens in CI.
I think it'd be pretty annoying to have to update snapshots to compare them, especially if the failure happens in CI.
I'm thinking that cargo xtask insta or cargo test would work like insta today and fail the tests, but update the snapshots at the same time. So you run cargo test, it fails, you run git diff, then commit.
So, I think I've figured a way to avoid any kind of disk persistence and IO.
On test failure, the error message on stdout will contain the file name, line number and expected/actual output (the whole lines, including the expect! call, so they should be directly copy-pastable from a CI failure into the source). The error location should be in a parsable format, to populate the jump list in the editor.
If the UPDATE_EXPECTATIONS env var is set, each test failure updates the file in place (there's a global per-process mutex to coordinate this).
When the cursor is on the expect! macro, the IDE provides an update single / update all intention, which just runs cargo with the env var set.
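The flow just described (parsable location on the test output, env-var-gated in-place updates behind a global per-process lock) might look roughly like this. All names here are assumptions for illustration, not a real API:

```rust
use std::sync::Mutex;

// Test threads run in parallel within one process, so file rewrites
// must be serialized behind a global lock.
static UPDATE_LOCK: Mutex<()> = Mutex::new(());

fn update_mode() -> bool {
    std::env::var_os("UPDATE_EXPECTATIONS").is_some()
}

// Parsable "file:line" location first, so an editor can build a jump list.
fn mismatch_report(file: &str, line: u32, expected: &str, actual: &str) -> String {
    format!(
        "error: expect! mismatch at {file}:{line}\nexpected:\n{expected}\nactual:\n{actual}\n"
    )
}

#[allow(dead_code)]
fn on_mismatch(file: &str, line: u32, expected: &str, actual: &str) {
    eprint!("{}", mismatch_report(file, line, expected, actual));
    if update_mode() {
        let _guard = UPDATE_LOCK.lock().unwrap();
        // ... rewrite the expect! literal in `file` in place, holding the lock ...
    } else {
        panic!("expectation mismatch at {file}:{line}");
    }
}

fn main() {
    print!("{}", mismatch_report("src/lib.rs", 42, "1", "2"));
}
```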
hey @matklad, i was hacking on a snapshot testing library yesterday that i think does almost what you're describing. I actually used parts of rust-analyzer to parse original source files and find the assertion macro location.
updating multiple assertions in the same file is a bit challenging. a global mutex won't solve it, cause after every update the lines in the file shift and the original line!() / column!() values don't point to the same spot anymore, but it should be solvable by some smart offsetting algorithm 🙂
That's a space i worked in for a while (outside of Rust tho) and would be happy to help hacking on any of that if i can.
here's what i got so far:
updating multiple assertions in the same file is a bit challenging. a global mutex won't solve it, cause after every update the lines in the file shift and the original line!() / column!() values don't point to the same spot anymore, but it should be solvable by some smart offsetting algorithm 🙂
Apply changes in reverse source order. Applying a change can only shift the position of tokens after the change.
Apply changes in reverse source order. Applying a change can only shift the position of tokens after the change.
But you need to gather the failing ones before doing that.
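A sketch of that strategy: once the failing expectations have been gathered as byte-range edits, sorting them by start offset in descending order means applying one edit never shifts the spans of the edits still to come. The `Edit` struct here is hypothetical:

```rust
// Each pending update replaces a byte range of the source text.
struct Edit {
    start: usize,
    end: usize,
    replacement: String,
}

fn apply_edits(text: &str, mut edits: Vec<Edit>) -> String {
    // Apply in reverse source order: an edit can only shift the positions
    // of text *after* it, which has already been handled by then.
    edits.sort_by_key(|e| std::cmp::Reverse(e.start));
    let mut result = text.to_string();
    for e in edits {
        result.replace_range(e.start..e.end, &e.replacement);
    }
    result
}

fn main() {
    let text = "expect![\"a\"]; expect![\"b\"];";
    let edits = vec![
        Edit { start: 8, end: 11, replacement: "\"new-a\"".into() },
        Edit { start: 22, end: 25, replacement: "\"new-b\"".into() },
    ];
    // prints: expect!["new-a"]; expect!["new-b"];
    println!("{}", apply_edits(text, edits));
}
```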
Found another syntax to convince rustfmt to keep raw strings in place:
expect![[r#"
    63..93 '{ ... }': ()
    73..86 'Self { x: 1 }': S<u32>
    83..84 '1': u32
"#]]
I actually used parts of rust-analyzer to parse original source files and find the assertion macro location.
This is something we unfortunately have to avoid. There's a lot of things in rust-analyzer I'd love to use rust-analyzer for, but that means that we'd need to set up bootstrapping for development, which doubles compile times. So, we'd have to get by with approximate "parsing".
Apply changes in reverse source order. Applying a change can only shift the position of tokens after the change.
unfortunately tests run in parallel in a random order, i don't think there's a way to change that
This is something we unfortunately have to avoid. There's a lot of things in rust-analyzer I'd love to use rust-analyzer for, but that means that we'd need to set up bootstrapping for development, which doubles compile times. So, we'd have to get by with approximate "parsing".
is there a lightweight parser i can use? the only thing i need to get is the location of that inline snapshot string literal when given line and column of the macro
assert_matches_inline_snapshot!(something, "snapshot it serializes to");
^                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
|                                          find this
Given this
i can technically just search for the right characters in the string, but that will probably break half of the time :)
i can technically just search for the right characters in the string, but that will probably break half of the time :)
Maybe it wouldn't, as we control the fixtures in RA.
is there a lightweight parser i can use? the only thing i need to get is the location of that inline snapshot string literal when given line and column of the macro
I think the semi-proper way to do this would be to depend on the rustc_lexer crate to correctly split the text into tokens, and then just do a light-weight token matching.
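As a stopgap before wiring up rustc_lexer, the "approximate parsing" could be a character scan like the sketch below. The function name is invented here, and a real version should tokenize properly, since this scan can be fooled by quote characters inside comments:

```rust
// Given the byte offset of the macro invocation, find the span (quotes
// included) of the next string literal. Handles r#"..."# raw strings by
// counting the '#'s directly before the opening quote. Naive sketch only.
fn find_literal_after(text: &str, macro_offset: usize) -> Option<(usize, usize)> {
    let rest = &text[macro_offset..];
    let start_rel = rest.find('"')?;
    // Count trailing '#'s before the quote to build the matching closer,
    // e.g. `"#` for an r#"..."# literal, plain `"` otherwise.
    let hashes = rest[..start_rel].chars().rev().take_while(|&c| c == '#').count();
    let closer = format!("\"{}", "#".repeat(hashes));
    let end_rel = rest[start_rel + 1..].find(&closer)?;
    let start = macro_offset + start_rel;
    Some((start, start + 1 + end_rel + closer.len()))
}

fn main() {
    let src = r#"assert_matches_inline_snapshot!(something, "snapshot it serializes to");"#;
    let (s, e) = find_literal_after(src, 0).unwrap();
    // prints the literal, quotes included: "snapshot it serializes to"
    println!("{}", &src[s..e]);
}
```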
assert_matches_inline_snapshot!(something, "snapshot it serializes to");
^                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
|                                          find this
Given this
One thing I don't enjoy about insta is that its snapshots and assertions are coupled to the single macro. If I want to abstract over insta, I have to write a macro. For example, ty tests look like this:
#[test]
fn self_in_struct_lit() {
    assert_snapshot!(infer(
        r#"
        //- /main.rs
        struct S<T> { x: T }
        impl S<u32> {
            fn foo() {
                Self { x: 1 };
            }
        }
        "#,
    ), @r###"
    49..79 '{ ... }': ()
    59..72 'Self { x: 1 }': S<u32>
    69..70 '1': u32
    "###);
}
there isn't a convenient way to abstract the assert_snapshot!(infer bit. That's why I'd like to hide just the snapshot into the macro, which would allow me to write this test as
#[test]
fn self_in_struct_lit() {
    check_types(
        r#"
        //- /main.rs
        struct S<T> { x: T }
        impl S<u32> {
            fn foo() {
                Self { x: 1 };
            }
        }
        "#,
        expect![[r#"
            49..79 '{ ... }': ()
            59..72 'Self { x: 1 }': S<u32>
            69..70 '1': u32
        "#]],
    );
}
That is, I'll be able to write a function which abstracts over the testing routine. Incidentally, this should make extracting the text of the snapshot much easier.
So far, we've been using insta for snapshot testing, and it is super useful, but not perfect. Some of the paper cuts I see:
- cargo install insta to fix snapshots
I am thinking that maybe less is more, and we should roll our own snapshot testing thing? I think getting rid of the coloring and using a simpler assert_eq_text style diff would be more convenient. The biggest problem is updating inline snapshots, but I think that shouldn't be too hard to implement.
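For the assert_eq_text-style diff mentioned above, a colorless line diff is only a few lines. This naive sketch (names invented for illustration) aligns lines by index rather than computing a proper LCS diff, which is often good enough for snapshot output:

```rust
// Plain -/+ line diff: no colors, greppable in CI logs.
fn line_diff(expected: &str, actual: &str) -> String {
    let mut out = String::new();
    let (exp, act): (Vec<_>, Vec<_>) = (expected.lines().collect(), actual.lines().collect());
    let max = exp.len().max(act.len());
    for i in 0..max {
        match (exp.get(i), act.get(i)) {
            // Lines that match are printed with a two-space gutter.
            (Some(e), Some(a)) if e == a => out.push_str(&format!("  {}\n", e)),
            // Anything else shows the expected line as `-` and actual as `+`.
            (e, a) => {
                if let Some(e) = e { out.push_str(&format!("- {}\n", e)); }
                if let Some(a) = a { out.push_str(&format!("+ {}\n", a)); }
            }
        }
    }
    out
}

fn main() {
    // prints:
    //   a
    // - b
    // + x
    //   c
    print!("{}", line_diff("a\nb\nc", "a\nx\nc"));
}
```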