islet-project / islet

An on-device confidential computing platform
Apache License 2.0

Setup MIRI Test #341

Closed bitboom closed 2 months ago

bitboom commented 2 months ago

This PR sets up MIRI to analyze potential memory safety violations arising from unsafe code at the MIR level.

MIRI is an interpreter that performs analysis at the MIR (Mid-level Intermediate Representation) level. Tests must be expressible in MIR to be testable with MIRI. Consequently, we are leveraging some test cases from the ACS project, originally written in C, by rewriting them in Rust.
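For context, the following is a minimal, purely illustrative example (not taken from this PR) of the kind of unsafe-code violation MIRI catches: an out-of-bounds read through a raw pointer that an ordinary cargo test run may accept silently, but that cargo miri test reports as undefined behavior.

#[test]
fn oob_read_caught_by_miri() {
    let buf = [0u8; 4];
    let ptr = buf.as_ptr();
    // Dereferencing one element past the end of `buf` is undefined
    // behavior. MIRI detects this at the MIR level and aborts the test
    // with an error, whereas a plain `cargo test` run may pass.
    let _oob = unsafe { *ptr.add(4) };
}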

Advantages of the Current Approach

To establish a Rust-based testing environment, I have added a test_utils module, which allows tests to be run at the function level rather than against the entire codebase.

#[test]
fn rmi_features() {
  let ret = rmi::<FEATURES>(&[0]);
  assert_eq!(ret[0], SUCCESS);
  ...
}

#[test]
fn rmi_version() {
  let ret = rmi::<VERSION>(&[encode_version()]);
  assert_eq!(ret[0], SUCCESS);
  ...
}
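For readers unfamiliar with the shape of such a helper, here is a purely hypothetical sketch (handle_command and the four-register return layout are assumptions, not the actual test_utils API) of how a const-generic wrapper can drive an RMI handler directly at the function level:

const SUCCESS: usize = 0;

// Hypothetical stand-in for the real RMI dispatcher.
fn handle_command(cmd: usize, args: &[usize]) -> [usize; 4] {
    let _ = (cmd, args);
    [SUCCESS, 0, 0, 0]
}

// Forwards the command and its arguments straight to the handler, so a
// test exercises the same dispatch path as a real call without needing
// an ACS + FVP setup.
fn rmi<const CMD: usize>(args: &[usize]) -> [usize; 4] {
    handle_command(CMD, args)
}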

This modular testing approach facilitates the use of various Rust verification tools, such as MIRI and cargo-fuzz. Additionally, by adding an abstraction layer for assembly, I expect to further enhance our ability to verify code with tools like Kani.
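Such an abstraction layer could, for example, take the form of a small trait with a hardware-backed implementation and a pure-Rust stub selected in test/MIRI builds. The sketch below only illustrates that idea (the trait and type names are hypothetical, not the layout used in this PR):

pub trait Smc {
    fn call(&self, args: [usize; 4]) -> [usize; 4];
}

#[cfg(not(any(test, miri)))]
pub struct HwSmc;

#[cfg(not(any(test, miri)))]
impl Smc for HwSmc {
    fn call(&self, _args: [usize; 4]) -> [usize; 4] {
        // The real implementation would issue the SMC instruction via
        // core::arch::asm!; it is omitted here because this is only a sketch.
        unimplemented!()
    }
}

// Pure-Rust stub: fully expressible in MIR, so MIRI (and tools like
// Kani) can interpret code paths that would otherwise end in assembly.
#[cfg(any(test, miri))]
pub struct StubSmc;

#[cfg(any(test, miri))]
impl Smc for StubSmc {
    fn call(&self, _args: [usize; 4]) -> [usize; 4] {
        [0; 4]
    }
}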

Relationship with ACS Tests

This approach does have the drawback of duplicating tests already covered by ACS. However, it provides the significant advantage of enabling Rust-based testing without the need for an ACS + FVP setup. Considering these pros and cons, the next steps will focus on selectively incorporating ACS test cases that involve unsafe code flows into our Rust environment, rather than porting all ACS test cases.

This selective approach ensures we maintain a robust testing framework while leveraging Rust's advanced tooling for safety and verification.

How to test

MIRI Test

$ ./scripts/tests/miri.sh
running 10 tests
test r#macro::test::eprintln_with_format ... ok
test r#macro::test::eprintln_without_arg ... ok
test r#macro::test::eprintln_without_format ... ok
test r#macro::test::println_with_format ... ok
test r#macro::test::println_without_arg ... ok
test r#macro::test::println_without_format ... ok
test r#macro::test::set_of_const_assert ... ok
test rmi::features::test::rmi_features ... ok
test rmi::version::test::rmi_version ... ok

Cross Test

$ ./scripts/tests/crates.sh
running 10 tests
test r#macro::test::eprintln_with_format ... ok
test r#macro::test::eprintln_without_arg ... ok
test r#macro::test::eprintln_without_format ... ok
test r#macro::test::println_with_format ... ok
test r#macro::test::println_without_arg ... ok
test r#macro::test::println_without_format ... ok
test r#macro::test::set_of_const_assert ... ok
test rmi::features::test::rmi_features ... ok
test rmi::version::test::rmi_version ... ok
hihi-wang commented 2 months ago

Just a bit curious, how long does it take MIRI to analyze the entire set of our project compared to a single test snippet?

bitboom commented 2 months ago

set of our project

Currently, I've added only two very small test snippets, which take about 2 seconds to run. I'd guess that expanding this to the full set of tests should not increase the runtime exponentially.

test result: ok. 9 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 1.71s