Closed: zombiepigdragon closed this issue 2 years ago
Hello, thanks for addressing this issue. Test cases are currently on hold. As you said, a proper method needs to be implemented for test suites to work. I believe the hard-coded problem you mentioned can be solved easily by passing a configuration to the `Server` when instantiating the struct. I'm currently working on a directory feature, so I will have to put this on hold until I finish implementing it, merge your PR, and resolve the conflicts.
P.S. For the test cases we could use fake request buffers and convert them to byte slices.
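To make the P.S. concrete, here's a minimal sketch of what "fake request buffers converted to byte slices" could look like. `fake_request` is a hypothetical helper for illustration, not something in the repo:

```rust
// Hypothetical helper: build a fake HTTP request in memory and
// hand it to the server-side code as a plain byte slice.
fn fake_request(method: &str, path: &str, headers: &[&str]) -> Vec<u8> {
    let mut req = format!("{method} {path} HTTP/1.1\r\n");
    for h in headers {
        req.push_str(h);
        req.push_str("\r\n");
    }
    req.push_str("\r\n"); // blank line terminates the header section
    req.into_bytes()
}

fn main() {
    let buf = fake_request("GET", "/", &["Host: localhost"]);
    // The parser under test would receive this as &[u8].
    let slice: &[u8] = &buf;
    assert!(slice.starts_with(b"GET / HTTP/1.1\r\n"));
}
```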
@zombiepigdragon, I have written some basic unit tests and created a `TestServer` specifically for this purpose, which you can take a look at here: https://github.com/PeterPierinakos/vanilla-rustlang-server/commit/05271d6c701a4c1157df7fb63fa76a494b962b0a
Here are some of the most notable additions:
(NEW) `src/enums/status.rs`:
```rust
pub enum TestStatusCode {
    DirResponse,
    OK,
    NotFound,
    BadRequest,
    MethodNotAllowed,
    InternalServerError,
    CORSError,
}
```
(MODIFIED) `src/structs/server.rs`:
```rust
pub struct TestServer { /* ... */ }

impl TestServer {
    fn create_fake_buffer(info: &str, headers: Vec<&str>) -> [u8; 1024] { /* ... */ }
    fn serve_fake_request(&self, logfile: &mut Option<File>, buf: &[u8; 1024]) -> TestStatusCode { /* ... */ }
}
```
The basic unit tests I have implemented are inside `tests/requests.rs`.
I'm writing this comment from the perspective of what I understand to be idiomatic: what you have is better than what you had, but I can see a bit more room to make it even better, and you can take it as far as you want!
The tests as written should be in-file unit tests, and they should be separated into properly named functions rather than simply being `test_1`, `test_2`, etc. Separately, there is no real reason for the `TestServer` API to deal in `[u8; 1024]`, so I'd expect `create_fake_buffer` to instead return a `Vec<u8>` (this should make that cleaner) and `serve_fake_request` to take a `&[u8]` buffer instead. This makes tests a little more memory-efficient, and it removes the question of why that exact buffer size was chosen.
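A minimal sketch of what that signature change could look like. The bodies here are illustrative stubs (the real ones live in the linked commit), and I've dropped the `logfile` parameter for brevity; the point is just the `Vec<u8>`/`&[u8]` shapes:

```rust
struct TestServer;

#[derive(Debug, PartialEq)]
enum TestStatusCode {
    OK,
    BadRequest,
}

impl TestServer {
    // Returns an owned, exactly-sized buffer instead of a fixed [u8; 1024].
    fn create_fake_buffer(info: &str, headers: Vec<&str>) -> Vec<u8> {
        let mut buf = format!("{info}\r\n").into_bytes();
        for h in headers {
            buf.extend_from_slice(h.as_bytes());
            buf.extend_from_slice(b"\r\n");
        }
        buf.extend_from_slice(b"\r\n");
        buf
    }

    // Accepts any byte slice, so callers aren't tied to one buffer size.
    fn serve_fake_request(&self, buf: &[u8]) -> TestStatusCode {
        // Placeholder logic standing in for the real request handling.
        if buf.starts_with(b"GET ") {
            TestStatusCode::OK
        } else {
            TestStatusCode::BadRequest
        }
    }
}

fn main() {
    let buf = TestServer::create_fake_buffer("GET / HTTP/1.1", vec!["Host: localhost"]);
    assert_eq!(TestServer.serve_fake_request(&buf), TestStatusCode::OK);
}
```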
The code split between `Server::serve_request` and `TestServer::serve_fake_request` means the existing code isn't tested as thoroughly as it could be; a somewhat non-scientific `cargo llvm-cov` run tells me it's ~30% covered as-is. While I certainly wouldn't expect 100%, missing the `serve_request` method in particular doesn't make the results feel especially trustworthy. (As a rule of thumb, running coverage checks and adding tests to fill in the blanks is a good way to think of new failing test cases early on.) I still believe the easiest way to handle this is to create some freestanding `serve_request` and use something like `std::io::Cursor` to read/write from an input/output buffer that you can test against. This also means you could test the whole response contents, not just the response code.
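Roughly what I mean, as a sketch. The request handling here is placeholder logic, not VRS's real parser; the point is that `std::io::Cursor` lets plain byte vectors act as the input and output streams, so the full response text is assertable:

```rust
use std::io::{Cursor, Read, Write};

// Sketch of a freestanding serve_request reading from any Read and
// writing to any Write, so tests can drive it entirely in memory.
fn serve_request(input: &mut impl Read, output: &mut impl Write) {
    let mut req = String::new();
    let _ = input.read_to_string(&mut req);
    if req.starts_with("GET ") {
        let _ = output.write_all(b"HTTP/1.1 200 OK\r\n\r\nhello");
    } else {
        let _ = output.write_all(b"HTTP/1.1 400 Bad Request\r\n\r\n");
    }
}

fn main() {
    // Cursor turns a Vec<u8> into a readable/writable stream.
    let mut input = Cursor::new(b"GET / HTTP/1.1\r\n\r\n".to_vec());
    let mut output = Cursor::new(Vec::new());
    serve_request(&mut input, &mut output);
    let response = String::from_utf8(output.into_inner()).unwrap();
    // The whole response, not just the status code, is now testable.
    assert!(response.starts_with("HTTP/1.1 200 OK"));
}
```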
Rather than simple unit tests (which definitely have a place!), the tests in `/tests` should be end-to-end: connecting to a socket over TCP and comparing the response. If nowhere else, this is the place to introduce dependencies (they can be test-only), because it means that if you happen to misinterpret the spec, the tests would catch the error as fast as possible. IMO, `ureq` is the best choice here, but it's your call. I would also recommend something like `goldenfile` to track evolution in response types, although this doesn't need to actually use the dependency: reading the expected response from a file and comparing it to the received response is easy enough to implement, and it shouldn't take much more to allow automatically updating the files when e.g. a new header is added.
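A dependency-free sketch of the shape of such a test. The in-process listener stands in for the real server, stdlib `TcpStream` stands in for `ureq`, and the `expected` string stands in for a checked-in golden file:

```rust
use std::io::{Read, Write};
use std::net::{Shutdown, TcpListener, TcpStream};
use std::thread;

// End-to-end round trip: connect over real TCP, capture the full response.
fn run_e2e() -> String {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();

    // Stand-in server: answer one connection with a canned response.
    thread::spawn(move || {
        let (mut stream, _) = listener.accept().unwrap();
        let mut buf = [0u8; 512];
        let _ = stream.read(&mut buf);
        stream.write_all(b"HTTP/1.1 200 OK\r\n\r\nhello").unwrap();
    });

    // Client: send a raw request, then read the whole response.
    let mut client = TcpStream::connect(addr).unwrap();
    client.write_all(b"GET / HTTP/1.1\r\n\r\n").unwrap();
    client.shutdown(Shutdown::Write).unwrap();
    let mut response = String::new();
    client.read_to_string(&mut response).unwrap();
    response
}

fn main() {
    // Golden comparison: with goldenfile this string would come from a file.
    assert_eq!(run_e2e(), "HTTP/1.1 200 OK\r\n\r\nhello");
}
```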
In the original post, I suggested fuzz testing. I'm still happy to introduce that ASAP if you don't intend to change the structure at all, but otherwise I'll PR it after any reorganization.
I also considered introducing dependencies to the project, but I didn't know I could introduce dev-only ones.
> I'm still happy to introduce that ASAP if you don't intend to change the structure at all
Yeah, I've finished adding most of the features I wanted to add to the server, so I will let you proceed. Just submit a PR and I will review it; I'm assigning the issue to you since you seem willing to further improve the tests. I think you could completely remove the `TestServer` and replace it with fuzz testing. I have approved your other PR, BTW.
I really don't think we should keep `fake_serve_request` because it introduces too much unnecessary boilerplate to the server.
@zombiepigdragon, just to be sure could you confirm that you are taking this issue?
Yes, I am working on making changes as described in my last comment.
What is the feature you want to improve or implement? When running an informal audit of this codebase, I noticed there are no test cases at all. Considering VRS attempts to parse (even a subset of) HTTP on its own, I find this extremely surprising. Ideally, there should be a decent range of request-response pairs, as well as some form of fuzzing to catch parsing errors.
Additional information: As currently structured, I don't think the implementation of the server is particularly testable. It uses a hardcoded address and port (as well as requiring root/admin to run), and there is no other way for code to send the core of the server a request. Reorganizing the server to have some form of
`fn serve_request(configuration: Configuration, input: impl Read, output: impl Write)`
(note that this signature is infallible: errors should be written to the output here [HTTP status code] and/or to a configured logger) would drastically increase the testability of the server and build confidence in its correctness. If a refactor that introduces these features occurs, I'm willing to help build a test suite to run on them.
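To sketch what "infallible" means here, with a hypothetical `Configuration` type (the field is just a placeholder): the function returns nothing, and any failure is turned into an HTTP status written to `output` rather than propagated to the caller.

```rust
use std::io::{Cursor, Read, Write};

// Hypothetical config type, for illustration only.
#[allow(dead_code)]
struct Configuration {
    root_dir: String,
}

// Sketch of the proposed infallible signature: errors become a status
// written to `output` (a real implementation would also log them).
fn serve_request(_configuration: &Configuration, mut input: impl Read, mut output: impl Write) {
    let mut req = String::new();
    match input.read_to_string(&mut req) {
        Ok(_) if req.starts_with("GET ") => {
            let _ = output.write_all(b"HTTP/1.1 200 OK\r\n\r\n");
        }
        // Malformed input is reported to the client, never returned as Err.
        _ => {
            let _ = output.write_all(b"HTTP/1.1 500 Internal Server Error\r\n\r\n");
        }
    }
}

fn main() {
    let config = Configuration { root_dir: String::from("/var/www") };
    let mut out = Vec::new();
    serve_request(&config, Cursor::new(b"BOGUS".to_vec()), &mut out);
    // Even bad input yields a response, not a panic or an Err.
    assert!(out.starts_with(b"HTTP/1.1 500"));
}
```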