icy-arctic-fox / spectator

Feature-rich testing framework for Crystal inspired by RSpec.
https://gitlab.com/arctic-fox/spectator
MIT License

Capturing and reusing STDOUT / STDERR of the test subject #9

Open UrsaDK opened 4 years ago

UrsaDK commented 4 years ago

Hi there,

First of all, thank you so much for this project! I was never a great fan of the built-in should syntax or the way it mutates test objects, so it was awesome to find the option of using expect(…).to …. The fact that it comes with all the familiar RSpec-like goodies (eg: subject, described_class, let, mocks & doubles, spies) is the icing on the cake! 👍

The only shortcoming I've found so far is that there is no way to capture the STDOUT / STDERR of the app being tested. This is not much of an issue when dealing with web apps, but it does become a problem when testing CLI apps:

  1. the output of the test run becomes mixed in with the output of the app
  2. it becomes difficult to validate the output of the app, eg: to ensure that all errors start_with("Error: ") (a subprocess-based workaround is sketched just below)
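
For what it's worth, the closest I can get today is to compile the app and run it as a subprocess, capturing its streams with Process.run. That only exercises the binary end-to-end, though, and the binary path and flag below are made up for illustration:

# Sketch only: run the compiled CLI as a subprocess and collect its
# streams in memory buffers ("./bin/my_app" and "--bad-flag" are
# hypothetical placeholders).
it "prefixes all errors with 'Error: '" do
  output = IO::Memory.new
  error = IO::Memory.new
  Process.run("./bin/my_app", ["--bad-flag"], output: output, error: error)
  expect(error.to_s).to start_with("Error: ")
end

This keeps the app's output away from the test run's output, but it can't reach code that only runs in-process, which is where framework support would shine.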

I've built a simple "Hello World" app to demonstrate the problem. If you check out the project and run make tests, you'll see that the output produced by Spectator is:

$ crystal spec --progress --order rand
=> STDOUT Hello World
=> STDERR Hello World
.

Finished in 38 microseconds
1 examples, 0 failures, 0 errors, 0 pending

Randomized with seed 13940

Obviously, lines 2 & 3 of this output come from my hello-world app and not from Spectator itself.

Would it be possible to update Spectator to support a configuration that allows us to redirect the STDOUT of the test to one custom IO stream, and STDERR to another? Something like:

Spectator.configure do |config|
  config.capture_stdout
  config.capture_stderr
end

Spectator would then create custom IO streams during a test run. These streams would capture the output of the app during the test. I guess it would be best if they were initialised anew for each test and destroyed when the test is done. This would allow us to test the stdout and stderr output of our scripts using existing matchers, for example:

subject { MyApp.show_error_with_help }
it { expect(subject.stdout).to start_with("Usage: ") }
it { expect(subject.stderr).to start_with("Error: ") }
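
In the meantime, the only fully in-process option I know of is to make the app accept its output streams as parameters and hand it IO::Memory buffers from the spec. MyApp and its method here are the same hypothetical app as above:

# Interim workaround, no framework support needed: the app writes to
# injectable streams that default to the real STDOUT / STDERR.
class MyApp
  def self.show_error_with_help(stdout : IO = STDOUT, stderr : IO = STDERR)
    stdout.puts "Usage: my_app [options]"
    stderr.puts "Error: something went wrong"
  end
end

it "writes usage to stdout and the error to stderr" do
  stdout_io = IO::Memory.new
  stderr_io = IO::Memory.new
  MyApp.show_error_with_help(stdout: stdout_io, stderr: stderr_io)
  expect(stdout_io.to_s).to start_with("Usage: ")
  expect(stderr_io.to_s).to start_with("Error: ")
end

It works, but it forces an API change on the app, which is exactly what built-in capture would avoid.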
icy-arctic-fox commented 4 years ago

Hello!

I'm glad that you found this useful and hopefully it continues to satisfy your needs.

Capturing output has been a planned feature for a while now. It's been listed in the documentation as unimplemented for over a year. At the time, I didn't know how to capture or redirect output in Crystal. Since then, I think I've found a solution. It would use a technique similar to the stdio shard.
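
Roughly, the idea is to point the standard file descriptors at a pipe, so that anything written to them (even from C libraries) lands in a buffer we control. Here's a bare-bones illustration of the direction, not the actual implementation (a real version also has to save the original descriptor and restore it when the test finishes):

# Toy sketch of fd-level redirection: after reopening, fd 1 refers to
# the pipe's write end, so even a plain `puts` lands in our buffer.
# Saving and restoring the original fd is omitted here.
reader, writer = IO.pipe

STDOUT.reopen(writer) # redirect fd 1 into the pipe
puts "Hello World"    # captured instead of printed
STDOUT.flush

STDERR.puts reader.gets # prints the captured line to the terminal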

Spectator tries to mimic RSpec syntax and behavior. The usage for capturing and testing STDOUT and STDERR would follow the RSpec docs.
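
For reference, RSpec's output matcher wraps the code under test in a block; if Spectator mirrors it, specs might eventually read something like this (hypothetical, none of these matchers exist in Spectator yet):

# Hypothetical syntax modeled on RSpec's `output` matcher;
# `output`, `to_stdout`, and `to_stderr` are not implemented.
it "prints usage information" do
  expect { MyApp.show_error_with_help }.to output(/^Usage: /).to_stdout
end

it "prints the error" do
  expect { MyApp.show_error_with_help }.to output(/^Error: /).to_stderr
end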

That aside, I've found it annoying when output from tests gets jumbled in with the output of the test results. So it might be worth having a configuration option to capture or swallow output while tests are running.

I can't guarantee I'll get to this right away, but I'll start looking into it.

UrsaDK commented 4 years ago

Capturing output has been a planned feature for a while now. […] It would use a technique similar to the stdio shard.

I was going to mention the stdio shard as an example in my original post, but the post was getting a bit on the lengthy side, so I decided to hold it back for the comments. 😄

Spectator tries to mimic RSpec syntax and behavior. The usage for capturing and testing STDOUT and STDERR would follow the RSpec docs.

Couldn't agree more! Compatibility with RSpec was one of the things that attracted me to Spectator in the first place, so maintaining that compatibility is highly desirable. 👍 In fact, I should have thought of it myself, but I write CLI apps in Ruby so rarely that it didn't even cross my mind to check whether RSpec already does something similar to what I'm asking.

That aside, I've found it annoying when output from tests gets jumbled in with the output of the test results. So it might be worth having a configuration option to capture or swallow output while tests are running.

Once again, I couldn't agree more!

I can't guarantee I'll get to this right away, but I'll start looking into it.

Thank you! Much appreciated indeed!