llimllib / limbo

A simple, clean, easy to modify Slack chatbot
MIT License

Tests should run with or without the data from vcr's fixtures #123

Open topher200 opened 6 years ago

topher200 commented 6 years ago

vcr seems like a great tool for rapid and offline testing! However, it seems to be masking changes to the underlying APIs that plugins are depending on. If I clear out its fixtures, many tests start failing for me. Here's my test results:

test/test_cmd.py ..
test/test_limbo.py ......................
test/test_plugins/test_banner.py ..
test/test_plugins/test_calc.py ..
test/test_plugins/test_commit.py F
test/test_plugins/test_dog.py .
test/test_plugins/test_flip.py ....
test/test_plugins/test_gif.py F.
test/test_plugins/test_github.py F
test/test_plugins/test_google.py .....
test/test_plugins/test_help.py ....
test/test_plugins/test_image.py F.
test/test_plugins/test_map.py .
test/test_plugins/test_stock.py F..F.
test/test_plugins/test_stockphoto.py ..
test/test_plugins/test_tag.py F
test/test_plugins/test_weather.py FF
test/test_plugins/test_wiki.py F.
test/test_plugins/test_youtube.py F.

(my stockphoto.py isn't failing because of #122)


Steps to reproduce:

  1. Run make test ✅ tests pass!
  2. Run rm test/fixtures/*.yaml
  3. Run make test ❌ observe many failing tests
topher200 commented 6 years ago

Obviously we should fix the failing tests.

The second part of the issue is that the fixtures mask problems with the tests and plugins. I thought about adding an option to the Makefile to clear out the fixtures, but the more I thought about it... should they be committed at all? If we don't commit vcr's output (and add it to .gitignore instead), TravisCI will test against the real APIs on each run.

llimllib commented 6 years ago

The fixtures are there to avoid testing with real APIs on each run; ideally a test run would not hit the network at all. (Right now I think a couple of the tests do, and I've been too lazy to fix them.)
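The record/replay idea behind vcr can be sketched in plain stdlib Python (this is an illustration of the mechanism, not vcr's actual API): the first run hits the real endpoint and saves the response; every later run replays the saved response without touching the network.

```python
import json
import os
import tempfile

def use_cassette(path, fetch):
    """Minimal sketch of vcr's record/replay idea (NOT vcr's real API).

    First call: run the real `fetch` and save its result to `path`.
    Later calls: return the saved result; the network is never touched.
    """
    if os.path.exists(path):          # replay: cassette already recorded
        with open(path) as f:
            return json.load(f)
    result = fetch()                  # record: hit the "real API" once
    with open(path, "w") as f:
        json.dump(result, f)
    return result

# Stand-in for a real HTTP call, counting how often it actually runs:
calls = []

def fake_api():
    calls.append(1)
    return {"status": 200, "body": "hello"}

path = os.path.join(tempfile.gettempdir(), "commit_demo_cassette.json")
if os.path.exists(path):
    os.remove(path)

first = use_cassette(path, fake_api)   # records: fake_api runs once
second = use_cassette(path, fake_api)  # replays: fake_api is NOT called again
```

This is exactly why stale fixtures can mask API changes: once recorded, the cassette is replayed forever, even after the real API's responses drift.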

topher200 commented 6 years ago

Having each test run NOT hit the real APIs is awesome and a great feature for this project. vcr is a really elegant solution to the problem - I'd never heard of it before and I've been loving it for local dev/testing.

I didn't realize until now that some of the tests were written to check specifically against the output that the fixture happened to save. Take test_commit.py for example. If I remove the fixture and run pytest -s test/test_plugins/test_commit.py, I get this error:

    def test_commit():
      with vcr.use_cassette('test/fixtures/commit.yaml'):
        ret = on_message({"text": u"!commit"}, None)
>       assert 'stuff' in ret
E       AssertionError: assert 'stuff' in 'derp, helper method rename\n'

test/test_plugins/test_commit.py:15: AssertionError

I feel like that's selling the tests short! There's nothing wrong with the plugin, the API, or the test... the test should pass. If it doesn't, that means that we can never use our plugin tests to check against changes to external APIs. Issues like #122 may stay hidden forever.


Could a good balance be...

If yes, I'll make the necessary modifications to the current tests.

llimllib commented 6 years ago

Let's take the commit plugin test as an example; if we don't use a fixture to specify the return value of the commit API, what can we test? Just that the website returned a 200? How do we know that the plugin worked correctly?

topher200 commented 6 years ago

Great discussion question!

I'd do this:

diff --git a/test/test_plugins/test_commit.py b/test/test_plugins/test_commit.py
index 9e44733..a5e725f 100644
--- a/test/test_plugins/test_commit.py
+++ b/test/test_plugins/test_commit.py
@@ -2,6 +2,7 @@
 import os
 import sys

+import six
 import vcr

 DIR = os.path.dirname(os.path.realpath(__file__))
@@ -12,4 +13,5 @@ from commit import on_message
 def test_commit():
   with vcr.use_cassette('test/fixtures/commit.yaml'):
     ret = on_message({"text": u"!commit"}, None)
-    assert 'stuff' in ret
+    assert isinstance(ret, six.string_types), ret
+    assert len(ret) > 0

That code answers these questions:

  1. did the plugin correctly match on our input text?
  2. did the plugin return some kind of text response?
  3. does that text response have something in it?
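The same shape-based testing style can be sketched standalone (the `on_message` below is a stand-in for the real commit plugin, and the assertion uses `str` directly rather than `six.string_types`): instead of pinning the exact payload a fixture happened to capture, the test only checks that the plugin matched the trigger and returned non-empty text.

```python
def on_message(msg, server):
    """Stand-in for the commit plugin: match `!commit`, return a message."""
    if "!commit" not in msg.get("text", ""):
        return None                      # plugin didn't match the input
    # In the real plugin this text would come from the external API:
    return "derp, helper method rename\n"

def test_commit():
    ret = on_message({"text": u"!commit"}, None)
    # Shape-based assertions: don't depend on the exact API response,
    # just verify the plugin matched and produced non-empty text.
    assert isinstance(ret, str), ret
    assert len(ret) > 0

test_commit()
```

Tests written this way stay green whether the response comes from a recorded cassette or the live API, since they assert on shape rather than content.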
topher200 commented 6 years ago

To ask the question differently, what is the expected path to find issues like #122? How do we know when an API has changed?

llimllib commented 6 years ago

OK, that's a good question, and one I'd thought about but not deeply enough yet.

I think the right answer is to have a separate "network" test suite. So, if you run make test, you get all the non-network tests, but if you run make test-network you get the network tests. If you run make test-all, you get both.
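One way to wire that up (a sketch; the marker name and targets here are assumptions, not something the repo already has) is to tag live-API tests with a pytest marker, say `@pytest.mark.network`, and split the Makefile targets by marker selection:

```makefile
# Tag live-API tests with @pytest.mark.network in the test files,
# then select or deselect them by marker:
test:
	pytest -m "not network" test/

test-network:
	pytest -m "network" test/

test-all:
	pytest test/
```

With this split, `make test` stays fast and offline for local dev and CI, while `make test-network` can be run periodically to catch upstream API changes like #122.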

I'm currently not convinced that Travis should run the network tests, simply because they'll be much slower than the non-network tests.

How does this solution strike you?

topher200 commented 6 years ago

That solution sounds great! I'll work on it when I get the chance.