Hey there! I left a few comments on the code. Overall, I think it's okay. I think monkey patching is the way to go for behave-django (since it's very obvious in what it does and only affects a small portion of behave).
Most of the comments I left are about code complexity. I feel like there's a lot of code, maybe a bit too much for that much functionality. I'm just concerned about our code's readability. If we can keep it simpler, that'd be best. Still, I'd like to know why you went the direction you did with some of these things.
Also, the tests seem to be failing. Did we get a regression?
I added a couple more comments, but I think everything is in order. Once we have tests I think we can merge this in. The next release would probably be 0.2.0 because of the API change, but that's cool.
Wow, the mysterious non-issue #14 makes the build fail for Python 2.7 + Django 1.4 here, a new combination. :astonished:
That's not a new combination. Django 1.4 and 1.6 are both mysteriously failing at both Python 2.6 and 2.7. :expressionless:
@mixxorz Do you have an idea how to test the --dry-run feature?
If you want to test if something happened, you have to use mocks.
I think our whole testing system is flawed in that we're calling python manage.py behave. We should be using something like call_command so that we can mock things inside our management command. I'll look into it more.
Here's some code that works
import os

import django
from django.core.management import call_command
from mock import patch, DEFAULT


def test_dont_create_db_with_option_dryrun(self):
    with patch.multiple('behave_django.management.commands.behave',
                        ExistingDatabaseTestRunner=DEFAULT,
                        behave_main=DEFAULT) as values:
        django_test_runner_mock = values['ExistingDatabaseTestRunner']
        behave_main_mock = values['behave_main']
        behave_main_mock.return_value = 0

        # Set up Django so we can use `call_command`
        os.environ.setdefault('DJANGO_SETTINGS_MODULE',
                              'test_project.settings')
        django.setup()

        call_command('behave', dry_run=True)

        # Assert that ExistingDatabaseTestRunner gets called
        django_test_runner_mock.assert_called_with()
It uses the mock library. I don't use mocks a lot so the code might not be up to par, but it works.
I think for Django versions older than 1.7, setup is done via django.core.management.setup_environ.
Seems to work fine without. With Django > 1.6, I need to set the environment variable before the patching takes place, though. That's why I've moved the related code outside of the with-statement.
I'll refactor the test code using decorators to make it more readable. I'll have to move setting the environment variable to setUp() then. That may potentially have side effects because it always overrides the settings module. We'll see when I commit.
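Here's a minimal sketch of what that decorator-based version could look like, assuming the patch targets from the snippet above; the DryRunTests class name is hypothetical, and note that the mock arguments arrive in bottom-up decorator order:

import os
import unittest

import django
from django.core.management import call_command
from mock import patch


class DryRunTests(unittest.TestCase):  # hypothetical name for the test case

    def setUp(self):
        # Always overrides the settings module, the side effect mentioned above
        os.environ['DJANGO_SETTINGS_MODULE'] = 'test_project.settings'
        if hasattr(django, 'setup'):
            django.setup()  # only exists on Django >= 1.7

    @patch('behave_django.management.commands.behave.ExistingDatabaseTestRunner')
    @patch('behave_django.management.commands.behave.behave_main', return_value=0)
    def test_dont_create_db_with_option_dryrun(self, behave_main_mock,
                                               runner_mock):
        # Decorators apply bottom-up, hence behave_main's mock comes first
        call_command('behave', dry_run=True)

        runner_mock.assert_called_with()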
Okay, that's probably it. Though I must confess I don't get why I have to pass the arguments for one object to the other mock object. That's totally contradictory. :anguished:
Also, we're now only testing whether the ExistingDatabaseTestRunner is being used. Apart from testing whether the other test runner is not being used, shouldn't we also test a) that really no test database is being created, and b) that tests against an existing database (e.g. a prepared SQLite DB) actually execute successfully?
I don't understand your first concern...
For the second one, we have to make some assumptions about Django. Django calls the runner's setup_databases() method to set up the test database, correct? In that case, if setup_databases() does nothing, as in ExistingDatabaseTestRunner, then nothing will happen. All we have to do now is test that the correct runner is used in a couple of instances (--dry-run and --use-existing-database). There's no easy way to test whether a database is created or not. This is as close as we're going to get.
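By analogy with the --dry-run test above, a sketch for the second case; the use_existing_database keyword is an assumption about how the option reaches call_command:

def test_dont_create_db_with_option_use_existing_database(self):
    with patch.multiple('behave_django.management.commands.behave',
                        ExistingDatabaseTestRunner=DEFAULT,
                        behave_main=DEFAULT) as values:
        values['behave_main'].return_value = 0

        call_command('behave', use_existing_database=True)

        # Again, only the choice of runner is asserted, not any database work
        values['ExistingDatabaseTestRunner'].assert_called_with()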
For point b), we could use a test for this. All we have to do is make sure that setup_databases is being called. We don't actually care if the database is created; all we care about is that we told the runner to set up the databases. This is because the setup_databases method of StaticLiveServerTestCase is probably already tested by the Django mainline tests. We don't need to worry about it.
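A rough sketch of that check; BehaviorDrivenTestRunner is an assumed name for the regular runner, and the assertion assumes the command calls setup_databases() without arguments:

def test_default_runner_sets_up_databases(self):
    with patch.multiple('behave_django.management.commands.behave',
                        BehaviorDrivenTestRunner=DEFAULT,
                        behave_main=DEFAULT) as values:
        runner_mock = values['BehaviorDrivenTestRunner']
        values['behave_main'].return_value = 0

        call_command('behave')

        # We only care that the runner was told to set up the databases,
        # not whether a database actually got created
        runner_mock.return_value.setup_databases.assert_called_with()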
The tests for behave-django in general could be better, for example testing that our command returns behave's return code. If we wanted to make a better test, we could mock behave_main(), have it return an arbitrary number, mock sys.exit() and check that it's called with the number we set for behave_main(). By doing this, we don't test whether behave returns an appropriate exit status, we just test that we're correctly handling whatever the exit status may be.
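Something along these lines, with the patch targets assumed as before and sys.exit patched as a module attribute (which only works if the command calls it via the sys module):

@patch('behave_django.management.commands.behave.ExistingDatabaseTestRunner')
@patch('behave_django.management.commands.behave.behave_main', return_value=42)
@patch('sys.exit')
def test_command_forwards_behave_exit_status(self, exit_mock,
                                             behave_main_mock, runner_mock):
    # Assumes Django has been set up as in the earlier tests
    call_command('behave', dry_run=True)

    # Whatever behave_main() returned should go straight to sys.exit()
    exit_mock.assert_called_with(42)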
In any case. Great job on this PR! I'll merge it in now. I'll make a new topic branch to rewrite our tests.
I think the flaw is in the assumption that setup_databases() actually does nothing. What if someone changes or removes the implementation of that method? The test won't detect it.
My idea is there should be a test that runs the whole set of tests (or a representative subset) against an existing database predefined in the test project settings. Is that a broken idea?
Anyway, thanks for merging and the awesome collaboration! :smiley:
Yep! The test won't detect that. But there are a lot of things classic tests won't be able to detect either. It's really really unavoidable.
Last question: Will you release v0.2.0 now?
I was about to, but then I realized that maybe we can make it better. We can add before_scenario() and after_scenario() functions in environment.py that do nothing but print a deprecation notice. I'll do that in a bit, then I might release it tomorrow. (24hrs)
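A minimal sketch of that shim, assuming it lives in behave_django/environment.py and using the warnings module rather than a bare print; the wording of the notice is made up here:

import warnings

DEPRECATION_NOTICE = ('Importing hooks from behave_django.environment is '
                      'deprecated and no longer necessary.')


def before_scenario(context, scenario):
    warnings.warn(DEPRECATION_NOTICE, DeprecationWarning)


def after_scenario(context, scenario):
    warnings.warn(DEPRECATION_NOTICE, DeprecationWarning)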
Ahh, that's awesome, you're right! You think about the behave-django user base, veeery nice.
I can prepare that for you if you want.
This commit is crazy: Apart from adding features it changes the API of behave-django.
No environment.py is needed anymore; behave-django integrates itself into behave with the help of a monkey patch.
The code from environment.py has moved to runner.py, which provides two runners, a regular one and one that does not create a test database (a rough sketch follows below).
I still need to provide tests. (Let's call it "code review" as long as they're not provided.)
Fixes #11. Fixes #27.
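For reference, a rough sketch of what the runner.py described above could look like; apart from ExistingDatabaseTestRunner, the class names and the DiscoverRunner base class (Django >= 1.6) are assumptions, not the actual implementation:

from django.test.runner import DiscoverRunner


class BehaviorDrivenTestRunner(DiscoverRunner):
    """Regular runner: creates and destroys a test database as usual."""


class ExistingDatabaseTestRunner(DiscoverRunner):
    """Runner for --dry-run and --use-existing-database: leaves the DB alone."""

    def setup_databases(self, **kwargs):
        pass  # intentionally do nothing

    def teardown_databases(self, old_config, **kwargs):
        pass  # intentionally do nothing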