d0ugal / python-rfxcom

A Python 3.3+ library for working with your RFXTrx and automating your home.
BSD 3-Clause "New" or "Revised" License

call_later should call write() not its result #12

Closed kalfa closed 10 years ago

kalfa commented 10 years ago

Unit tests still pass.

This is an interesting case where testing an object's behaviour is tricky. The unit test relied on the call to write(), which did happen, but at the wrong time.

I'm tempted to rewrite the unit tests for the transports, which are quite sparse anyway :)

Having them written in a more complete way would have caught this earlier. I might eventually submit something on the UT side ;)

Edit: I also added an improved UT, which checks the role while removing any dependency. What it doesn't do is check that the calls to write() are made in the correct order, which seems quite important for the setup sequence, so it might be worth testing.

coveralls commented 10 years ago


Coverage decreased (-0.35%) when pulling 6b16117b0241e63f148d4b47641622dfa045d409 on kalfa:master into e2563c8606759338035213d6e2d3ad1cc4200e0c on d0ugal:master.

d0ugal commented 10 years ago

Good spot, you can tell this isn't in a released version or my setup would be broken :) - so it's great to find it now, thanks!

I have had a hard time figuring out an approach to testing the asyncio integration. I never found a way to mock the file descriptor interface but it looks like you were able to bypass that problem by just mocking asyncio completely. I should have thought of that!

The downside to this approach is that it removes the one full iteration of the event loop which I think was quite a nice test to have - even if I did it badly. However, I can add this in again later.
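That full-iteration style of test can still be written without mocking the file descriptor interface, by driving a real loop briefly. This is a generic sketch, not code from the repo:

```python
import asyncio


def test_one_full_iteration():
    """Component-style check: run a real event loop long enough for a
    scheduled callback to actually fire, then stop it."""
    loop = asyncio.new_event_loop()
    written = []
    # Schedule the callable itself (not its result), as the fix requires.
    loop.call_later(0.01, written.append, b"reset")
    loop.call_later(0.05, loop.stop)  # end the iteration
    loop.run_forever()
    loop.close()
    assert written == [b"reset"]
```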

kalfa commented 10 years ago

I agree, it'd be nice; in fact I was in doubt whether to leave it and add the new test, or replace it.

Fact is, what you did there wasn't really unit testing; it's closer to component testing. I wasn't sure what you wanted to cover, and since the rest looks like unit testing, I preferred to propose a replacement. Both are absolutely fine to have, though. But if you want to have different types/classes of automated test cases, it'd be nice to have them split into different trees in the tests/ hierarchy (or somewhere else). Mixing them up tends to go wrong ;-)

The reason for the split is to allow fast execution of UT at commit/post-commit time. Higher test categories might take longer/more resources.

Also, they cover different things.

I test only object roles (how objects communicate with each other), not behaviours (unless the function is the direct producer of a specific result). What I don't do, and which might be important in the setup sequence of events since we work on a serial line, is enforce that a specific order of events is followed. I'm working on it ;-)

Ideally the loop has been unit tested by Guido's crew and we don't need to care about it nor interact with it. In the literature (IIRC) it's called a collaborator - as is the device object - and we need to isolate collaborators in order to focus on the unit under test. We assume that the loop instance works and that, if we use call_later() for instance, we'll eventually be called back (in real life) as its docs say.

We just need to ensure that we call call_later the right way. If we need to test the callback, we call it in the test directly, faking being called back. It's usually easy. Any other public method of ours (and possibly the key non-public ones) will be covered by us at some point.
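That style can be sketched like this (all names are hypothetical, including the b"RESET" payload; the collaborators are plain Mocks):

```python
from unittest import mock


class Transport:
    """Hypothetical transport; loop and device are collaborators."""

    def __init__(self, loop, device):
        self.loop = loop
        self.device = device

    def setup(self):
        # The role under test: schedule the callback, don't invoke it.
        self.loop.call_later(0.1, self.send_reset)

    def send_reset(self):
        self.device.write(b"RESET")  # hypothetical payload


def test_setup_schedules_reset():
    loop, device = mock.Mock(), mock.Mock()
    transport = Transport(loop, device)
    transport.setup()
    # Check only that call_later was called the right way...
    loop.call_later.assert_called_once_with(0.1, transport.send_reset)
    device.write.assert_not_called()  # ...and that nothing ran early.


def test_reset_callback():
    # Test the callback by calling it directly, faking being called back.
    loop, device = mock.Mock(), mock.Mock()
    Transport(loop, device).send_reset()
    device.write.assert_called_once_with(b"RESET")
```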

If you need to test the loop or the device together with the transport, you move to component testing, or system testing at best :) That's the reason I was working on a StubSerialClass: a double of the real serial.Serial instance to pass to the transport, so it can be tested nicely at another level.
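The actual StubSerialClass isn't shown in this issue; a double of that kind might look roughly like this (my assumption, only mimicking the read/write surface of serial.Serial):

```python
class StubSerial:
    """Hypothetical in-memory double for serial.Serial: records writes
    and serves canned bytes to reads, so a transport can be exercised
    without any hardware attached."""

    def __init__(self, to_read=b""):
        self._to_read = bytearray(to_read)  # bytes the transport will read
        self.written = bytearray()          # bytes the transport wrote

    def write(self, data):
        self.written.extend(data)
        return len(data)  # serial.Serial.write returns bytes written

    def read(self, size=1):
        chunk, self._to_read = self._to_read[:size], self._to_read[size:]
        return bytes(chunk)
```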

Sorry for the long description of subjects you surely already know; it's kind of compulsive for me to have properly tested/testable code ;-)

d0ugal commented 10 years ago

Sure, I guess ideally I would like a solid set of integration tests, but that's quite hard when you are using a hardware device.

Any improvements to the testing would be great. At the moment it's a best effort with the time I've been able to commit to it, but I think generally the tests are okay.