ApexAI / apex_rostest

Framework for ROS2 Integration Testing

Expand test scope to the launch fixture #20

Open hidmic opened 5 years ago

hidmic commented 5 years ago

Feature request

Background

Currently, tests perform assertions against launched processes only, and only indirectly, through process info and output handlers that use process names as keys. This is useful when the launch fixture is loaded from another file, but it is error-prone. Moreover, launch does not support unique name-based referencing.

Description

Put the actions and the launch context in scope so that tests can perform assertions against them. Process info, output, and exit codes can then be retrieved from them directly (through helper functions or custom assertions, to minimize coupling with launch).
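For illustration, a test against an in-scope action might look like the sketch below. Note that none of these names exist in apex_rostest today; exit_code_of, self.launch_context, and self.dut_action are all hypothetical:

import unittest

# Hypothetical helper: look up an action's exit code via the launch
# context, so the test never touches launch internals directly.
def exit_code_of(launch_context, action):
    raise NotImplementedError  # placeholder; no such helper exists yet

class TestDutProcess(unittest.TestCase):

    def test_dut_exited_cleanly(self):
        # 'self.launch_context' and 'self.dut_action' are assumed to be
        # bound to the test case by the framework (hypothetical).
        self.assertEqual(exit_code_of(self.launch_context, self.dut_action), 0)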

Implementation considerations

It's much more straightforward to implement if #19 is available.

pbaughman commented 5 years ago

@hidmic It's already possible to use launch actions as keys. Take a look at the good_proc.test.py example. We can pass the launch action for the 'good_proc' process to assertSequentialStdout.

This PR made all of the assertions we provide take launch actions OR process names as keys.

That may not fully address your issue though. Can you give me an example of something else you want to access that's not available to the test?

I'm also not super thrilled about how you have to scope the dut_process to make this work. There's certainly room for improvement here.
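For reference, the pattern in question looks roughly like the following (a sketch modeled from memory on good_proc.test.py; exact import paths and helper signatures may differ):

import unittest

import launch
import launch.actions

from apex_launchtest.asserts import assertSequentialStdout

# The action has to live at module scope so that both the launch fixture
# and the test below can reference it.
dut_process = launch.actions.ExecuteProcess(
    cmd=['good_proc'],  # assumed executable name
)


def generate_test_description(ready_fn):
    return launch.LaunchDescription([
        dut_process,
        # Signal that the fixture is up and the tests may start.
        launch.actions.OpaqueFunction(function=lambda context: ready_fn()),
    ])


class TestGoodProcOutput(unittest.TestCase):

    def test_output_is_sequential(self):
        # The launch action itself, rather than the process name, is the key.
        with assertSequentialStdout(self.proc_output, dut_process) as cm:
            cm.assertInStdout('Loop 1')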

hidmic commented 5 years ago

It's already possible to use launch actions as keys.

Yes! I've seen that. But to use that variant, action instances have to be pushed to module (global) scope. It'd be much cleaner to bind them to the fixture and make them somehow accessible to a test that runs with that fixture.

Can you give me an example of something else you want to access that's not available to the test?

Off the top of my head: testing that some events are sent in the right order, testing that launch reacts to those events in a given way, and so on, to enable more launch-level testing. ROS 2 launch opens the door to complex, reactive launch logic, something that roslaunch did not have, and it'd be nice to be able to test that as well.
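As a concrete example, consider a fixture with reactive logic where one process's exit triggers another, using launch's event-handler API. Verifying that the reaction actually fired, and in the right order, is exactly what's hard to express today (the command names here are made up):

import launch
import launch.actions
import launch.event_handlers

first_proc = launch.actions.ExecuteProcess(cmd=['first_proc'])
second_proc = launch.actions.ExecuteProcess(cmd=['second_proc'])

# When first_proc exits, launch reacts by starting second_proc. A
# launch-level test would assert that this reaction actually happened.
ld = launch.LaunchDescription([
    first_proc,
    launch.actions.RegisterEventHandler(
        launch.event_handlers.OnProcessExit(
            target_action=first_proc,
            on_exit=[second_proc],
        )
    ),
])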

pbaughman commented 5 years ago

@hidmic It's probably not sufficient to just give the launch description to the tests, right? Some actions may be hard to discover if they're nested and only launched as a result of other actions.

Making things accessible to the tests is easy. More spit-balling here, but what if generate_test_description could return a tuple of (LaunchDescription, Dictionary), where the dictionary is {property_name: object} and everything in the dictionary would be made available to the test methods?

We'd have to make sure that property_name didn't collide with a method or a field that's already part of the test, but that shouldn't be too bad (a sketch of such a guard follows the example below).

Follow-up Edit

It would look something like this:

import unittest

import launch
import launch.actions


def generate_test_description(ready_fn):
    proc1 = launch.actions.ExecuteProcess(. . .)
    proc2 = launch.actions.ExecuteProcess(. . .)
    checker_process = launch.actions.ExecuteProcess(. . .)

    # Note: LaunchDescription lives in the launch package itself,
    # not in launch.actions.
    ld = launch.LaunchDescription([
        proc1,
        proc2,
        checker_process,
        . . .
    ])

    return (
        ld,
        {'checker': checker_process}
    )

# Then down in the tests. . .

class TestChecker(unittest.TestCase):

    def test_checker_is_ok(self):
        # 'self.checker' would be bound from the dictionary returned above.
        assertInStdout(self.checker, "success")
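On the collision concern above, the injection step could be guarded along these lines (a hypothetical sketch; apex_rostest has no such function today):

def _bind_test_objects(test_case, extra_objects):
    # Attach each object from the dictionary returned by
    # generate_test_description to the test case, refusing names that
    # would shadow an existing attribute (e.g. unittest.TestCase methods).
    for name, obj in extra_objects.items():
        if hasattr(test_case, name):
            raise RuntimeError(
                "'{}' collides with an existing test attribute".format(name))
        setattr(test_case, name, obj)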
pbaughman commented 5 years ago

Upon closer reflection: you said "LaunchContext", which I failed to address above. I think that reducing the scope of launch actions and making them available to the test without putting them at the module level is still a worthy feature, but I don't think it addresses your use case.

pbaughman commented 5 years ago

I think this issue has been at least partially addressed by https://github.com/ApexAI/apex_rostest/pull/23, but @hidmic, you can have the final say on what to do with this issue (open/closed). I don't think I've covered your use case exactly.

dejanpan commented 5 years ago

@hidmic, please post an update of what has been done vs. what is still missing.

hidmic commented 5 years ago

Alright, to summarize:

- Binding launch actions to the fixture and making them available to the tests (via the dictionary returned from generate_test_description) has been done in #23.
- Making the LaunchContext available to tests, to enable launch-level testing of events and reactions, is still missing.

We can defer further improvements till we have concrete use cases for the latter.