Closed pscadding closed 8 years ago
After making a complete and utter mess of trying to rebase the previous fork, I gave up and accepted failure (I was doing that to fix my incorrectly authored commits). So unfortunately I have re-forked and committed again, which means we lose the conversation of the previous pull request.
Adding in some of the comments from last time by @mottosso:
Linking to the corresponding forum thread: http://forums.pyblish.com/t/pluginprocessed-callback/264

About the change, could you make sure your commits are correctly authored to you? The username and email you use on GitHub must correspond to the ones used to make the commits; otherwise they will appear as anonymous, as you can see above.
Secondly, I don't think you need all of the keyword arguments, but instead only the result=, since it will in turn contain reference to the corresponding plugin, context and instance.
Thirdly, could you also make sure this applies to implicit plug-ins as well? You are welcome to put this signal in the main process() function if you wish.
Finally, there will need to be one or more tests to exercise all of what you expect of the feature.
About updating the API docs, you are most welcome to do so.
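The suggestion above, passing only result= rather than separate keyword arguments, can be sketched with a minimal stand-in for the emit mechanism. The registry and helper names here are illustrative, not pyblish's actual internals:

```python
# Minimal sketch of the suggested callback signature: the handler takes
# only `result`, a dict that already carries the plugin, instance and
# success flag. `_callbacks`, `register_callback` and `emit` are
# illustrative stand-ins, not pyblish internals.
_callbacks = {}
received = []

def register_callback(signal, callback):
    _callbacks.setdefault(signal, []).append(callback)

def emit(signal, **kwargs):
    for callback in _callbacks.get(signal, []):
        callback(**kwargs)

def on_processed(result):
    # Everything is bundled in `result`; no extra keyword arguments needed
    received.append((result["plugin"], result["success"]))

register_callback("pluginProcessed", on_processed)
emit("pluginProcessed", result={"plugin": "MyCollector",
                                "instance": None,
                                "success": True})
```

The point of the single argument is forward compatibility: new fields can be added to the result dict without changing every registered callback's signature.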
I have addressed points 1-3. I still need to create tests and update the docs.
Hi Marcus, I've only just gotten round to writing the test for this feature. I basically copied and modified one of the tests in test_events. I wasn't sure how to test my tests; I tried running run_testsuite.py first and then run_coverage.py. Both, however, failed at my test (see below). I take from that error that pluginProcessed never fired, or that it didn't manage to call the function. Not sure why, though. I have committed and pushed my test so that you can see what I have done. Any thoughts on what's going wrong here?
FAIL: pluginProcessed is emitted upon a plugin being processed, regardless of its success
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\nose\case.py", line 197, in runTest
self.test(*self.arg)
File "C:\Users\Philips\Documents\source_code\pyblish-base\tests\test_events.py", line 53, in test_plugin_processed_event
assert count["#"] == 1, count
AssertionError: {'#': 0}
----------------------------------------------------------------------
Thanks, Phil
Hey @pscadding, no problem, good to see you're back at it.
To run tests, you're doing it just right.
$ # ensure pyblish-base is available on your PYTHONPATH
$ python run_testsuite.py
And about the failure, it looks like your test isn't actually registering any plug-ins for util.publish() to process. Have a look at the surrounding tests: they do a register_plugin right before running util.publish().
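The fix described here can be sketched with a stand-in for the real API; `register_plugin`, `publish` and the counting dict below are illustrative only, mirroring the `count["#"]` idiom from the failing test:

```python
# Sketch of the fix: register a dummy plug-in so publish() has
# something to process. Stand-ins for the pyblish API, not the real
# implementation.
registered = []
count = {"#": 0}

def register_plugin(plugin):
    registered.append(plugin)

def publish():
    # Without a registered plug-in the loop body never runs, so the
    # pluginProcessed callback is never emitted -- exactly the
    # AssertionError {'#': 0} seen in the traceback above.
    for plugin in registered:
        plugin()
        count["#"] += 1  # stands in for emitting "pluginProcessed"

class MyCollector(object):
    def __call__(self):
        pass

register_plugin(MyCollector())
publish()
assert count["#"] == 1, count
```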
Ah yeah, cool, I wondered how it was working. Well, that makes sense. I've updated it with a couple of dummy plugins to test it with. It now seems to test correctly and without failure. However, I'm still a little confused why the two tests above mine (test_validated_event and test_published_event) don't register any plugins. Why do they not require any registered plugins?
I still need to update the documentation.
Why do they not require any registered plugins?
If you have a look at the function these tests are running, you'll see that they aren't emitting signals from a plug-in, but rather from within the function call itself.
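The distinction can be illustrated with a small sketch: an event like "published" fires once from within the publish call itself, whereas "pluginProcessed" fires once per registered plug-in. The names and helpers here are a hypothetical simplification, not pyblish's actual code:

```python
# Illustrative contrast between function-level and per-plugin events.
events = []

def emit(signal):
    events.append(signal)

def publish(plugins):
    for plugin in plugins:
        plugin()
        emit("pluginProcessed")  # needs at least one plug-in to fire
    emit("published")            # fires even with zero plug-ins

publish([])                  # no plug-ins: only the function-level event
assert events == ["published"]

publish([lambda: None])      # one plug-in: both events fire
assert events == ["published", "pluginProcessed", "published"]
```

This is why test_validated_event and test_published_event pass without registering anything, while a pluginProcessed test cannot.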
Great work! Looks like we might be able to merge this soon!
Documentation updated: https://github.com/pyblish/apidocs/pull/2 Let me know if I need to do anything more, or change anything.
And thank you for your help and patience.
they aren't emitting signals from a plug-in, but rather from within the function call itself.
Ah, of course. Yeah, it makes sense that those two events wouldn't require that. However, I can't find an example of a test for pluginFailed, which would presumably require a plugin to be provided.
However I can't find an example of a test for pluginFailed, which would presumably require a plugin to be provided.
Good catch, I think you've found a missing test. We use coverage.py to get an overview of what isn't yet tested. It isn't perfect, and it looks like this one slipped through the cracks.
Here you can see where it appears to be run at least 3 times during the tests, but it doesn't necessarily mean there is a test for this one line of code.
You can run this locally as well and get a webpage generated with a similar report. It tells you which lines of code were run during tests and which weren't. This is the run_coverage.py script.
You are welcome to add a test for this signal as well; otherwise, open an issue so we can keep track of it.
Thanks for the sharp eyes!
AppVeyor failed, but turns out it was a fluke on their end, not ours. I re-ran the build and it succeeded.
This looks good! If you're happy, I'm happy to merge this now.
Let me know if there's anything else you would like to get in.
I added a pluginFailed test last night, although it doesn't currently verify all the arg types it's being passed. If you're happy enough with how it is, then yeah, I'm happy for it to be merged. There's nothing else I want to add to it.
Thank you for the code. It's updated now with those new assertions.
Thanks @pscadding!
Great work @pscadding! Awesome to see a new contributor :)
Added a callback that gets emitted every time a plugin gets processed, regardless of the result.
Added to the plugin.process function.