Open FriedrichFroebel opened 1 year ago
That's good in theory but hard to do well.
Testing depends on the functionality of the plugin, and "what to test" or "how to test it" will vary significantly from plugin to plugin. In my opinion, the best documentation for tests is usually (and maybe unfortunately) code.
If there are things that could use some explanation... Sure, I'm happy to help. But I can't list them off the top of my head. I don't know what could trip someone up, since I've been cursed with knowing too much about pelican :). So it would (and maybe should) come from someone who went through this process: like you :).
testing correct signal handling
I don't understand what this means. Do you want to test that you're using the correct signal? I honestly don't know how you would test that... Do you want to test that pelican correctly triggers your method with the correct parameters for said signal? That's not a job for your plugin. If there were a test for it, it would go inside pelican. It is safe to assume the signal connection behaves as documented; otherwise it's a bug in pelican (code or documentation).
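In practice that means a plugin's unit test can call the signal handler directly with a stubbed argument, instead of exercising pelican's dispatch machinery at all. A minimal sketch of that pattern; `add_article_count` and `FakeGenerator` are hypothetical names standing in for a real plugin handler and pelican's `ArticlesGenerator`:

```python
class FakeGenerator:
    """Minimal stand-in for pelican's ArticlesGenerator in a unit test."""

    def __init__(self, articles):
        self.articles = articles
        self.context = {}


def add_article_count(generator):
    """Hypothetical plugin handler; in the real plugin it would be
    connected via something like
    signals.article_generator_finalized.connect(add_article_count)."""
    generator.context["article_count"] = len(generator.articles)


# Unit test: exercise the handler directly, no signal machinery involved.
gen = FakeGenerator(articles=["a1", "a2", "a3"])
add_article_count(gen)
assert gen.context["article_count"] == 3
```

Whether pelican delivers the signal at the right moment is pelican's contract; the plugin test only needs to cover what the handler does once it is called.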
I am aware that there is no general all-in-one solution for this and I might have to read about some pelican internals anyway. But I do not want to have to become an expert on the pelican internals just to use them for testing some stuff.
Some examples of things I stumbled upon:
- The pelican.tests.support module, which provides some convenience helpers for testing (although only two of its attributes are exposed publicly).
- Testing correct signal handling.
- For some sort of integration test of the plugin, it might be desirable to run the processing of a set of pages, articles, etc. as pelican would do regularly, to see whether my signal usage is correct and leads to the correct results (apart from manually testing this in a project which actually uses the plugin itself). I am aware that I could use an example project for this as well, but maybe there is some mechanism unknown to me which could simplify this.
Thank you, this is more concrete to expand upon. Some general thoughts...
The tests and their supporting functionality that currently exist in pelican are there to test pelican itself (naturally); not much thought or effort went into providing an "API" for secondary utilities. That is reflected in the lack of documentation, since it was not designed to be public facing, and also in the apparent lack of supporting functionality for certain common operations.
For some time, I have wanted to overhaul the testing in pelican to be more "modular": create articles, content, etc. on the fly and use them, which would provide convenience functionality for tests in pelican-adjacent projects, as well as update the tests to a more pytest style. However, these things require time, and I have been short on that for a while.
So, I am sympathetic to the request and would happily welcome any contributions in that regard :).
There seems to be the pelican.tests.support module which provides some convenience stuff for testing (although exposing only two of the attributes publicly).
Well, they are all public in a way. If you're referring to __all__, that's probably outdated and would need to be updated or removed. It does signal "public" by convention only; practically it only affects * imports.
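That distinction is easy to demonstrate with a throwaway in-memory module (the module name and functions below are made up for the demo): `__all__` controls what a star import pulls in, but a direct `from module import name` ignores it entirely.

```python
import sys
import types

# Build a tiny module in memory to show what __all__ actually affects.
mod = types.ModuleType("demo_mod")
exec(
    "__all__ = ['public_fn']\n"
    "def public_fn(): return 'public'\n"
    "def other_fn(): return 'still importable'\n",
    mod.__dict__,
)
sys.modules["demo_mod"] = mod

# A star import honors __all__: only public_fn comes in.
ns = {}
exec("from demo_mod import *", ns)
assert "public_fn" in ns
assert "other_fn" not in ns

# A direct import ignores __all__ entirely.
from demo_mod import other_fn
assert other_fn() == "still importable"
```

So tools may treat names outside `__all__` as private by convention, but nothing at runtime stops you from importing them.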
How can I generate a set of contents objects like Page, Article, Static etc. with some dummy or custom content?
pelican uses a sample project and the contents within it to do that. I'd suggest the same path.
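The sample-project path boils down to writing a small fixture tree of dummy source files and pointing the generators at it. A stdlib-only sketch of building such a fixture in a test (the file name, metadata, and layout here are invented for illustration; a real test would then feed this directory to pelican via its settings):

```python
import tempfile
from pathlib import Path

# Build a throwaway "sample project" content tree for a test,
# mirroring the pattern pelican's own test suite uses.
content = Path(tempfile.mkdtemp())
(content / "article.md").write_text(
    "Title: Hello\n"
    "Date: 2024-01-01\n"
    "\n"
    "Some body text.\n",
    encoding="utf-8",
)

# A real test would now set PATH to `content` in the pelican settings
# and run the generators over it; here we just confirm the fixture.
assert (content / "article.md").exists()
```

Keeping the fixture content minimal and created on the fly avoids checking a full example project into the plugin repository, though either approach works.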
How to deal with settings?
That's such an open-ended question that it necessarily invites the follow-up questions "deal how?" and "to do what?" :). Again, I'd recommend looking over the tests in pelican. They are pretty comprehensive, and I'd like to think they have enough self-documentation to be accessible.
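The recurring pattern in those tests is: start from a baseline settings dict and override only what the individual test needs. A self-contained sketch of that pattern (the defaults and helper below are illustrative stand-ins, not pelican's actual defaults; pelican itself ships a `read_settings()` helper for the real thing):

```python
# Illustrative baseline; pelican's real defaults are far more extensive.
DEFAULTS = {
    "SITEURL": "",
    "DEFAULT_LANG": "en",
    "PLUGINS": [],
}


def make_settings(**overrides):
    """Return a fresh settings dict with per-test overrides applied."""
    settings = dict(DEFAULTS)
    settings.update(overrides)
    return settings


# Each test customizes only what it cares about.
settings = make_settings(PLUGINS=["my_plugin"], SITEURL="https://example.org")
assert settings["PLUGINS"] == ["my_plugin"]
assert settings["DEFAULT_LANG"] == "en"  # untouched default still present
```

Returning a fresh dict per test keeps tests independent: no test can leak a settings mutation into another.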
For some sort of integration test of the plugin
Well, that generally requires running pelican with a sample project. That's what pelican itself does (with the above-mentioned sample project), and that's what you would need to do. You can use that sample project if it suits your needs; if it does not cover the extent of what your plugin does, you would need to roll your own. There is nothing special there. That's mostly the nature of writing tests.
Thanks for the further explanations. As already mentioned, this would be nice to have, but I understand that it might be hard to cover the different use cases for plugins. I mostly managed to do what I want and used manual testing in an actual project to verify that everything works as intended, which is sufficient for the small plugins I wrote.
There might be an influence regarding documentation on testing from other frameworks like Django, although pelican serves a completely different purpose, being "only" a static site generator.
Well, they are all public in a way. If you're referring to __all__, that's probably outdated and would need to be updated or removed. It does signal "public" in a way maybe by convention only but practically only affects * imports.
Yes, I mean __all__. AFAIK, IDEs like PyCharm rely on this as well to differentiate between public and private API.
Issue
I have written some small pelican plugins in the recent past. To ensure correct behaviour, I added some unit tests as well. This proved to be quite challenging, as the process is basically undocumented. I managed to get some plain unit tests to work with the help of some other plugins, reading the pelican source code, and some trial and error. Nevertheless, I limited this to plain method-level tests; testing correct signal handling etc. would be nice, but I wanted to avoid the hassle of further trial and error for now.
For this reason, I would like to see at least some basic guidance in the official docs on how to correctly write tests for pelican plugins, as good test coverage is generally desirable.