Some actions happen "automagically"

These are dynamic actions: the result of dynaconf_loader is known in advance, but skips and tools are evaluated dynamically during execution and should not be requested when they are not needed. Because of that, a dedicated section in the test result would be handy. It would summarize:
- what was skipped and why (skip reasons are already part of the standard output, so it is an open question whether to reuse what is already there or to implement something extra; see the sketch after this list)
- what tools were used: URLs, namespace(s), routes, and so on
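One option for reusing what is already there: attach the per-test details to the report's sections in a hookwrapper; pytest then prints them under each test on failure or with `-rA`. A minimal sketch, assuming the tools fixture stores what it used on the test item as a hypothetical `_used_tools` attribute:

```python
# conftest.py -- a sketch; `_used_tools` is a hypothetical attribute
# that the tools fixture would have to set on the test item.
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    used = getattr(item, "_used_tools", None)
    if report.when == "call" and used:
        # sections are printed under the test when it fails, or always with -rA
        report.sections.append(("tools used", "\n".join(sorted(used))))
```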
All dynaconf_loader automagic operations should also be reported. Some of them are visible in the header, but it is not indicated why and how those values were chosen; a header hook could spell that out, as sketched below.
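pytest's `pytest_report_header` hook can emit extra header lines. For this to work, dynaconf_loader would have to expose which values it picked and why; it does not today, so the decision list below is purely illustrative:

```python
# conftest.py -- a sketch; dynaconf_loader does not currently expose why it
# chose a value, so LOADER_DECISIONS stands in for such an API.
LOADER_DECISIONS = [
    # (setting, chosen value, why it was chosen) -- illustrative data only
    ("environment", "stage", "DYNACONF_ENV was set in the shell"),
    ("api_url", "https://stage.example.com", "default for environment 'stage'"),
]

def pytest_report_header(config):
    # each returned string becomes one line of the pytest header
    return [f"dynaconf_loader: {name} = {value} ({why})"
            for name, value, why in LOADER_DECISIONS]
```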
Services available through the tools fixture need some sort of reporting to reveal which instances were actually used. Right now there is no such mechanism: the tools in play can only be discovered "by accident" from the logs of other calls/operations, and even that logging may be insufficient because it is printed only on failure. One way to record usage is sketched below.
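A minimal sketch of such a mechanism: the tools fixture builds clients lazily and records every instance a test actually requests into pytest's config stash. The factory names and values are placeholders, since the real fixture's shape is not documented here:

```python
# conftest.py -- a sketch; the real tools fixture's internals are unknown,
# the factories below are placeholders.
import pytest

USED_TOOLS_KEY = pytest.StashKey[set]()

@pytest.fixture
def tools(request):
    factories = {
        "api": lambda: "https://api.stage.example.com",  # placeholder instance
        "db": lambda: "db.stage.example.com:5432",       # placeholder instance
    }
    used = request.config.stash.setdefault(USED_TOOLS_KEY, set())

    def get(name):
        instance = factories[name]()     # evaluated only when requested
        used.add(f"{name}: {instance}")  # record what was actually used
        return instance

    return get
```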
Maybe a custom section in the final pytest report can be utilized for all of the above, for example via `pytest_terminal_summary`:
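A sketch of such a section, combining the stash from the fixture sketch above with the skip reports pytest already collects (not the suite's current behavior):

```python
# conftest.py -- a sketch; USED_TOOLS_KEY is the stash key from the
# tools-fixture sketch above.
def pytest_terminal_summary(terminalreporter, exitstatus, config):
    terminalreporter.section("dynamic test environment")
    for entry in sorted(config.stash.get(USED_TOOLS_KEY, set())):
        terminalreporter.write_line(f"tool used: {entry}")
    for report in terminalreporter.stats.get("skipped", []):
        # for a skipped test, longrepr is a (path, lineno, reason) tuple
        reason = report.longrepr[2] if isinstance(report.longrepr, tuple) else str(report.longrepr)
        terminalreporter.write_line(f"skipped {report.nodeid}: {reason}")
```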