**jernst** opened 5 months ago
What's your current thinking about associating capabilities with tests? What do you think about me initially creating something similar to my activitypub-testsuite capabilities metadata support and associating the capabilities with tests (or test steps) using a decorator? I could then modify the framework to check that a server supports the required capabilities before running a test. In other words, the test would be automatically "disabled".
My framework also supports server-specific per-test metadata for information like expected HTTP status codes (where they are allowed to vary), reasons for failure, and so on. This information would be passed to a test step and parts of it might be included in the test results (like reason for skipping / expected failure, if the failure identifies a bug, or an underspecified requirement, etc.).
Where should I look to find the list of capabilities you found? Is it basically the union of what's mentioned in `*/*/config.toml`?
Are those referring to "sending" or "receiving" capabilities? It seems to me something like `s2s.inbox.post.Like` has two aspects to it:

- the ability for an application to send a Like;
- the ability for an application to receive and process and understand a Like.

A given application could have both, or only either one, or none of those capabilities, right?
Also, where should I look for what you are saying about HTTP status codes?
> Where should I look to find the list of capabilities you found? Is it basically the union of what's mentioned in `*/*/config.toml`?
Yes, those were the ones that were useful to me for testing my server and the handful of other servers I tested.
> Are those referring to "sending" or "receiving" capabilities? It seems to me something like `s2s.inbox.post.Like` has two aspects to it:
>
> - the ability for an application to send a Like;
> - the ability for an application to receive and process and understand a Like.
>
> A given application could have both, or only either one, or none of those capabilities, right?
My test suite is primarily focused on ActivityPub requirements with a few additions for services like WebFinger and Nodeinfo. AP describes side-effects for receiving a Like in the inbox (S2S) and outbox (C2S). AFAICT, there's no explicit requirement for a server to send a Like (other than general delivery requirements, mentioned below).
What would trigger an application to send a Like? One possibility is the delivery side-effect of a C2S POST to the outbox or forwarding from the inbox. There are tests for those behaviors, but they are not specific to a Like activity. Other possibilities include an event from a user interface (clicking an icon) or an automated service (bot). Testing the latter two scenarios was outside the scope of my tests.
If we need capabilities related to publishing, we could define entries for `s2s.publish.Like`, for example. Depending on the sophistication of the feditest tests, it may need to be `s2s.publish.Like.Note`, etc., since a server will typically not support liking all AS2 object types.
> Also, where should I look for what you are saying about HTTP status codes?
The apex-aptesting project's `config.toml` has some. I mostly tried to define reasonable defaults in the test code, including reasonable sets of possible codes (e.g., either 401 or 403).
```toml
# apex_aptesting/config.toml
[test_actor_blocking]
status_code = 200
```
Moving this back into the backlog. We should pull it back out when we (re-)discover the need.
We need to capture which capabilities an application actually supports, so we don't try to test things that the application does not claim to do.
This relates to FEP-9fde: Mechanism for servers to expose supported operations.
Perhaps we could implement something like that as proxy objects for those applications that don't provide this metadata (currently all of them?), which could then be removed if/when support shows up in applications.
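One possible shape for such a proxy, sketched under the assumption that we keep a hand-maintained capability table per application until servers expose FEP-9fde-style metadata themselves (all class, function, and application names here are hypothetical):

```python
# Sketch of a capability proxy: serve hand-maintained capability lists for
# applications that don't yet publish their own metadata. All names and
# the contents of the table are illustrative assumptions.
KNOWN_CAPABILITIES = {
    # Hand-maintained entries, removable once apps publish this themselves.
    "exampleapp": {"s2s.inbox.post.Like", "s2s.inbox.post.Follow"},
}


class CapabilityProxy:
    """Answers capability queries for an application, preferring metadata
    the server publishes itself and falling back to our local table."""

    def __init__(self, app_name, fetch_remote=None):
        self.app_name = app_name
        # fetch_remote: optional callable returning a set of capability
        # strings fetched from the server, or None if unavailable.
        self.fetch_remote = fetch_remote

    def capabilities(self):
        if self.fetch_remote is not None:
            remote = self.fetch_remote()
            if remote is not None:
                return set(remote)
        return set(KNOWN_CAPABILITIES.get(self.app_name, set()))
```

The point of the indirection is that the test framework only ever asks the proxy; when an application starts publishing its own supported-operations metadata, only `fetch_remote` changes and the hand-maintained entry can be deleted.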