web-platform-tests / wpt

Test suites for Web platform specs — including WHATWG, W3C, and others
https://web-platform-tests.org/

[wdspec] add fixture to wait for several WebDriver BiDi events #44702

Open juliandescottes opened 6 months ago

juliandescottes commented 6 months ago

Spawned from https://bugzilla.mozilla.org/show_bug.cgi?id=1878842

We currently have a simple fixture, wait_for_event, to wait for a single BiDi event. But as soon as a test needs to wait for several events, or for an event with specific characteristics, we have to manually add listeners, collect events in an array, poll until the array reaches a certain size, and so on.

We could make our tests easier to write (and read) if we had a more flexible fixture to wait for events.

The initial proposal at https://bugzilla.mozilla.org/show_bug.cgi?id=1878842 was to extend the current fixture to allow waiting for N events instead of just 1, but we could design something more flexible. We could look at the one used for CDP tests for reference: https://source.chromium.org/chromium/chromium/src/+/main:third_party/blink/web_tests/http/tests/inspector-protocol/resources/inspector-protocol-test.js;l=493-546;drc=3288ac2808b18ca1a671bdb4235684767813abfb (example usage)

We don't have to follow exactly the same pattern as the CDP one, but basically this fixture should allow a test to wait for several events, add conditions using lambdas, specify how many events to wait for, etc.
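
For illustration, the manual pattern described above looks roughly like this. This is only a sketch: it assumes the AsyncPoll helper already used in wdspec tests, and a bidi_session.add_event_listener(event, callback) API that returns a function removing the listener.

events = []

async def on_event(method, data):
    events.append(data)

remove_listener = bidi_session.add_event_listener(
    "network.responseCompleted", on_event)

# ... trigger the requests under test ...

# Poll until the expected number of events has been collected.
wait = AsyncPoll(bidi_session, timeout=2)
await wait.until(lambda _: len(events) >= 5)

remove_listener()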

juliandescottes commented 6 months ago

API suggestions

With the idea of a single fixture, we can pass the expected events as a list (because the order will matter).

on_events = wait_for_events([expectation1, expectation2, ...])

Each individual item could either be a dict:

on_events = wait_for_events([
  { "event": "browsingContext.contextCreated", "predicate": lambda e: is_event_valid(e) },
  { "event": "network.responseCompleted", "count": 5 },
  { "event": "browsingContext.load" },
])

Or a list (slightly more compact):

on_events = wait_for_events([
  ["browsingContext.contextCreated", 1, lambda e: is_event_valid(e)],
  ["network.responseCompleted", 5],
  ["browsingContext.load", 1], # we could omit the "count" if it's 1 and there is no lambda?
])

Either option sounds fine to me, with a slight preference for the first one.

Return value

About the return value: the snippets above assume each "expectation" resolves to its own array of events.

on_events = wait_for_events([
  ["browsingContext.contextCreated", 1, lambda e: is_event_valid(e)],
  ["network.responseCompleted", 5],
  ["browsingContext.load", 1],
])

# do something

[context_created_events, response_completed_events, load_events] = await on_events
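
To make this contract concrete, here is a minimal sketch of how such a fixture could be implemented. It is only an illustration of the proposal, not an actual wpt fixture: it assumes the dict-based expectation form, the existing bidi_session and event_loop fixtures, and that bidi_session.add_event_listener(event, callback) takes an async callback(method, data) and returns a removal function, as the current wait_for_event fixture relies on.

import pytest


@pytest.fixture
def wait_for_events(bidi_session, event_loop):
    # Sketch only: resolves a single future with one list of events per
    # expectation, in the order the expectations were passed.
    cleanup_callbacks = []

    def wait_for_events(expectations):
        # Normalize expectations: "count" defaults to 1 and "predicate"
        # defaults to accepting every event.
        normalized = [
            {
                "event": e["event"],
                "count": e.get("count", 1),
                "predicate": e.get("predicate", lambda _: True),
            }
            for e in expectations
        ]

        # One result list per expectation. Expectations are fulfilled in
        # order: an incoming event only counts towards the first unfulfilled
        # expectation, and only if its name and predicate match.
        results = [[] for _ in normalized]
        current = 0
        future = event_loop.create_future()

        async def on_event(method, data):
            nonlocal current
            if future.done():
                return

            expectation = normalized[current]
            if method != expectation["event"] or not expectation["predicate"](data):
                return

            results[current].append(data)
            if len(results[current]) < expectation["count"]:
                return

            # Current expectation fulfilled; resolve once all are done.
            current += 1
            if current == len(normalized):
                future.set_result(results)

        # Listen once for every distinct event name used in the expectations.
        for name in {e["event"] for e in normalized}:
            cleanup_callbacks.append(
                bidi_session.add_event_listener(name, on_event))

        return future

    yield wait_for_events

    # Remove all listeners registered by tests using this fixture.
    for remove_listener in cleanup_callbacks:
        remove_listener()

With this strict ordering, an event that only matches a later expectation is simply ignored while an earlier one is still pending; whether that strictness should be optional is discussed further down in the thread.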

Other examples

Here are some other examples, in case we want to use this for simpler scenarios.

Wait for N events:

on_events = wait_for_events([
  ["network.responseCompleted", 5],
])

# do something

[response_completed_events] = await on_events

Wait for 1 event:

on_events = wait_for_events([
  ["network.responseCompleted", 1],
])

# do something

[[response_completed_event]] = await on_events

(although we will probably keep wait_for_event around anyway)

Checking that no event is emitted

Currently we also usually use an array of events when we want to check that no event is emitted. I would suggest replacing those with something simpler:

    on_event = wait_for_event(CONTEXT_CREATED_EVENT)
    # do something
    wait = AsyncPoll(bidi_session, timeout=0.5)
    with pytest.raises(TimeoutException):
        await wait_for_future_safe(on_event)

I would not try to make that part of the new fixture; it seems like it would be hard to follow.

OrKoN commented 6 months ago

> With the idea of a single fixture, we can pass the expected events as a list (because the order will matter).

By "the order will matter", do you mean that listeners will be consumed in order, or that the 5 responseCompleted events should come before the single load event?

juliandescottes commented 6 months ago

> With the idea of a single fixture, we can pass the expected events as a list (because the order will matter).
>
> By "the order will matter", do you mean that listeners will be consumed in order, or that the 5 responseCompleted events should come before the single load event?

The latter. I thought it would be useful to be able to enforce a strict sequence of events, and I also thought that's what the Node API did.

OrKoN commented 6 months ago

I think the Node API only enforces that listeners are processed in the order in which they are created. I think being more strict is for the better, but I wonder how one can express that a certain set of events may arrive in any (relative) order with this proposal?

juliandescottes commented 6 months ago

> I think the Node API only enforces that listeners are processed in the order in which they are created.

Ah, looks like I misunderstood the other API.

> I think being more strict is for the better, but I wonder how one can express that a certain set of events may arrive in any (relative) order with this proposal?

One way would be to call the fixture several times, e.g.:

on_responsecompleted_events = wait_for_events([["network.responseCompleted", 5]])
on_load_events = wait_for_events([["browsingContext.load", 2]])

Or we could make this "strict" behavior optional, and require a flag to be passed when calling the fixture to enable it?
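
As a rough illustration of that second option (the flag name below is made up, not an existing parameter):

# Hypothetical: relative order between expectations would only be
# enforced when the flag is passed.
on_events = wait_for_events(
    [["network.responseCompleted", 5], ["browsingContext.load", 2]],
    strict_order=True,
)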

OrKoN commented 6 months ago

What is the behavior if I listen for the same event twice:

on_responsecompleted_events = wait_for_events([["network.responseCompleted", 1]])
on_responsecompleted_events2 = wait_for_events([["network.responseCompleted", 2]])

and say only two events actually arrive?

juliandescottes commented 6 months ago

> What is the behavior if I listen for the same event twice:
>
> on_responsecompleted_events = wait_for_events([["network.responseCompleted", 1]])
> on_responsecompleted_events2 = wait_for_events([["network.responseCompleted", 2]])
>
> and say only two events actually arrive?

With what I had in mind, both of these 2 distinct calls would be resolved if 2 events arrive.
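
For example, assuming the two calls register independent listeners and following the return shape described earlier, two incoming network.responseCompleted events (event1 then event2) would resolve both futures:

[responses1] = await on_responsecompleted_events    # responses1 == [event1]
[responses2] = await on_responsecompleted_events2   # responses2 == [event1, event2]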

OrKoN commented 6 months ago

I see, the proposed helper API sounds good to me.