nmlorg / nh2

A simple HTTP/2 connection manager.

nh2.mock #1

Open nmlorg opened 3 weeks ago

nmlorg commented 3 weeks ago

Before building anything else, I want to formalize the mocking process.

I want to be able to reuse this in consumers (like ntelebot), which won't necessarily expose direct access to a Connection, etc.

I'm thinking of pulling: https://github.com/nmlorg/nh2/blob/350d703595f54a7104e75d1c27774dbaa0e4a069/nh2/connection.py#L21-L24 into a separate Connection.connect, then having the mocking system monkeypatch Connection.connect so that it instantiates a mock server that's actually listening on a random port, connects the Connection to it (through a real socket), and has the server pull expected (and to-be-replied) events from a global transcript.
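Very roughly (everything below is placeholder code, not the actual connection.py), the split plus the monkeypatch target might look like:

import socket


class MockServer:
    """Placeholder mock server: binds to a random local port.

    A real version would accept the connection, speak HTTP/2, and pull
    expected/to-be-replied events from the global transcript.
    """

    def __init__(self):
        self._listener = socket.socket()
        self._listener.bind(('127.0.0.1', 0))
        self._listener.listen(1)
        self.port = self._listener.getsockname()[1]


class Connection:
    """Sketch of the proposed split: __init__ delegates socket setup to connect()."""

    def __init__(self, host, port):
        self.host = host
        self.port = port
        self.sock = self.connect()

    def connect(self):
        return socket.create_connection((self.host, self.port))


def _mock_connect(self):
    # What nh2.mock would monkeypatch in place of Connection.connect.
    server = MockServer()
    return socket.create_connection(('127.0.0.1', server.port))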

The general pattern (modeled on pytest.raises) might look something like:

def test_xxx():
    with nh2.mock.expect(
        ⋮  # Describe the RemoteSettingsChanged for consumption, and whatever its response is supposed to be for emission.
    ):
        conn = nh2.connection.Connection('example.com', 443)

    with nh2.mock.expect(
        ⋮  # Consume the RequestReceived, emit whatever send_headers emits.
    ):
        conn.request('GET', '/test')

Tests would feed events into the mocking system by targeting the individual Connection instance: host, port, and Connection instance number (to be able to test reconnection/request migration in an upcoming ConnectionManager, etc.). This could almost look like:

with nh2.mock.expect(
    'example.com', 443, 1, h2.events.RemoteSettingsChanged.from_settings(…),
):
    conn = nh2.connection.Connection('example.com', 443)

with nh2.mock.expect(
    'example.com', 443, 1, h2.events.RequestReceived(stream_id=1, headers=[(':method', 'GET'), (':path', '/test'), (':authority', 'example.com'), (':scheme', 'https')]),
).respond(
    'example.com', 443, 1, ???,
):
    conn.request('GET', '/test')

but h2.events.RequestReceived's __init__ doesn't accept arguments like that (and there's no equivalent to h2.events.RemoteSettingsChanged.from_settings). (And frankly that syntax is really clunky.)
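(For reference, as far as I can tell h2's event classes take no constructor arguments at all; attributes get assigned after construction, so building one by hand would look more like the following, which is part of why writing events inline feels so clunky.)

import h2.events

# Attribute names per h2.events; this is plain h2, not an nh2 API.
event = h2.events.RequestReceived()
event.stream_id = 1
event.headers = [(':method', 'GET'), (':path', '/test'),
                 (':authority', 'example.com'), (':scheme', 'https')]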

Adapting the transcript format from foyerbot/ntelebot/metabot might look something like:

with nh2.mock.expect("""
example.com 443 1 <<< <RemoteSettingsChanged changed_settings:{ChangedSetting(setting=SettingCodes.HEADER_TABLE_SIZE, original_value=4096, new_value=4096), ChangedSetting(setting=SettingCodes.ENABLE_PUSH, original_value=1, new_value=1), ChangedSetting(setting=SettingCodes.INITIAL_WINDOW_SIZE, original_value=65535, new_value=65535), ChangedSetting(setting=SettingCodes.MAX_FRAME_SIZE, original_value=16384, new_value=16384), ChangedSetting(setting=SettingCodes.ENABLE_CONNECT_PROTOCOL, original_value=0, new_value=0), ChangedSetting(setting=SettingCodes.MAX_CONCURRENT_STREAMS, original_value=None, new_value=100), ChangedSetting(setting=SettingCodes.MAX_HEADER_LIST_SIZE, original_value=None, new_value=65536)}>
"""):
    conn = nh2.connection.Connection('example.com', 443)

with nh2.mock.expect("""
example.com 443 1 <<< <RequestReceived stream_id:1, headers:[(b':method', b'GET'), (b':path', b'/test'), (b':authority', b'example.com'), (b':scheme', b'https')]>
example.com 443 1 <<< <StreamEnded stream_id:1>
example.com 443 1 >>> ??? send_headers(???)
"""):
    conn.request('GET', '/test')

but that RemoteSettingsChanged line is abominable, and I'm not sure how to represent the response. (It doesn't look like responses are ever actually instantiated as events in h2; methods like H2Connection.send_headers instantiate, manipulate, then serialize hyperframe.frame.HeadersFrame, etc., so the only readily available syntax for describing a response is the literal send_headers method call.)
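(For what it's worth, on the mock server's side the <<< lines would roughly correspond to events coming out of a server-side H2Connection, and the >>> line to a literal send_headers call. A sketch, with the transcript matching hand-waved:)

import socket

import h2.config
import h2.connection
import h2.events


def serve_one_connection(client_sock: socket.socket) -> None:
    """What the mock server might do once a Connection has connected to it."""
    conn = h2.connection.H2Connection(config=h2.config.H2Configuration(client_side=False))
    conn.initiate_connection()
    client_sock.sendall(conn.data_to_send())

    while True:
        data = client_sock.recv(65535)
        if not data:
            return
        for event in conn.receive_data(data):
            # A real implementation would match repr(event) (or similar)
            # against the transcript's '<<<' lines here.
            if isinstance(event, h2.events.RequestReceived):
                # The '>>>' line: the only natural way to describe the
                # response is the send_headers call itself.
                conn.send_headers(event.stream_id, [(':status', '200')], end_stream=True)
        client_sock.sendall(conn.data_to_send())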

Eventually I'll want to expose just simple fully-formed requests and fully-formed responses, but testing things like: https://github.com/nmlorg/nh2/blob/350d703595f54a7104e75d1c27774dbaa0e4a069/nh2/connection.py#L115-L124 requires much lower-level control: https://github.com/nmlorg/nh2/blob/350d703595f54a7104e75d1c27774dbaa0e4a069/nh2/test_connection.py#L37-L87

nmlorg commented 3 weeks ago

For ntelebot, I created an autouse fixture that both enabled requests-mock and made it straightforward to set mock responses right on ntelebot.bot.Bot instances. To use this in consumers (like metabot), I explicitly imported that fixture from ntelebot.conftest into metabot.conftest to make it take effect.

I remember the concept of "entry points" (and the string "pytest11" is also very familiar), and I'm not sure why I didn't use that. (I don't appear to have documented anything 🙁.)

Assuming there was no good reason (that I'm just forgetting), I'm currently thinking I should create another autouse fixture, but this time register it as an entry point in pyproject.toml:

[project.entry-points.pytest11]
nh2_mock = 'nh2._pytest_plugin'

Then any project that installs nh2 would get the plugin installed as well (no need to import one conftest into another), and should therefore get the fixture. That fixture should be able to universally disable all network I/O for nh2, unless a test explicitly allows it by interacting with the nh2_mock fixture.
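A rough sketch of what nh2/_pytest_plugin.py might contain (the blocking mechanism and the controller object are placeholders, and it assumes the Connection.connect split from above plus host/port attributes):

import pytest

import nh2.connection


class _MockController:
    """Placeholder for the object tests interact with (expect/live/lowlevel)."""

    def __init__(self, monkeypatch):
        self._monkeypatch = monkeypatch


@pytest.fixture(autouse=True)
def nh2_mock(monkeypatch):
    def _blocked_connect(self):
        raise RuntimeError(
            f'Refusing to connect to {self.host}:{self.port}; use nh2_mock to set expectations.')

    # Until a test opts in (nh2_mock(...), nh2_mock.live, nh2_mock.lowlevel(...)),
    # every connection attempt fails loudly.
    monkeypatch.setattr(nh2.connection.Connection, 'connect', _blocked_connect)
    yield _MockController(monkeypatch)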

So the general pattern might look like:

def test_basic(nh2_mock):
    with nh2_mock("""
GET https://example.com/test -> 200 {"a": "b"}
"""):
        conn = nh2.connection.Connection('example.com', 443)
        assert conn.request('GET', '/test').wait().json() == {'a': 'b'}

def test_live_request(nh2_mock):
    with nh2_mock.live:
        conn = nh2.connection.Connection('httpbin.org', 443)
        assert conn.request('GET', …  # Make an actual request to httpbin.org.

def test_low_level(nh2_mock):
    with nh2_mock.lowlevel("""
example.com 443 1 <<< <RequestReceived stream_id:1, headers:[(b':method', b'GET'), (b':path', b'/test'), (b':authority', b'example.com'), (b':scheme', b'https')]>
example.com 443 1 <<< <StreamEnded stream_id:1>
example.com 443 1 >>> ??? send_headers(???)
"""):
        conn = nh2.connection.Connection('example.com', 443)
        assert conn.request('GET', '/test').wait().json() == {'a': 'b'}

To do:

  1. Come up with an idea for how to express responses in the lowlevel transcript format.
  2. Come up with some way to express stuff like:

        def _handler(request, unused_context):
            responses.append(json.loads(request.body.decode('ascii')))
            return {'ok': True, 'result': {}}
    
        self.bot.answer_inline_query.respond(json=_handler)
  3. Should I also support ntelebot-like nh2.connection.Connection('example.com', 443).respond('GET', '/test').with(json={'a': 'b'})? That would make item 2 a lot simpler (rough sketch below).
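For instance, if item 3 existed, item 2 might reduce to something like this (all hypothetical; with is a reserved word, so the chained call would need another name, here returning):

import json

import nh2.connection

responses = []


def _handler(request):
    # Record what the client sent, then return the JSON body to reply with.
    responses.append(json.loads(request.body.decode('ascii')))
    return {'ok': True, 'result': {}}


def test_callable_response(nh2_mock):
    conn = nh2.connection.Connection('example.com', 443)
    conn.respond('GET', '/test').returning(json=_handler)
    assert conn.request('GET', '/test').wait().json() == {'ok': True, 'result': {}}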