jlaine opened 8 months ago
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 100.00%. Comparing base (9bc1e43) to head (7fffb5a).
Initial testing suggests a 10x slowdown of packet protection compared to the current C code.
I haven't tried measuring it yet. My prior experience is that cryptography is pretty fast, and I'm wondering if the header protection part might be the slow thing. If it were, it might be possible to do the crypto with cryptography and then do the masking with a tiny bit of C code.
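For reference, the header protection step itself is small: for AES-based suites the mask is AES-ECB(hp_key, sample) and the "masking" is a handful of XORs (RFC 9001 §5.4). A minimal sketch of the split suggested above, using pyca/cryptography for the mask and plain Python for the XORs (function names here are illustrative, not aioquic's actual API):

```python
def header_protection_mask(hp_key: bytes, sample: bytes) -> bytes:
    """Compute the 5-byte header protection mask for AES-based suites."""
    # Imported lazily so the XOR part below has no dependencies.
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    encryptor = Cipher(algorithms.AES(hp_key), modes.ECB()).encryptor()
    return encryptor.update(sample)[:5]


def apply_header_protection(packet: bytearray, pn_offset: int, mask: bytes) -> None:
    """XOR the mask into the first byte and the packet number (encrypt side)."""
    # Read the packet number length before the first byte is masked.
    pn_length = (packet[0] & 0x03) + 1
    # Long header (high bit set): mask the low 4 bits; short header: low 5 bits.
    bits = 0x0F if packet[0] & 0x80 else 0x1F
    packet[0] ^= mask[0] & bits
    for i in range(pn_length):
        packet[pn_offset + i] ^= mask[1 + i]
```

If the mask computation turned out to be the bottleneck rather than the XOR loop, this split would of course not help.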
AFAICT it's unfortunately the actual crypto code that is slower.
Can you say how you are benchmarking? I'm guessing I'm doing something wrong, but I'm testing by using the example client to pull 20MiB from nginx, and while it is slower, it's not too bad: only around 1.074 times the non-cryptography version.
Regular aioquic (main, without your python-crypto changes):
2024-02-27 08:04:13,412 INFO client Response received for GET /foo.bin : 20971520 bytes in 0.4 s (428.916 Mbps)
2024-02-27 08:04:13,414 INFO quic [1da750bf1e625c3a] Connection close sent (code 0x100, reason )
python examples/http3_client.py https://localhost:8080/foo.bin 0.46s user 0.11s system 84% cpu 0.674 total
Your python-crypto branch:
2024-02-27 08:12:30,026 INFO client Response received for GET /foo.bin : 20971520 bytes in 0.4 s (389.185 Mbps)
2024-02-27 08:12:30,028 INFO quic [254cea9a35deb3ff] Connection close sent (code 0x100, reason )
python examples/http3_client.py https://localhost:8080/foo.bin 0.41s user 0.07s system 66% cpu 0.724 total
Here is what I use:

```python
import binascii
import time

from aioquic.quic.crypto import CryptoPair
from tests.test_crypto import (
    LONG_CLIENT_PACKET_NUMBER,
    LONG_CLIENT_PLAIN_HEADER,
    LONG_CLIENT_PLAIN_PAYLOAD,
    PROTOCOL_VERSION,
)

pair = CryptoPair()
pair.setup_initial(
    cid=binascii.unhexlify("8394c8f03e515708"),
    is_client=True,
    version=PROTOCOL_VERSION,
)

start = time.time()
for i in range(100000):
    pair.encrypt_packet(
        LONG_CLIENT_PLAIN_HEADER,
        LONG_CLIENT_PLAIN_PAYLOAD,
        LONG_CLIENT_PACKET_NUMBER,
    )
elapsed = time.time() - start
print(elapsed)
```
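As an aside, `timeit` (which uses a monotonic, high-resolution clock) tends to give more stable numbers than a manual `time.time()` loop. A sketch of the same measurement with a placeholder workload standing in for `pair.encrypt_packet`:

```python
import timeit


def protect_one_packet():
    # Placeholder workload; swap in pair.encrypt_packet(...) when
    # benchmarking aioquic itself.
    b"\x00" * 1280


iterations = 100_000
elapsed = timeit.timeit(protect_one_packet, number=iterations)
print(f"{elapsed:.3f}s total, {elapsed / iterations * 1e6:.2f} us per call")
```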
A typical run for me puts the C code at about 70ms and the Python code at around 800ms.
Notes:

- Remove the `build` directory when switching to the python-crypto branch, otherwise you will still have the C version in the form of a .so :)
- The `main` branch of cryptography may perform differently as more code has been moved to Rust.

Yeah, I discovered the need to purge while I was benchmarking. The timings I reported are after making sure I was testing in clean setups.
And of course both of our measurements could be accurate, as you're testing just the crypto whereas I'm testing the overall effect in typical use.
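The two results are in fact consistent: if packet protection accounts for a fraction f of end-to-end runtime and becomes 10x slower, the overall slowdown is 1 + 9f (an Amdahl's-law-style estimate; the numbers below use the figures from this thread):

```python
def end_to_end_slowdown(f: float, crypto_slowdown: float = 10.0) -> float:
    # Total time scales as (1 - f) for the unchanged part plus
    # crypto_slowdown * f for the slowed-down crypto part.
    return (1 - f) + crypto_slowdown * f


# Solve 1 + 9f = 1.074 for f, the crypto share of end-to-end runtime:
f = (1.074 - 1) / (10.0 - 1)
print(f"crypto fraction of runtime ~ {f:.1%}")
```

So a 10x slowdown in isolation and a 1.074x slowdown end to end both fit if crypto is well under 1% of total runtime, which is plausible when the client is mostly waiting on I/O.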
I wish cryptography 43 were already out, preferably featuring:
https://github.com/pyca/cryptography/pull/10437
As things currently stand, the build of the C code has broken again through no change of our own:
This simplifies the build process and avoids needing to ship a vendored OpenSSL.