Mbed-TLS / mbedtls

An open source, portable, easy to use, readable and flexible TLS library, and reference implementation of the PSA Cryptography API. Releases follow a varying cadence, typically 3 to 6 months apart.
https://www.trustedfirmware.org/projects/mbed-tls/

Upstream integration of MPS into SSL context #4332

Open · hanno-becker opened this issue 3 years ago

hanno-becker commented 3 years ago

Once all MPS components have been upstreamed, this issue tracks the integration of MPS into the SSL context:

Details: It's my understanding that while we aim to eventually use MPS throughout, including for TLS <= 1.2, we'll initially have a version of the library which supports TLS 1.3 on the basis of MPS and TLS <= 1.2 on the basis of the old messaging layer. In this latter situation, we need to find a way to have MPS and the legacy messaging layer coexist with minimal waste of resources. In a build which enables just TLS <= 1.2 or just TLS 1.3, this is simple, but in a dual-build, we may need to switch between legacy messaging layer and MPS based on whether TLS 1.3 is negotiated, and in this case, both messaging layers need to be present at compile-time.
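The three build shapes described above might be gated along the following lines. This is only a sketch: `MBEDTLS_SSL_PROTO_TLS1_2` and `MBEDTLS_SSL_PROTO_TLS1_3_EXPERIMENTAL` are real configuration options, but the `SSL_MSG_LAYER_*` helper macros are hypothetical.

```c
/* Sketch of the three build shapes. The SSL_MSG_LAYER_* macros are
 * hypothetical; only the MBEDTLS_SSL_PROTO_* options are real. */
#if defined(MBEDTLS_SSL_PROTO_TLS1_3_EXPERIMENTAL) && \
    defined(MBEDTLS_SSL_PROTO_TLS1_2)
/* Dual build: both messaging layers must be compiled in; which one is
 * used is decided at runtime once the protocol version is negotiated. */
#define SSL_MSG_LAYER_LEGACY
#define SSL_MSG_LAYER_MPS
#elif defined(MBEDTLS_SSL_PROTO_TLS1_3_EXPERIMENTAL)
/* TLS 1.3 only: MPS alone suffices. */
#define SSL_MSG_LAYER_MPS
#else
/* TLS <= 1.2 only: the legacy messaging layer alone suffices. */
#define SSL_MSG_LAYER_LEGACY
#endif
```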

There are multiple levels of sophistication here:

  1. In the interest of moving gradually, we may want to deliberately start with a version of the library where, if MBEDTLS_SSL_PROTO_TLS1_3_EXPERIMENTAL is enabled, both MPS and the legacy messaging layer are set up unconditionally, including their respective I/O buffers, even though this leads to very high RAM waste.
  2. Based on that, we can move to a version which can switch between legacy messaging layer and MPS dynamically, avoiding double-allocation of I/O buffers or even structures (by placing legacy messaging layer and MPS in a union, for example).

The purpose of this issue is solely to establish 1., accepting the RAM waste if MBEDTLS_SSL_PROTO_TLS1_3_EXPERIMENTAL is enabled, while solving 2. is left for a separate stage.

cc @mpg @yanesca I wonder if you have thoughts on this strategy.

mpg commented 3 years ago

Generally speaking, I'm a big fan of incremental plans. In this instance, having a step where dual-builds waste RAM (even a large amount, I assume 32kB by default) is acceptable IMO, as it allows people to start testing on non-constrained platforms (and highly-constrained platforms can use mono builds). So in addition to being convenient for us, it lets us deliver something people can use for testing earlier, which is always good.

yanesca commented 3 years ago

I agree, moving gradually is the only way we have a reasonable chance of succeeding. Also, I think that without a dual-build that wastes RAM as an intermediate step, we wouldn't save much time and would add a considerable amount of risk.

From the user's perspective, this makes dual-builds available much earlier, and wasting some RAM in that period is clearly better than not having dual-builds at all.