Closed — junyechen1996 closed this 10 months ago
I'm mainly trying to align with our implementations, but I'd welcome feedback on the texts here.
I think this is a regression. The goal of encoding the length of the query type, VDAF type, and DP type is to let us decode a variant we don't recognize. The goal was to separate parsing from protocol logic that is implementation-specific.
Thanks @junyechen1996 for pointing this out.
@wangshan you might remember we discussed this a few months ago.
@cjpatton this isn't a regression. What we had before was a DpConfig with no length prefix, so a decoder may not know how to skip it. What we write here uses

opaque dp_config<1..2^16-1>

which already encodes the variant's length in the opaque vector. When decoding, you either know how to decode dp_config, or you know how long it is by reading the first 2 bytes and can skip it.
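To illustrate the point about skippability, here is a minimal sketch of decoding an `opaque<1..2^16-1>` vector: the 2-byte length prefix lets a decoder advance past a variant it does not recognize instead of failing. The function names and the codepoint set are hypothetical, not from any implementation.

```python
import struct

def read_opaque_u16(buf: bytes, offset: int) -> tuple[bytes, int]:
    """Read an opaque<1..2^16-1> vector: 2-byte big-endian length, then body."""
    (length,) = struct.unpack_from("!H", buf, offset)
    body = buf[offset + 2 : offset + 2 + length]
    if len(body) != length:
        raise ValueError("truncated opaque vector")
    return body, offset + 2 + length

# Hypothetical set of DP mechanism codepoints this decoder understands.
KNOWN_DP_MECHANISMS = {0x00}

def decode_dp_config(buf: bytes, offset: int):
    # The length prefix lets us advance past the variant even when we
    # don't recognize its contents.
    body, offset = read_opaque_u16(buf, offset)
    mechanism = body[0]
    if mechanism in KNOWN_DP_MECHANISMS:
        return ("dp", mechanism), offset
    return ("unknown-dp", None), offset  # skipped, not a parse failure

# An unknown mechanism 0xFF with 3 payload bytes is skipped cleanly,
# leaving the cursor positioned at the data that follows.
buf = struct.pack("!H", 4) + bytes([0xFF, 1, 2, 3]) + b"rest"
parsed, off = decode_dp_config(buf, 0)
```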
The regression is that there is no length prefix for VdafConfig or QueryConfig. DpConfig is fine.
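For concreteness, one way to address this would be to wrap each sub-config in an opaque vector, as dp_config already does. This is only a hypothetical sketch in TLS presentation language; the field names are illustrative and not taken from the spec:

```
/* Hypothetical sketch: wrapping each sub-config in an opaque
   vector gives decoders a skippable length prefix. */
struct {
    opaque query_config<1..2^16-1>; /* encoded QueryConfig */
    opaque vdaf_config<1..2^16-1>;  /* encoded VdafConfig */
    opaque dp_config<1..2^16-1>;    /* encoded DpConfig (already prefixed) */
} TaskConfig;
```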
Also, remove the length hints from TLS variants that are mandatory for configuring a task.