RustCrypto / RSA

RSA implementation in pure Rust
Apache License 2.0

Rabin-Williams signatures #118

Open burdges opened 2 years ago

burdges commented 2 years ago

Rabin-Williams signatures are RSA-like signatures with extreme verifier speed optimizations, but enough overlap with RSA exists that maybe Rabin-Williams should be done inside this repository? Thoughts?

Afaik, we have no off-the-shelf Rabin-Williams implementations, but Dan Bernstein explains the optimization options in https://cr.yp.to/sigs/rwsota-20080131.pdf, especially sections 7-10. See also https://cr.yp.to/sigs/rwtight-20080201.pdf. Also, in 2008 Adam Langley implemented Rabin-Williams in C with 1024-bit public keys and compressed 64-byte signatures. It's based upon GMP, but verifiers still run roughly 22 times faster than Ed25519.
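To make the "extreme verifier speed" concrete, here is a toy sketch of where it comes from, loosely following Bernstein's tweaked-square-roots formulation: a signature on a hash h is a triple (e, f, s) with e ∈ {1, -1} and f ∈ {1, 2}, and verification checks e·f·s² ≡ h (mod n) — essentially a single modular squaring. Everything below (the u64 arithmetic, the tiny modulus, the function name) is illustrative, not a real implementation:

```rust
// Toy sketch, not the real scheme: Rabin-Williams verification in the
// tweaked-square-roots style. A signature on hash h is (e, f, s) with
// e ∈ {1, -1}, f ∈ {1, 2}, verified by checking e·f·s² ≡ h (mod n).
// u64 arithmetic and the tiny modulus are purely illustrative.
fn rw_verify(h: u64, e: i64, f: u64, s: u64, n: u64) -> bool {
    let s2 = (s % n) * (s % n) % n; // the single squaring the verifier pays for
    let mut t = (f * s2) % n;       // multiplier tweak f ∈ {1, 2}
    if e == -1 {
        t = (n - t) % n;            // sign tweak e ∈ {1, -1}
    }
    t == h % n
}

fn main() {
    // Toy modulus n = 11 * 7 = 77 (p ≡ 3 and q ≡ 7 mod 8, as RW requires).
    // s = 10 gives s² ≡ 23 (mod 77), so (1, 1, 10) signs h = 23, and the
    // tweaks cover 54 = 77 - 23 and 46 = 2 * 23 as well.
    assert!(rw_verify(23, 1, 1, 10, 77));
    assert!(rw_verify(54, -1, 1, 10, 77));
    assert!(rw_verify(46, 1, 2, 10, 77));
    assert!(!rw_verify(24, 1, 1, 10, 77));
    println!("ok");
}
```

The point of the tweaks is that for any h exactly one choice of (e, f) makes e·f·h a square mod n, so the signer can always find a root while the verifier's work stays one squaring plus trivial adjustments.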

It's possible a Rabin-Williams deployment might choose fixed-size keys, thereby avoiding dynamic allocation in verifiers, although maybe that's too extreme. I gather @tarcieri's crypto-bigint crate exists to make this possible, even for RSA.

I think that, after this crate or some fork adopts @tarcieri's crypto-bigint crate, we could reimplement Adam Langley's C code, reusing as much of the RSA crate as convenient. Thoughts?

cc @mmagician @drahnr

burdges commented 2 years ago

I suppose the dynamic scheme would keep the runtime key size, but separate it from a type-level bound on key size?

Is there any reason to preserve dynamic allocation support? I guess some people use this for crazy key sizes in GPG, but I'm not sure that ever really matters.

Afaik Rabin-Williams never requires dynamic allocation: if you're using it then you're likely after raw performance, so you'll probably even explore things like 1536-bit keys.

dignifiedquire commented 2 years ago

Is there any reason to preserve dynamic allocation support? I guess some people use this for crazy key sizes in GPG, but I'm not sure that ever really matters.

I wrote this originally for https://github.com/rpgp/rpgp/, which requires support for dynamic key sizes.

dignifiedquire commented 2 years ago

I suppose the dynamic scheme would keep the runtime key size, but separate it from a type-level bound on key size?

Probably, haven't really looked into it yet.

tarcieri commented 2 years ago

I wrote this originally for https://github.com/rpgp/rpgp/, which requires support for dynamic key sizes.

How dynamic? Is there a fixed list of supported key sizes, or are you trying to support any possibility?

dignifiedquire commented 2 years ago

I can probably get away with supporting everything up to 15360 bits

tarcieri commented 2 years ago

Yikes, really? I’m reminded of this issue: https://github.com/RustCrypto/crypto-bigint/issues/28#issuecomment-956078385

It would be simple enough to have an enum with e.g. Box<U2048> and Box<U4096> variants, then choosing the smallest one that’s larger than or equal to the key size. That’d only monomorphize the code twice, and the enum would always be the same size due to Box.
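A hedged sketch of that boxed-enum idea, using plain limb arrays as stand-ins for crypto-bigint's U2048/U4096 so it compiles on its own. The point is that Box keeps every variant pointer-sized, so the enum itself stays small no matter how large the largest supported key is:

```rust
// Sketch only: [u64; 32] / [u64; 64] stand in for Box<U2048> / Box<U4096>.
enum RsaModulus {
    N2048(Box<[u64; 32]>), // stand-in for Box<U2048>
    N4096(Box<[u64; 64]>), // stand-in for Box<U4096>
}

impl RsaModulus {
    /// Pick the smallest variant that fits a key of `bits` bits.
    fn for_bits(bits: usize) -> Option<Self> {
        match bits {
            0..=2048 => Some(RsaModulus::N2048(Box::new([0; 32]))),
            2049..=4096 => Some(RsaModulus::N4096(Box::new([0; 64]))),
            _ => None, // would fall back to a heap bignum backend
        }
    }
}

fn main() {
    // Only two monomorphizations; a 3072-bit key rounds up to the 4096 slot.
    assert!(matches!(RsaModulus::for_bits(3072), Some(RsaModulus::N4096(_))));
    // The enum is a tag plus a pointer, not the size of a 4096-bit key.
    assert!(std::mem::size_of::<RsaModulus>() <= 16);
    println!("ok");
}
```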

However, if you need to support every key size under the sun up to 15360 bits, it sounds like you’re probably better off with the current backend.

Perhaps through judicious use of traits it would be possible to support both, with crypto-bigint for heapless use cases?

cc @elichai

tarcieri commented 2 years ago

also cc @mikelodder7 who wrote unknown_order as a point solution for this sort of general problem

newpavlov commented 2 years ago

How often do you encounter "weird" (e.g. 4224-bit) key sizes in practice? At first glance it looks better to create a list of 8-16 popular key sizes and use the smallest fitting one for the "weird" sizes. API-wise I think such an approach would be simpler than trying to fit stack- and heap-allocating backends into the same interface. And while binary bloat is unfortunate, I think it will be a performance win in most cases.
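The round-up rule described above is simple to state in code. The table here is a hypothetical example, not a recommended list:

```rust
// Sketch of the suggestion: a short table of popular key sizes, with any
// "weird" size rounded up to the smallest entry that fits. The exact
// list is illustrative only.
const SUPPORTED_BITS: &[usize] = &[1024, 2048, 3072, 4096, 8192, 15360];

fn rounded_key_size(bits: usize) -> Option<usize> {
    // The table is sorted ascending, so the first fitting entry is minimal.
    SUPPORTED_BITS.iter().copied().find(|&b| b >= bits)
}

fn main() {
    assert_eq!(rounded_key_size(4224), Some(8192)); // "weird" size rounds up
    assert_eq!(rounded_key_size(2048), Some(2048)); // exact match
    assert_eq!(rounded_key_size(20000), None);      // beyond the table
    println!("ok");
}
```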

burdges commented 2 years ago

I'd think a large_key_sizes feature could then support https://github.com/rpgp/rpgp/, so everyone except gpg users gets crypto-bigint, and gpg gets num-bigint or whatever.

If you want gpg using crypto-bigint, then one could expose type information bounding the key size, so that only gpg monomorphizes for 15360 bits. I doubt people care much about performance when using gpg though.

burdges commented 2 years ago

If I understand @tarcieri correctly, you envision types and routines here gaining a <const LIMBS: usize> and working directly with &UInt<LIMBS> and &mut UInt<LIMBS>, yes?

Any actual enum would live inside https://github.com/rpgp/rpgp/, yes?

enum DynUI {
    UI2048(Box<U2048>),
    UI4096(Box<U4096>),
    UI8192(Box<U8192>),
    UI16384(Box<U16384>),   // Just for Yawning Angel
}

You care about legacy RSA working without an allocator, I guess?

If not, another option would be an entirely separate Rabin-Williams crate, loosely based on AGL's C code and this crate, but using crypto-bigint and dropping legacy cruft. It's plausible dynamic allocation is always fast enough for RSA, so any new protocol that cares about performance should use Rabin-Williams anyway.
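For concreteness, the const-generic API shape asked about above might look roughly like this. `Uint` here is a local stand-in type, not crypto-bigint's real API; the point is that each LIMBS value monomorphizes separately and no allocator is involved:

```rust
// Stand-in for a fixed-width big integer: little-endian limbs, no heap.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Uint<const LIMBS: usize>([u64; LIMBS]);

impl<const LIMBS: usize> Uint<LIMBS> {
    const BITS: usize = LIMBS * 64;

    fn from_u64(x: u64) -> Self {
        let mut limbs = [0u64; LIMBS];
        limbs[0] = x;
        Uint(limbs)
    }
}

// A routine generic over key size: only the LIMBS values a downstream
// crate actually uses (e.g. 32 for 2048-bit, 64 for 4096-bit) compile in.
fn key_bits<const LIMBS: usize>(_n: &Uint<LIMBS>) -> usize {
    Uint::<LIMBS>::BITS
}

fn main() {
    assert_eq!(key_bits(&Uint::<32>::from_u64(3)), 2048);
    assert_eq!(key_bits(&Uint::<64>::from_u64(3)), 4096);
    println!("ok");
}
```

Real routines (modular squaring, CRT, etc.) would take the same shape: free functions or methods parameterized by LIMBS, operating on borrowed fixed-size values.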

mmagician commented 2 years ago

It would be simple enough to have an enum with e.g. Box<U2048> and Box<U4096> variants

@tarcieri Isn't the whole point of crypto-bigint to avoid heap allocations? AFAIK Box is meant specifically for putting stuff on the heap. Let me know if I'm missing some key information here.

tarcieri commented 2 years ago

Yes, it's designed to support heapless operation, and hopefully the rsa crate can eventually support those targets.

However, I suggested Box for this OpenPGP use case due to the requirement to support absurdly large key sizes: if the enum were merely stack-allocated, then every key would take as much space as the largest one.