Dustin-Ray opened this issue 1 year ago
`sponge_squeeze` is serial by design and cannot be run in parallel.
The fix is straightforward and will mostly require some code reorganization. Take `pw_encrypt`
as an example:
```rust
fn pw_encrypt(&mut self, pw: &[u8], d: u64) {
    let z = get_random_bytes(512);
    let mut ke_ka = z.clone();
    ke_ka.append(&mut pw.to_owned());
    let ke_ka = kmac_xof(&mut ke_ka, &vec![], 1024, "S", d);
    let ke = &mut ke_ka[..64].to_vec();
    let ka = &mut ke_ka[64..].to_vec();
    self.digest = Some(kmac_xof(ka, &self.msg, 512, "SKA", d));
    let c = kmac_xof(ke, &vec![], (self.msg.len() * 8) as u64, "SKE", d);
    xor_bytes(&mut self.msg, &c);
    self.sym_nonce = Some(z);
}
```
Can you spot the problem? It's right here:
```rust
let c = kmac_xof(ke, &vec![], (self.msg.len() * 8) as u64, "SKE", d);
```
We create a key exactly the size of the message. For a small message like "Hello, Dave"
this is no problem. But what if we want to encrypt an entire hard disk? This approach immediately fails, because it requires holding in memory a single key exactly the size of the message, even though we eventually XOR the key into the original message in place.
What we need to do instead is take advantage of the serial nature of `sponge_squeeze`
and XOR each squeezed block into the message as it is produced, instead of materializing `c`
on the heap.
Here's an example of how we can carry this out:
```rust
fn main() {
    let mut message = vec![0xAA; 1024]; // example message to encrypt (0xAA for illustration)
    let block_size = 64; // assume the sponge rate is 64 bytes

    // Squeeze one block at a time and XOR it into the corresponding window
    // of the message, so a full-size keystream is never allocated.
    for i in 0..(message.len() / block_size) {
        let block = sponge_squeeze(); // yields the next keystream block
        let start = i * block_size;
        let end = start + block_size;
        xor_in_place(&mut message[start..end], &block);
    }
}
```
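For completeness, the `xor_in_place` helper used above could look like the sketch below. The name is taken from the example rather than the crate; this is the kind of primitive that `byte_utils::{sliding_window_xor}` would build on:

```rust
// Hypothetical helper assumed by the example above (not an existing crate
// function): XOR `src` into `dst` element-wise, in place.
fn xor_in_place(dst: &mut [u8], src: &[u8]) {
    assert_eq!(dst.len(), src.len(), "slices must be the same length");
    for (d, s) in dst.iter_mut().zip(src) {
        *d ^= *s;
    }
}
```

Because XOR is its own inverse, applying the same keystream block twice restores the original window, which is what lets encrypt and decrypt share this path.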
This code could happen inside any function we are modifying, but the correct solution ends up in:
This would probably be a different issue, but we need to keep track of the operation and be able to revert it in case it is interrupted or fails. This would involve:
I am trying to understand where `sponge::{sponge_squeeze}`
requires an update to add this feature.
I read `sponge::{sponge_squeeze}`
and the other functions relevant to creating `byte_utils::{sliding_window_xor}`.
> I am trying to understand where `sponge::{sponge_squeeze}` requires an update to add this feature.

Great question, it's happening right here:
```rust
pub(crate) fn sponge_squeeze<S: BitLength + ?Sized>(
    s: &mut [u64; 25],
    bit_length: &S,
    rate: Rate,
) -> Vec<u8> {
    let mut out: Vec<u8> = Vec::new(); // FIPS 202 Algorithm 8 Step 8
    let block_size: usize = (rate.value / 64) as usize;
    while out.len() * 8 < bit_length.bit_length() as usize {
        out.append(&mut state_to_byte_array(&s[0..block_size]));
        keccakf_1600(s); // FIPS 202 Algorithm 8 Step 10
    }
    out.truncate((bit_length.bit_length() / 8) as usize);
    out
}
```
What's happening here is that we are passed a bit-length value which determines how many bits to output from the sponge. Instead, we need to update this function a bit.
It needs to be called `finalize`,
and instead of generating the output all at once, it should produce only one block of output each time it is called. To do this correctly, it needs to save its state between calls, just as it currently does between loop iterations.
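A minimal sketch of that stateful shape might look like this. All names here are illustrative rather than the crate's actual API, and `permute` is a deterministic stand-in for `keccakf_1600` just so the sketch is self-contained:

```rust
// Stand-in for keccakf_1600: any state permutation works for the sketch.
fn permute(state: &mut [u64; 25]) {
    for (i, w) in state.iter_mut().enumerate() {
        *w = w.wrapping_mul(6364136223846793005).wrapping_add(i as u64 + 1);
    }
}

// Hypothetical stateful squeezer: keeps the sponge state across calls and
// yields one rate-sized block per call instead of allocating all output.
struct SpongeSqueezer {
    state: [u64; 25],
    rate_bytes: usize,
}

impl SpongeSqueezer {
    fn new(state: [u64; 25], rate_bytes: usize) -> Self {
        Self { state, rate_bytes }
    }

    // Emit the next keystream block, then advance the state — one
    // iteration of the current squeeze loop per call.
    fn squeeze_block(&mut self) -> Vec<u8> {
        let words = self.rate_bytes / 8;
        let mut out = Vec::with_capacity(self.rate_bytes);
        for w in &self.state[..words] {
            out.extend_from_slice(&w.to_le_bytes());
        }
        permute(&mut self.state);
        out
    }
}
```

The caller drives the loop, so total output length is decided by however many times `squeeze_block` is invoked, rather than by a `bit_length` argument.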
Fixing this issue first requires fixing #18, but that issue is a little easier and simpler than this one.
The current design constructs a "keystream" exactly the length of the message, then XORs the two in place. This means the message's footprint on the heap is doubled to accommodate the keystream. We should update this so that each time `sponge_squeeze` is called, encrypt/decrypt acts as a sliding window over the message. This will ensure better performance and constant memory usage on large inputs.
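The sliding-window loop could be sketched like this. `next_block` is a placeholder closure standing in for whatever stateful squeezer API #18 and this issue settle on:

```rust
// Sketch of the constant-memory encrypt/decrypt loop: squeeze one block at
// a time and XOR it into the current window of the message, in place.
// `next_block` is a hypothetical stand-in for the stateful squeezer.
fn stream_xor(msg: &mut [u8], mut next_block: impl FnMut() -> Vec<u8>) {
    let mut offset = 0;
    while offset < msg.len() {
        let block = next_block();
        // The final window may be shorter than one block; use only what fits.
        let n = block.len().min(msg.len() - offset);
        for i in 0..n {
            msg[offset + i] ^= block[i];
        }
        offset += n;
    }
}
```

Since XOR is self-inverse, running the same loop again with an identically seeded squeezer decrypts, so one function serves both directions while only ever holding one block of keystream in memory.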