jdwhite48 / groth-sahai-rs

A Rust library for the Groth-Sahai non-interactive witness-indistinguishable and zero-knowledge proof system

More efficient matrix multiplication #12

Open jdwhite48 opened 2 years ago

jdwhite48 commented 2 years ago

Prerequisite: Issue #10 task 3 (so the code doesn't become a disaster to manage)

Currently, all matrices perform multiplication using a slight variant of the naïve O(n*m) algorithm. However, the matrices used by Groth-Sahai are necessarily dense and unstructured, since nearly every entry includes randomized elements as terms, which may make optimizations more difficult.
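For concreteness, the schoolbook algorithm in question can be sketched over any scalar type whose `Add` and `Mul` are closed, using only std traits (the `matmul` helper below is an illustrative stand-in, not this crate's actual code). Its general cost is O(n*m*p) for an (n x m) by (m x p) product, which reduces to O(n*m) when the inner dimension is a small constant, as it is for GS's dimension-2 commitments:

```rust
use std::ops::{Add, Mul};

// Hypothetical dense row-major matrices; the crate's real types differ.
fn matmul<T>(a: &[Vec<T>], b: &[Vec<T>]) -> Vec<Vec<T>>
where
    T: Add<Output = T> + Mul<Output = T> + Copy,
{
    let (n, m, p) = (a.len(), b.len(), b[0].len());
    assert!(a.iter().all(|row| row.len() == m), "inner dimensions must agree");
    let mut out = Vec::with_capacity(n);
    for i in 0..n {
        let mut row = Vec::with_capacity(p);
        for j in 0..p {
            // Schoolbook inner product: O(m) per entry, O(n*m*p) total.
            let mut acc = a[i][0] * b[0][j];
            for k in 1..m {
                acc = acc + a[i][k] * b[k][j];
            }
            row.push(acc);
        }
        out.push(row);
    }
    out
}
```

The bounds mirror nalgebra's requirement that arithmetic be closed over one element type, so the same shape works for `Fr` or any other field wrapper that implements the std operator traits.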

jdwhite48 commented 2 years ago

I could not find an existing, well-supported Rust (or Rust-compatible) linear algebra library that supports group actions (i.e., multiplication of a group element by a scalar from a separate field) with custom operations over matrices, which GS needs for both proving and verifying. However, I have discovered that the nalgebra library provides efficient-enough data structures and algorithms that are compatible with Arkworks:

  1. Compared to other crates, it's a relatively well-used linear algebra library among Rust developers.
  2. When I had initially explored the space, I only found libraries focused on numeric primitives like i32, u32, and f32; however, I was pleasantly surprised on looking deeper that nalgebra supports matrix operations over any custom field, including Arkworks' Fr and Fqk. It only requires that Add, AddAssign, Mul, and MulAssign are all defined and closed over the same Sized element type at compile time.
  3. nalgebra provides me with static (and sane...) typing for vectors and matrices:
     a. This saves me the headache and boilerplate of defining bespoke data encodings for my matrices, as I was doing before. Hopefully it'll make data formats a bit more interoperable across libraries too.
     b. For large, sparse matrices of field elements (e.g., statements' pairing exponents Γ ∈ Fr^{m x n} for all but the most complex statements), nalgebra::sparse::CsMatrix will give a slight efficiency boost to prover and verifier computation plus storage bandwidth. I suspect that field arithmetic will very rarely be the bottleneck, however.
  4. While nalgebra does NOT support scalar multiplication of G_1, G_2 elements by scalar field elements Fr (so I must still rely on a custom scalar matrix multiplication algorithm for that), in the context of GS these matrices will almost exclusively be:
     a. dense (by inclusion of the random matrices R ∈ Fr^{m x 2} and S ∈ Fr^{n x 2}), and
     b. small (group matrices' and computed scalar matrices' row and/or column spaces will always be 1 or 2 dimensions, eventually reducing down to a (2 x 2) matrix in G_T for the verifier and a select few (2 x 1) G_1, G_2 elements for the proof).
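Since nalgebra cannot express that mixed scalar-by-group product, a small custom routine would remain regardless. Below is a minimal std-only sketch of the idea, with a toy additive group `ToyG1` standing in for a curve group and plain `u64` arithmetic standing in for `Fr` (both are illustrative assumptions, not Arkworks types):

```rust
use std::ops::{Add, Mul};

const P: u64 = 1_000_003; // toy modulus, stand-in for the real field order

/// Toy additive group standing in for a curve group like G1.
#[derive(Clone, Copy, Debug, PartialEq)]
struct ToyG1(u64);

impl Add for ToyG1 {
    type Output = ToyG1;
    fn add(self, rhs: ToyG1) -> ToyG1 {
        ToyG1((self.0 + rhs.0) % P)
    }
}

/// Scalar action Fr x G1 -> G1 (here Fr is just u64 mod P).
impl Mul<ToyG1> for u64 {
    type Output = ToyG1;
    fn mul(self, rhs: ToyG1) -> ToyG1 {
        ToyG1((self % P) * rhs.0 % P)
    }
}

/// Multiply an (m x k) scalar matrix by a (k x n) group-element matrix.
/// Naive cost is O(m*k*n), which stays small since k, n <= 2 in GS.
fn scalar_group_matmul(s: &[Vec<u64>], g: &[Vec<ToyG1>]) -> Vec<Vec<ToyG1>> {
    let (m, k, n) = (s.len(), g.len(), g[0].len());
    (0..m)
        .map(|i| {
            (0..n)
                .map(|j| {
                    let mut acc = s[i][0] * g[0][j];
                    for t in 1..k {
                        acc = acc + s[i][t] * g[t][j];
                    }
                    acc
                })
                .collect()
        })
        .collect()
}
```

With real curve points, `+` becomes group addition and `*` scalar multiplication; the point of the sketch is only that the mixed `Mul<ToyG1> for u64` impl is what nalgebra's closed-operator bounds rule out, so this loop has to live outside it.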

TL;DR: this addresses my worry that the naive O(n*m) algorithm I had already implemented would become an unnecessary bottleneck if the number of variables grows large. This really only affects field operations, but those are exactly the ones where better-than-naive multiplication would matter, and the nalgebra devs are assuredly better at efficient linear algebra implementations than I am.