Defining an element of the ring of circulants like this:
f = RingOfCirculantsF2([1,1])
seems a bit strange to me for the field F2. What do you think? I could change the init function, but this would slow things down.
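For context, here is a minimal sketch of why `[1,1]` looks odd (assuming the list stores the exponents of the shift operator λ and the element is a sum of cyclic shifts over F2; `to_matrix` is a hypothetical helper, not the package code):

```python
import numpy as np

def to_matrix(coefficients, L):
    """Hypothetical helper: binary circulant matrix of sum_c lambda^c over F2."""
    mat = np.zeros((L, L), dtype=int)
    for c in coefficients:
        mat += np.roll(np.identity(L, dtype=int), c, axis=1)
    return mat % 2

# Over F2 the two copies of lambda^1 cancel, so [1, 1] is really the zero element.
print(to_matrix([1, 1], L=3))  # zero matrix
print(to_matrix([], L=3))      # also the zero matrix
```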
I still like using the set operation for addition, because λ(1,2) + λ(1) = λ(2) seems more intuitive to me (and it is more memory efficient).
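As a quick illustration of that behaviour (a sketch with hypothetical names, assuming addition is the symmetric difference of the coefficient sets):

```python
def ring_add(a, b):
    """Mod-2 addition of coefficient sets via symmetric difference."""
    return set(a) ^ set(b)

print(ring_add({1, 2}, {1}))  # {2}, i.e. lambda(1,2) + lambda(1) = lambda(2)
```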
In the previous version of the protograph class, the input for creating a ring element was a set, so duplicate elements could not be entered.
My concern is that it doesn't match the behavior of the matrix representation of these ring elements. This is why I initially just defined addition as a concatenation of the lists. Is concatenation significantly slower than the set operation?
An alternative would be to implement some sort of simplification routine that counts whether there are an odd or even number of repetitions of the same coefficient and deletes accordingly.
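One way such a routine could look (a sketch, not the package's implementation): keep a coefficient only if it appears an odd number of times, which is exactly reduction mod 2:

```python
from collections import Counter

def simplify_mod2(coefficients):
    """Drop coefficients that occur an even number of times."""
    counts = Counter(coefficients)
    return [c for c, n in counts.items() if n % 2 == 1]

print(simplify_mod2([1, 1]))        # []
print(simplify_mod2([1, 1, 1, 2]))  # [1, 2]
```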
I would also add that it is odd to define an element like
RingOfCirculantsF2([1,1])
However, it's conceivable that such an object might arise as part of some numpy subroutine we might want to apply. E.g., numpy.kron may end up in situations where such a ring element arises.
OK, we could switch back to appending lists. Can you make the changes?
numpy.kron uses the multiplication defined here, so that situation does not occur. (I thought about this a bit; there are tests in the QEC package.)
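A rough illustration of that point (a hypothetical minimal element class, not the package's RingOfCirculantsF2): numpy.kron only multiplies entries, it never adds them, so each entry of the result comes straight from the ring's own `__mul__` and kron itself introduces no duplicate coefficients:

```python
import numpy as np

class Elem:
    """Hypothetical stand-in for a ring element: a set of shift exponents."""
    def __init__(self, coeffs):
        self.coeffs = set(coeffs)
    def __mul__(self, other):
        # exponents add; colliding terms cancel mod 2 via symmetric difference
        out = set()
        for a in self.coeffs:
            for b in other.coeffs:
                out ^= {a + b}
        return Elem(out)
    def __repr__(self):
        return f"lambda{sorted(self.coeffs)}"

A = np.array([[Elem([0]), Elem([1])]], dtype=object)
B = np.array([[Elem([2])]], dtype=object)
print(np.kron(A, B))  # [[lambda[2] lambda[3]]] -- entries stay simplified
```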
The speed difference was not between appending lists and the set operation, but in multiplication: instead of multiplying [1, 1, 1, 1, 1] x [2], it should multiply [1] x [2].
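To make the cost concrete (a sketch of naive coefficient-list multiplication, not the package code): the number of pairwise products is len(a) * len(b), so an unsimplified left operand does five times the work for the same mod-2 result:

```python
def ring_mul(a, b):
    """Naive product of coefficient lists: exponents of the shifts add."""
    return [x + y for x in a for y in b]

print(ring_mul([1, 1, 1, 1, 1], [2]))  # [3, 3, 3, 3, 3] -- 5 pairwise products
print(ring_mul([1], [2]))              # [3]             -- 1 pairwise product
# Mod 2, both reduce to the same element, lambda^3.
```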
See test_ring_of_circulants_f2.py on the dev branch. The following test is failing: