Closed: yuewuo closed this pull request 1 week ago
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).
View this failed invocation of the CLA check for more information.
For the most up-to-date status, view the checks section at the bottom of the pull request.
A basic performance benchmark using the sinter jobs of `getting_started.ipynb` (the `sinter.collect()` block of `repetition_code:memory`).
By the way, I had some thoughts on the slowdown of `fusion_blossom` compared to `pymatching`: it is probably coming from duplicate edges. The current sinter implementation for `fusion_blossom` does not deduplicate and merge the edges the way `pymatching` does internally when `add_edge` is called. I think there could be some functionality outside of `pymatching` that merges the edges. Do you think this is reasonable?
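Such a merge step could live entirely outside the decoder. A minimal sketch of what I have in mind, assuming the edges are available as `(u, v, p)` tuples of detector indices and error probabilities (the combination rule `p = p1(1-p2) + p2(1-p1)` is the standard "independent errors" merge that pymatching applies to parallel edges):

```python
import math


def merge_parallel_edges(edges):
    """Merge duplicate edges between the same detector pair.

    `edges` is a list of (u, v, p) tuples, where p is the error
    probability attached to that edge. Parallel edges are combined
    with p = p1*(1-p2) + p2*(1-p1), i.e. the probability that an
    odd number of the two independent error mechanisms fires.
    Returns a dict mapping each unique detector pair to its
    log-likelihood weight log((1-p)/p).
    """
    merged = {}
    for u, v, p in edges:
        key = (min(u, v), max(u, v))  # treat edges as undirected
        if key in merged:
            q = merged[key]
            merged[key] = p * (1 - q) + q * (1 - p)
        else:
            merged[key] = p
    return {key: math.log((1 - p) / p) for key, p in merged.items()}


# Hypothetical example: (0, 1) appears twice and gets merged.
edges = [(0, 1, 0.01), (1, 0, 0.02), (1, 2, 0.05)]
weights = merge_parallel_edges(edges)
```

The merged weights could then be rescaled to integers before being handed to `fusion_blossom`, exactly as the current glue code already does for single edges.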
Having duplicate edges could certainly slow things down. Stim already dedupes most errors so there may not be a lot of dupes to start with.
Thanks for the insight! Indeed, I just printed them out and there are no duplicate edges. I've also tried reducing the integer weights of the edges, but the decoding time remains almost unchanged. The most probable reason would then be iterating over the ndarray in Python instead of in Rust. I'll investigate this to get a better understanding.
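For anyone who wants to reproduce the duplicate-edge check, here is the kind of quick sanity check I mean — a sketch, assuming the edges are available as `(u, v)` detector-index pairs (the real sinter glue code extracts them from the stim detector error model):

```python
from collections import Counter


def find_duplicate_edges(edges):
    """Return the detector pairs that appear more than once.

    `edges` is a list of (u, v) detector-index pairs as handed to
    the decoder; edges are treated as undirected, so (1, 0) counts
    as a duplicate of (0, 1).
    """
    counts = Counter((min(u, v), max(u, v)) for u, v in edges)
    return [pair for pair, n in counts.items() if n > 1]


# Hypothetical example: (0, 1) and (1, 0) are the same edge.
edges = [(0, 1), (1, 2), (1, 0)]
dupes = find_duplicate_edges(edges)
```

An empty result confirms that stim's detector error model has already deduplicated the error mechanisms, as noted above.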
Sorry, I made a silly mistake: I had installed the debug build of the `fusion_blossom` package, which runs for 35 s. Once I switched to a release build, it drops to 6.3 s, which makes much more sense. The same applies to the `mwpf` package. Here is the new result:
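For reference, the debug-vs-release difference comes down to how the Rust extension was compiled. A hedged sketch of the two install paths, assuming a maturin-based build (which is common for Rust/Python packages like these; the exact commands for this repo may differ):

```shell
# Building a local checkout with maturin defaults to a debug build,
# which can be several times slower than release:
maturin develop             # debug build (slow)
maturin develop --release   # optimized release build

# Installing from PyPI gives you a prebuilt release wheel:
pip install fusion-blossom mwpf
```

Benchmarking against a debug build is an easy trap, so it may be worth double-checking the build profile before comparing decoder timings.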
I moved this to https://github.com/quantumlib/Stim/pull/837 so I could resolve the merge conflicts.
This is a simple adaptation of the `mwpf` Python package to sinter. I just copied `_decoding_fusion.py` and modified it accordingly to let the decoder consider hyperedges. Tested using `getting_started.ipynb`, and it runs decoding without a problem. I haven't added any test cases for this, since I didn't find a test case for the `fusion_blossom` decoder either. Please let me know where I should put a basic test case for the new decoders!