Closed noajshu closed 4 months ago
Looks like you need to run the `regen_docs.sh` script in the `dev/` directory. It expects you to build and install the stim dev wheel before doing so (see the developer docs; this is easy to do with bazel).
Thanks, ran this and addressed all the comments.
Note: the bug in `doctest_proper` is intended behavior in `doctest`, where you have to say `<BLANKLINE>` instead of having blank output lines. But I prefer the modified print statements.
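For reference, the `<BLANKLINE>` convention mentioned above can be checked directly with the standard-library `doctest` module. This is a self-contained illustration, not code from the PR:

```python
import doctest

# A doctest whose expected output contains an empty line: doctest requires
# the sentinel <BLANKLINE> there, because a truly blank line would end the
# expected-output block.
docstring = '''
>>> print("a\\n\\nb")
a
<BLANKLINE>
b
'''

test = doctest.DocTestParser().get_doctest(docstring, {}, "blankline_demo", None, 0)
runner = doctest.DocTestRunner()
result = runner.run(test)
print(result.failed)  # 0: the <BLANKLINE> line matched the empty output line
```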
Hi @Strilanc , Oscar had an idea that I like for the interface and I was wondering what you thought.
The idea is to split into two python functions:
minimum_cardinality_undetectable_logical_error_problem_as_maxsat_string(format)
approximately_most_probable_undetectable_logical_error_problem_as_maxsat_string(quantization_factor, format)
The idea would be to ignore probabilities entirely in the first method and to do a "best effort" approximation in the second. "Best effort" means negation complementation and rounding to zero where zero is the closest available integer.
WDYT
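For concreteness, the "best effort" quantization described above might look something like the sketch below. The function name and the exact log-likelihood weight formula are illustrative assumptions, not the PR's actual implementation:

```python
import math

def quantized_weight(p: float, quantization_factor: float) -> int:
    # Illustrative only: convert an error probability into a non-negative
    # integer MaxSAT weight by scaling the log-likelihood ratio and rounding
    # to the nearest available integer (0 when 0 is closest).
    if not 0 < p < 0.5:
        raise ValueError("expected 0 < p < 0.5")
    w = quantization_factor * math.log((1 - p) / p)
    return max(0, round(w))

print(quantized_weight(0.1, 10))   # round(10 * ln(9)) = 22
print(quantized_weight(0.49, 1))   # rounds to 0: the weight is lost entirely
```

The second call shows why this is only "best effort": with a small quantization factor, near-50% error mechanisms quantize to weight zero and no longer influence the objective.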
I think these are close enough that they might as well be the same method with a flag of some kind. Also those names are too long. If I was naming it I'd say `shortest_logical_error_as_...` and `most_likely_logical_error_as_...`.
> I think these are close enough that they might as well be the same method with a flag of some kind. Also those names are too long. If I was naming it I'd say `shortest_logical_error_as_...` and `most_likely_logical_error_as_...`.
I am happy to make either 2 methods with somewhat shorter names, as you suggest, or to keep it 1 method with a flag. Which solution do you prefer?
Go with the two name solution.
@Strilanc despite running this:
bazel build stim_dev_wheel
pip uninstall -y stim
pip install bazel-bin/stim-0.0.dev0-py3-none-any.whl
pytest src/stim/circuit/ && ./dev/regen_docs.sh
I don't see any updates to the docs. Yet the CI job `test_generated_docs_are_fresh` does not pass. Do you have any idea what could be going wrong?
The failure message in the CI log is:
So it looks like your doctest code is invalid. The fresh-check also verifies that the .pyi file runs as if it were Python. You just need to make it exactly valid (you need '...' on every line, including in multiline strings).
Thanks for the advice! The problem with the docstring was simply that I used:
>>> circuit = stim.Circuit("""
... X_ERROR(0.1) 0
... M 0
... OBSERVABLE_INCLUDE(0) rec[-1]
... X_ERROR(0.4) 0
... M 0
... DETECTOR rec[-1] rec[-2]
... """)
which resulted in a """ inside a multiline string that itself used """. I replaced these with ''' and also added a check to `clean_doc_string` that raises an error on any appearance of """, to help future fledgling Stim initiates like myself.
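The guard described above could be as simple as the following sketch. This is a hypothetical stand-in, not the actual `clean_doc_string` change from the PR:

```python
def check_no_triple_double_quotes(doc: str) -> None:
    # Hypothetical sketch of the guard: the generated .pyi file wraps each
    # docstring in triple double quotes, so a literal """ inside a docstring
    # would terminate the string early and make the file invalid Python.
    if '"""' in doc:
        raise ValueError("docstring contains \"\"\"; use ''' instead")

check_no_triple_double_quotes("this docstring uses ''' and is fine")  # no error
```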
BTW -- it seems like some of the code in a recent commit to main was not formatted with this command you shared:
find src | grep "\.\(cc\|h\|inl\)$" | grep -Pv "crumble_data.cc|gate_data_3d_texture_data.cc" | xargs clang-format -i
you might consider running it on your branch.
Because CI doesn't enforce that the code is formatted, it often drifts a bit over time.
Cross-reference: This PR came up as an example to answer this question on QCSE.
Summary
Adds a generator that outputs a .wcnf file string in WDIMACS format, along with a few tests.
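For readers unfamiliar with WDIMACS: a small example of the classic `p wcnf` dialect is shown below. This is only an illustration of the file layout; the exact dialect the PR's generator emits is not reproduced here:

```python
# A minimal weighted-CNF instance in the classic WDIMACS dialect.
# Header: "p wcnf <num_vars> <num_clauses> <top>", where a clause whose
# weight equals <top> is hard (must be satisfied) and lower-weight clauses
# are soft (violating one costs its weight). Each clause ends with 0.
wcnf = "\n".join([
    "p wcnf 2 3 100",   # 2 variables, 3 clauses, top weight 100
    "100 1 2 0",        # hard clause: x1 OR x2
    "3 -1 0",           # soft clause, weight 3: NOT x1
    "5 -2 0",           # soft clause, weight 5: NOT x2
])

header = wcnf.splitlines()[0].split()
print(header)  # ['p', 'wcnf', '2', '3', '100']
```

A MaxSAT solver minimizes the total weight of violated soft clauses; here the optimum sets exactly one of x1, x2 true, paying the cheaper weight of 3.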
Demo for the surface code
Here is a demo:
Step 1: Generate the WCNF file
Step 2: Fetch a solver from the 2023 maxSAT competition
Step 3: Run the solver on the produced file
Verifying distances of several quasicyclic codes
I ran this on some circuits from this paper, with help from @oscarhiggott to generate the circuits. The log file is attached below: n72_n90_n103_quasicyclic_codes_have_expected_distances_6_8_8.txt. So far, it has proven that the n=72, n=90, and n=103 circuits have the expected distances reported in the paper. I am still running solvers on the n=144 and n=288 cases; my computer got rebooted, so I had to start over. I am cautiously optimistic that the solvers will finish within a few weeks.