
We're trying to simulate erasure errors on the surface code using Stim. The known threshold for erasure errors on the data qubits (applied after initialization) is 50%.

We followed this post: How do I perform an erasure error in stim?

However, we end up with a threshold of ~20%. This matches the value reported in this post: Threshold value when simulating erasures with stim

That post did not get an answer, so we are bringing it up again here.


This is our circuit for the unrotated $d=3$ surface code:

```
RX 0 2 4 10 12 14 20 22 24 6 8 16 18 1 3 5 7 9 11 13 15 17 19 21 23
TICK
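# Heralded erasure gadget, repeated below once per data qubit: each of the four E(0.025)
# lines fires independently with probability 2.5%, flipping ancilla 999 while applying
# I, X, Y, or Z (respectively) to the data qubit. Measuring 999 and declaring a DETECTOR
# on it heralds the event (total herald rate per data qubit ≈ 1 - 0.975^4 ≈ 9.6%).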
R 999
E(0.025) X999
E(0.025) X0 X999
E(0.025) Y0 X999
E(0.025) Z0 X999
M 999
DETECTOR rec[-1]
R 999
E(0.025) X999
E(0.025) X2 X999
E(0.025) Y2 X999
E(0.025) Z2 X999
M 999
DETECTOR rec[-1]
R 999
E(0.025) X999
E(0.025) X4 X999
E(0.025) Y4 X999
E(0.025) Z4 X999
M 999
DETECTOR rec[-1]
R 999
E(0.025) X999
E(0.025) X10 X999
E(0.025) Y10 X999
E(0.025) Z10 X999
M 999
DETECTOR rec[-1]
R 999
E(0.025) X999
E(0.025) X12 X999
E(0.025) Y12 X999
E(0.025) Z12 X999
M 999
DETECTOR rec[-1]
R 999
E(0.025) X999
E(0.025) X14 X999
E(0.025) Y14 X999
E(0.025) Z14 X999
M 999
DETECTOR rec[-1]
R 999
E(0.025) X999
E(0.025) X20 X999
E(0.025) Y20 X999
E(0.025) Z20 X999
M 999
DETECTOR rec[-1]
R 999
E(0.025) X999
E(0.025) X22 X999
E(0.025) Y22 X999
E(0.025) Z22 X999
M 999
DETECTOR rec[-1]
R 999
E(0.025) X999
E(0.025) X24 X999
E(0.025) Y24 X999
E(0.025) Z24 X999
M 999
DETECTOR rec[-1]
R 999
E(0.025) X999
E(0.025) X6 X999
E(0.025) Y6 X999
E(0.025) Z6 X999
M 999
DETECTOR rec[-1]
R 999
E(0.025) X999
E(0.025) X8 X999
E(0.025) Y8 X999
E(0.025) Z8 X999
M 999
DETECTOR rec[-1]
R 999
E(0.025) X999
E(0.025) X16 X999
E(0.025) Y16 X999
E(0.025) Z16 X999
M 999
DETECTOR rec[-1]
R 999
E(0.025) X999
E(0.025) X18 X999
E(0.025) Y18 X999
E(0.025) Z18 X999
M 999
DETECTOR rec[-1]
CZ 6 1 8 3 16 11 18 13
CX 5 10 7 12 9 14 15 20 17 22 19 24
TICK
CZ 2 1 4 3 12 11 14 13 22 21 24 23
CX 5 6 7 8 15 16 17 18
TICK
CZ 0 1 2 3 10 11 12 13 20 21 22 23
CX 7 6 9 8 17 16 19 18
TICK
CZ 6 11 8 13 16 21 18 23
CX 5 0 7 2 9 4 15 10 17 12 19 14
TICK
MRX 1 3 5 7 9 11 13 15 17 19 21 23
DETECTOR rec[-10]
DETECTOR rec[-9]
DETECTOR rec[-8]
DETECTOR rec[-5]
DETECTOR rec[-4]
DETECTOR rec[-3]
REPEAT 2 {
    TICK
    CZ 6 1 8 3 16 11 18 13
    CX 5 10 7 12 9 14 15 20 17 22 19 24
    TICK
    CZ 2 1 4 3 12 11 14 13 22 21 24 23
    CX 5 6 7 8 15 16 17 18
    TICK
    CZ 0 1 2 3 10 11 12 13 20 21 22 23
    CX 7 6 9 8 17 16 19 18
    TICK
    CZ 6 11 8 13 16 21 18 23
    CX 5 0 7 2 9 4 15 10 17 12 19 14
    TICK
    MRX 1 3 5 7 9 11 13 15 17 19 21 23
    DETECTOR rec[-1] rec[-53]
    DETECTOR rec[-2] rec[-54]
    DETECTOR rec[-3] rec[-55]
    DETECTOR rec[-4] rec[-56]
    DETECTOR rec[-5] rec[-57]
    DETECTOR rec[-6] rec[-58]
    DETECTOR rec[-7] rec[-59]
    DETECTOR rec[-8] rec[-60]
    DETECTOR rec[-9] rec[-61]
    DETECTOR rec[-10] rec[-62]
    DETECTOR rec[-11] rec[-63]
    DETECTOR rec[-12] rec[-64]
}
MX 0 2 4 6 8 10 12 14 16 18 20 22 24
DETECTOR rec[-23] rec[-8] rec[-10] rec[-13]
DETECTOR rec[-22] rec[-7] rec[-9] rec[-10] rec[-12]
DETECTOR rec[-21] rec[-6] rec[-9] rec[-11]
DETECTOR rec[-18] rec[-3] rec[-5] rec[-8]
DETECTOR rec[-17] rec[-2] rec[-4] rec[-5] rec[-7]
DETECTOR rec[-16] rec[-1] rec[-4] rec[-6]
OBSERVABLE_INCLUDE(0) rec[-13] rec[-12] rec[-11]
```

Note: If we simulate our codes under Pauli data noise we recover the expected thresholds, so the error is not in the circuits themselves but in the way we are handling erasures.
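For concreteness, a minimal sketch of that Pauli-noise sanity check (the use of DEPOLARIZE1 and the 1% strength are illustrative choices, not necessarily the exact noise model we ran):

```
import stim

# Illustrative Pauli-noise variant: same data-qubit initialization as above, but with a
# single-qubit depolarizing channel on the 13 data qubits instead of the erasure gadgets.
# The 1% strength is an arbitrary choice for illustration.
pauli_noise_prefix = stim.Circuit("""
    RX 0 2 4 10 12 14 20 22 24 6 8 16 18 1 3 5 7 9 11 13 15 17 19 21 23
    TICK
    DEPOLARIZE1(0.01) 0 2 4 6 8 10 12 14 16 18 20 22 24
""")
```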


Decoding

Decoding is done using Sinter to call PyMatching, as shown in the Stim tutorial. We passed two additional flags when building the detector error model: approximate_disjoint_errors=True and ignore_decomposition_failures=True. See the code below.

```
import stim
import numpy as np
import pymatching
import sinter

def count_logical_errors(circuit: stim.Circuit, num_shots: int) -> int:
    # Sample the circuit.
    sampler = circuit.compile_detector_sampler()
    detection_events, observable_flips = sampler.sample(num_shots, separate_observables=True)

    # Extract decoder configuration data from the circuit.
    detector_error_model = circuit.detector_error_model(decompose_errors=True,
                                                        approximate_disjoint_errors=True,
                                                        ignore_decomposition_failures=True)

    # Run the decoder.
    predictions = sinter.predict_observables(
        dem=detector_error_model,
        dets=detection_events,
        decoder='pymatching',
    )

    # Count the mistakes.
    num_errors = 0
    for actual_flip, predicted_flip in zip(observable_flips, predictions):
        if not np.array_equal(actual_flip, predicted_flip):
            num_errors += 1
    return num_errors

```
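For reference, a minimal usage sketch of the function above (the circuit file name is an assumption; any way of constructing the circuit shown earlier works):

```
# Hypothetical usage: load the d=3 circuit shown above and estimate the logical error rate.
circuit = stim.Circuit.from_file("unrotated_d3_erasure.stim")  # file name is an assumption
num_shots = 100_000
num_errors = count_logical_errors(circuit, num_shots)
print(f"logical error rate: {num_errors / num_shots:.4f}")
```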

MystMan
  • How are you decoding the errors? – Craig Gidney Aug 08 '23 at 22:25
  • Take a look also on this post: https://quantumcomputing.stackexchange.com/questions/29993/circuit-based-erasure-simulation-using-stim – Yaron Jarach Aug 09 '23 at 06:42
  • @CraigGidney we updated the question to show how we decoded. – MystMan Aug 09 '23 at 13:25
  • @YaronJarach we don't see how that post helps solve the issue presented in this question – MystMan Aug 09 '23 at 13:26
  • @MystMan in the comments: "TableauSimulator and giving it operations one by one while generating and recording the erasure errors for yourself". This is the only known solution to your problem. – Yaron Jarach Aug 09 '23 at 14:01
  • @YaronJarach Do you mind detailing what is wrong with the above implementation, explaining a solution and maybe writing some pseudocode as an Answer to this question? Then people in the future with the same problem will know what to do. – MystMan Aug 09 '23 at 19:22
  • An erasure is when you attempted to measure an observable and got nothing. You can then just substitute any value for the measurement result and, in the absence of any other information, you will have a 50% chance of being correct. The weights you provide to the decoder should reflect this. – ChrisD Aug 11 '23 at 00:01
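To make the last comment concrete: matching decoders typically assign an edge weight of $\log\frac{1-p}{p}$ to a mechanism that fires with probability $p$, so a heralded (50/50) erasure should get weight 0 while un-heralded Pauli errors stay expensive. A small numeric sketch (the probabilities are just examples):

```
import math

def matching_weight(p: float) -> float:
    # Standard log-likelihood edge weight for an error mechanism that fires with probability p.
    return math.log((1 - p) / p)

print(matching_weight(0.5))    # 0.0   -> an erased (50/50) measurement is free to match through
print(matching_weight(0.025))  # ~3.66 -> an un-heralded Pauli error stays expensive
```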

1 Answer


Since Stim v1.12, you can use the HERALDED_ERASE instruction, which avoids this "use qubit 999 as an ancilla to implement the herald" stuff.
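For illustration, a minimal sketch of a single heralded-erasure site written with HERALDED_ERASE (assuming Stim ≥ 1.12; the 0.1 rate is just an example):

```
import stim

# Sketch: HERALDED_ERASE(p) erases the target with probability p (applying a uniformly
# random I/X/Y/Z) and appends a herald bit to the measurement record, so the ancilla-999
# gadget from the question is no longer needed. A DETECTOR on rec[-1] exposes the herald.
circuit = stim.Circuit("""
    RX 0
    HERALDED_ERASE(0.1) 0
    DETECTOR rec[-1]
    MX 0
""")
```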

That said, the underlying problem here is that pymatching doesn't support erasures or correlated errors in general. It doesn't understand your error model, only a very loose approximation of it. You're hoping that pymatching will notice certain detectors are heralding certain error mechanisms, but actually what pymatching is doing is ignoring those mechanisms or treating them as independent.

It's kind of surprising the threshold is 20% instead of 1%, actually.

Craig Gidney