
Suppose I have a single-block $n$-qubit stabilizer code that can correct a weight-1 error (so the distance is $d=3$). If I apply a $1$-transversal gate of the form $U = U_1 \otimes U_2 \otimes \cdots \otimes U_n$, then a weight-1 error before the gate remains a weight-1 error after the gate. So $U$ is called fault-tolerant because error correction works both before and after applying $U$.

On the other hand, consider a $2$-transversal gate of the form $V = V_1 \otimes V_2 \otimes \cdots \otimes V_m$ where each $V_i$ acts on either 1 or 2 qubits (so $m \le n$), for example $\mathrm{CNOT} \otimes X \otimes Y$. Then if there is 1 error before we apply $V$, there could be 2 errors on the output. This means $V$ is not fault-tolerant (as usually defined), because we could correct the error before applying $V$ but not after.
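The error-doubling claim can be checked with the standard Clifford conjugation rules (a sketch, not from the question; the table below only covers the generators needed for this example):

```python
# Sketch: propagating single-qubit Pauli errors through a CNOT to show
# how a 2-qubit factor of a transversal gate can double error weight.
# Standard conjugation rules for CNOT (control first, target second):
#   X⊗I -> X⊗X,  I⊗X -> I⊗X,  Z⊗I -> Z⊗I,  I⊗Z -> Z⊗Z.
CNOT_RULES = {
    ("I", "I"): ("I", "I"),
    ("X", "I"): ("X", "X"),
    ("I", "X"): ("I", "X"),
    ("Z", "I"): ("Z", "I"),
    ("I", "Z"): ("Z", "Z"),
}

def weight(paulis):
    """Number of non-identity tensor factors."""
    return sum(p != "I" for p in paulis)

# A weight-1 X error on the control qubit before the CNOT...
before = ("X", "I")
after = CNOT_RULES[before]
print(before, "->", after)                  # ('X', 'I') -> ('X', 'X')
print(weight(before), "->", weight(after))  # 1 -> 2
```

A $Z$ error on the target spreads the same way in the other direction, while the 1-qubit factors ($X$, $Y$ above) can never increase the weight.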

However, if I simply choose a code with a larger distance, say $d=5$ (which can correct up to 2 errors), then it seems like $V$ should be considered "fault-tolerant". If a weight-1 error happens and we apply $V$, the output will have at most 2 errors, and because the code can correct 2 errors we can still recover correctly. However, the "fault-tolerant distance" of the code has changed: we can only tolerate a single error (if 2 errors happen, then after we apply $V$ there could be 4 errors). So even though the code has $d=5$, the "fault-tolerant distance" is $d_{FT}=3$.
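The arithmetic behind this can be made explicit. Assuming (as in the question) that a $t$-transversal gate multiplies error weight by at most $t$, a weight-$w$ input error is tolerable only when $t \cdot w \le \lfloor (d-1)/2 \rfloor$, which gives an effective distance $d_{FT} = 2 \lfloor (d-1)/(2t) \rfloor + 1$:

```python
# Sketch under the worst-case assumption that a t-transversal gate can
# turn a weight-w error into a weight t*w error.
def ft_distance(d: int, t: int) -> int:
    t_correctable = (d - 1) // 2   # errors the code itself can correct
    w_max = t_correctable // t     # largest input weight surviving the blow-up
    return 2 * w_max + 1

print(ft_distance(3, 1))  # 3: 1-transversal gates preserve the distance
print(ft_distance(3, 2))  # 1: a d=3 code tolerates no error before V
print(ft_distance(5, 2))  # 3: the d=5 example above
```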

Is my understanding correct? I have never seen this mentioned anywhere, but it seems fairly obvious (unless I am missing something). Is there a reference?

Eric Kubischta

1 Answer


I think there is no common agreement on what $U$ being fault tolerant exactly means. It was initially defined as "the quantum [system] can function successfully even if errors occur during the error correction".

To some people it will mean "$U$ does not spread errors within one code block" (see this answer for example quoting this paper).

A more or less equivalent formulation could be "a correctable amount of errors before the operation is always mapped to a correctable amount of errors after the operation". The slight difference is that error spreading can happen if the spread errors always cancel out (e.g. thanks to stabilizers). As an example, I would argue that stabilizer measurements can spread errors into the code. Consequently, they are considered fault-tolerant only if the measurement schedule is chosen to avoid the so-called hook errors.

An even broader definition could be "A correctable set of errors before the operation is mapped to a correctable set of errors after the operation". Indeed, error spreading induces correlation which could be used by a decoder to correct errors despite some of them having spread.

As the answer I quoted points out, fault tolerance is a property of an operation (usually defined over a class of codes with arbitrarily large distance). I do not think the fault-tolerance property of your $V$ should depend on whether it is applied on a $d=3$ or $d\geq 5$ code.

Your notion of "fault-tolerant distance" wants to capture how badly an operation impacts the code's performance. I believe it is close to the minimal-weight error mechanism that induces a logical error in a circuit-level experiment, i.e., the minimal-weight error in the circuit's space-time decoding graph.

You can probably say that a 2-transversal operation is fault-tolerant if you can show that you are able to decode any circuit using it without hindering (or at least without hindering too much) the overall distance of the computation.

AG47