Figure 4: Illustration of the Guruswami–Sudan algorithm for list decoding RS codes. The lines are recovered as factors of a degree 5 curve that passes through each point twice. (Parameters: deg(Q) = 5, n = 10, k = 2, e = 6.)
through 6 points to guarantee this). Let C* be the degree 5 curve that is the product of the five solution lines. As mentioned above, if we interpolate a degree 5 curve through
the 10 points, we will in general not get C as the solution.
However, notice a special property of C*—it passes through
each point twice; a priori there is no reason to expect that an
interpolated curve will have this property. The Guruswami–
Sudan idea is to insist that the interpolation stage produces
a degree 5 curve with a zero of multiplicity at least 2 at each point (i.e., the curve passes through each point twice). One can
then argue that each of the five lines must be a factor of the
curve. In fact, this will be the case for any interpolated curve of degree up to 7. This is because each solution line passes through at least 10 − 6 = 4 of the points, so the number of intersections of the line with the curve, counting multiplicities, is at least 4 × 2 = 8, which is greater than the degree of the curve. Finally,
one can always fit a degree 7 curve passing through any 10
points twice (again by a counting argument). So by insisting
on multiplicities in the interpolation step, one can solve this
example.
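To see the counting argument concretely, here is a minimal Python sketch (the function and variable names are ours, purely for illustration): a nonzero interpolating curve exists whenever the number of available coefficients exceeds the number of linear conditions imposed by the multiplicity requirements.

    # Count monomials X^i Y^j of (1, k-1)-weighted degree at most D,
    # i.e., with i + (k-1)*j <= D. Here k = 2, so Y has weight 1.
    def num_monomials(D, k):
        return sum(1 for i in range(D + 1) for j in range(D + 1)
                   if i + (k - 1) * j <= D)

    # A zero of multiplicity w at a point imposes w*(w+1)/2 linear conditions.
    n, k, w, D = 10, 2, 2, 7
    coefficients = num_monomials(D, k)   # 36 free coefficients
    conditions = n * w * (w + 1) // 2    # 10 * 3 = 30 homogeneous conditions
    assert coefficients > conditions     # so a nonzero Q(X, Y) must exist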
In general, the interpolation stage of the Guruswami–
Sudan list decoder finds a polynomial Q(X, Y) that has a
zero of multiplicity w for some suitable integer w at each
(a_i, y_i).^d Of course this can always be accomplished with a (1, k − 1)-degree that is a factor w larger (by simply raising the earlier interpolation polynomial to the wth power). The key gain is that the required multiplicity can be achieved with a degree only about a factor w/√2 larger. The second step remains the same, and here each correct data point counts for w zeroes. This factor √2 savings translates into the improvement of ρ from 1 − √(2R) to 1 − √R. See Figure 5 for a plot of this trade-off between rate and fraction of errors, as well as the (1 − R)/2 trade-off of traditional unique decoding algorithms. Note that we now get an improvement for every
rate. Also plotted are the information-theoretic limit 1 − R,^d and the Parvaresh–Vardy improvement for low rates that we will discuss shortly.

^d We skip formalizing the notion of multiple zeroes in this description, but this follows along standard lines and we refer the interested reader to Guruswami and Sudan for the details.
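As a rough numerical illustration (our own sketch, with lower-order terms ignored), the decoding radius with multiplicity w behaves like 1 − √(R(1 + 1/w)): taking w = 1 recovers the earlier 1 − √(2R) bound, and large w approaches 1 − √R.

    import math

    # Heuristic decoding radius with multiplicity w, ignoring lower-order
    # terms: the agreement t must satisfy t * w > deg(Q) ~ sqrt(k*n*w*(w+1)),
    # i.e., t/n > sqrt(R * (1 + 1/w)).
    def gs_radius(R, w):
        return 1 - math.sqrt(R * (1 + 1 / w))

    R = 0.25
    for w in (1, 2, 4, 16, 256):
        print(f"w = {w:3d}: radius ~ {gs_radius(R, w):.4f}")
    print(f"limit 1 - sqrt(R) = {1 - math.sqrt(R):.4f}")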
Soft Decoding: We now comment on a further crucial benefit of the multiplicities idea, which is relevant to potential practical applications of list decoding. The multiplicities can be used to encode the relative importance of different codeword positions, using a higher multiplicity for symbols whose value we are more confident about and a lower multiplicity for the less reliable symbols. In practice, such confidence estimates (called "soft inputs") are available in abundance at the input to the decoder (e.g., from demodulation of the analog signal). This has led to a promising soft-decision decoder for RS codes with good coding gains in practice,^13 which was adopted in the Moonbounce program to improve communication between Ham radio operators who bounce radio signals off the moon to make long-distance contacts.
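To illustrate the flavor of this idea, here is a hypothetical rule (our own sketch, not the assignment actually used in practice, which is optimized far more carefully) mapping per-position confidence estimates to interpolation multiplicities:

    # Hypothetical rule: scale each position's confidence in [0, 1] to an
    # integer multiplicity in [0, w_max], so reliable symbols carry more
    # weight in the interpolation step.
    def assign_multiplicities(confidences, w_max=8):
        return [round(c * w_max) for c in confidences]

    print(assign_multiplicities([0.95, 0.50, 0.10, 0.99]))  # [8, 4, 1, 8]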
3. Folded Reed–Solomon Codes
We now discuss a variant of RS codes called folded Reed–
Solomon codes (henceforth folded RS codes), which will let
us approach the optimal error-correction radius of a fraction
1 − R of errors. The codewords in the folded RS code will be
in one-to-one correspondence with RS codewords. We begin
with an informal description. Consider the RS codeword corresponding to the polynomial f(X) that is evaluated at the points x_0, x_1, . . . , x_{n−1} from F, as depicted by the codeword on
top in Figure 6. The corresponding codeword in the folded RS code (with folding parameter m = 4) is obtained by juxtaposing four consecutive symbols of the RS codeword, as shown at the bottom of Figure 6. In other words, we think of the RS code as a code over a larger alphabet (of four times the "packet size") and of block length four times smaller (see the sketch below). This repackaging reduces the number of error patterns one has to handle. For example, if we are targeting the correction of errors in up to a 1/4 fraction of the new larger symbols, then we are no longer required to correct the error pattern corresponding to the (pink) shaded columns in Figure 6 (whereas the same
Figure 5: Rate vs. error-correction radius for RS codes. The optimal trade-off is also plotted, as is the Parvaresh–Vardy improvement over RS codes. [Plot of ρ (fraction of errors) against R (rate), both ranging from 0 to 1, showing four curves: the information-theoretic bound, the unique decoding bound, Guruswami–Sudan, and Parvaresh–Vardy.]
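Returning to the folding operation, here is a minimal sketch (the function name and the placeholder symbols are ours): it simply groups m consecutive RS symbols into one symbol of the folded code.

    # Group m consecutive RS symbols into one symbol of the folded code;
    # assumes the block length n is divisible by m.
    def fold(codeword, m=4):
        return [tuple(codeword[i:i + m]) for i in range(0, len(codeword), m)]

    # An RS codeword of length n = 8 becomes a folded codeword of length 2.
    print(fold(["f(x0)", "f(x1)", "f(x2)", "f(x3)",
                "f(x4)", "f(x5)", "f(x6)", "f(x7)"]))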