of possible solutions is polynomially bounded. (There are
some steps involved to obtain this conclusion but they are
mostly all “low-level details.”) Further, this method not only
establishes a bound on the number of solutions, but also
gives a polynomial time algorithm to find these solutions.
To see this, note that given the three new constraints, we are
looking for roots of the univariate polynomial R_X(Y, Y^d, Y^{d^2},
Y^{d^3}), which can be accomplished by well-known polynomial
time algorithms.1
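As a toy illustration of the root-finding step, over a small prime field one can even find roots by brute force; the snippet below (the field F_7, the polynomial Y^2 − 1, and the helper name roots_mod_p are our illustrative choices, not from the text) is a stand-in for the well-known polynomial-time root-finding algorithms a real decoder would use.

```python
# Toy stand-in for root finding over a finite field: over a small prime
# field F_p we can simply test every candidate root.

def roots_mod_p(coeffs, p):
    """Return all roots in F_p of the polynomial whose coefficients
    are given lowest-degree first."""
    return [y for y in range(p)
            if sum(c * pow(y, i, p) for i, c in enumerate(coeffs)) % p == 0]

p = 7
print(roots_mod_p([p - 1, 0, 1], p))  # roots of Y^2 - 1 over F_7 -> [1, 6]
```

For the large fields used by folded RS codes one would replace the exhaustive search with a polynomial-time factoring algorithm, but the interface is the same: polynomial in, list of roots out.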
Finally, let us return to the problem of list decoding
folded RS codes with m = 4. In folded RS codes also, we have
the property that f(g^j X) is related to f(X) for 1 ≤ j ≤ 3. In fact,
combining a couple of well-known results in finite fields,
in Guruswami and Rudra9 we show that f(g X) = (f(X))^{|F|}
mod (E(X)), where E(X) = X^{|F|−1} − g is an irreducible polynomial. We remark that the irreducible polynomial E(X) in
the Parvaresh–Vardy (henceforth, PV) codes only has some
degree requirements. Our restriction on E(X) is stricter and
thus, folded RS codes are a special case of PV codes. However, we are not done yet. Until now all we can claim is that
a folded RS code of rate R with folding parameter m = 4 can be
list decoded from the same fraction of errors as the corresponding PV codes, which happens to be 1 − (4R)^{4/5}. We have
4R appearing instead of R because the rate of the PV code is
one-fourth the rate of the original RS code, since the encoding
of the message f (X) now consists of the evaluations of four
polynomials instead of just those of f (X). Next we expand on
our other main idea which “compresses” the PV encoding to
avoid this rate loss, and enables correcting close to a fraction
1 − R of errors.
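The finite-field identity above can be checked numerically. The sketch below is over a toy field of our choosing (p = 5, with g = 2 generating F_5^*, so that E(X) = X^4 − 2 is irreducible; the sample message polynomial is arbitrary) and verifies f(g X) ≡ f(X)^{|F|} mod E(X):

```python
# Verify f(g*X) == f(X)^|F| (mod E(X)) over a toy field F_5.
# g = 2 generates F_5^*, so E(X) = X^(|F|-1) - g = X^4 - 2 is irreducible.
p = 5
g = 2
E = [(-g) % p, 0, 0, 0, 1]  # X^4 - 2, coefficients lowest-degree first

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_mod(a, m):
    # Reduce a modulo the monic-led polynomial m (long division over F_p).
    a = a[:]
    while len(a) >= len(m) and any(a):
        c = a[-1] * pow(m[-1], -1, p) % p
        shift = len(a) - len(m)
        for i, mi in enumerate(m):
            a[shift + i] = (a[shift + i] - c * mi) % p
        while len(a) > 1 and a[-1] == 0:
            a.pop()
    return a

def poly_pow_mod(a, e, m):
    # Square-and-multiply exponentiation of polynomials mod m.
    r = [1]
    while e:
        if e & 1:
            r = poly_mod(poly_mul(r, a), m)
        a = poly_mod(poly_mul(a, a), m)
        e >>= 1
    return r

f = [1, 3, 1]  # f(X) = X^2 + 3X + 1, an arbitrary low-degree message
lhs = poly_mod([c * pow(g, i, p) % p for i, c in enumerate(f)], E)  # f(g*X)
rhs = poly_pow_mod(f, p, E)                                          # f(X)^|F| mod E
print(lhs == rhs)  # True
```

The identity is just the Frobenius map in disguise: f(X)^p = f(X^p), and X^p ≡ g X modulo X^{p−1} − g.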
3.2. The final piece
The improvement over Parvaresh–Vardy comes from com-
paring apples to oranges. In particular, until now we have
seen that the folded RS code with folding parameter 4 is a
special case of the PV code of order 4. Instead, let us com-
pare the folded RS code with a PV code of smaller order, say
2. It turns out that folded RS codes with folding param-
eter 4 are compressed forms of certain specific PV codes of
order 2 and we will exploit this observation. In particular,
as in Figure 7, compare the folded RS codeword with the
PV code of order 2 (where the polynomial f(X) is evaluated
at the points {1, g, . . . , g^{n−1}} \ {g^3, g^7, . . . , g^{n−1}}). We find that
in the PV encoding of f, for every 0 ≤ i ≤ n/m − 1 and every
0 < j < m − 1, f(g^{mi+j}) appears exactly twice (once as f(g^{mi+j})
and another time as f_1(g^{−1} g^{mi+j})), whereas it appears only
once in the folded RS encoding. In other words, the infor-
mation contained in one symbol in the folded RS codeword
(which is worth four elements from F) is repeated over three
symbols in the PV codeword (which are worth six elements
from F). This implies that even though both the folded RS
codeword and the PV codeword have exactly the same infor-
mation, the folded RS codeword is compressed by a factor
of 3/2. This in turn bumps up the rate of the folded RS code
by the same factor. Hence, we can list decode folded RS
codes with folding parameter 4 and rate R from a fraction
1 − (4R/3)^{2/3} of errors.
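Plugging in numbers makes the gain from compression concrete. The sketch below (rate R = 1/8 is an arbitrary illustrative choice) compares the radius 1 − (4R)^{4/5} from viewing the m = 4 folded RS code as an order-4 PV code against the radius 1 − (4R/3)^{2/3} obtained via the compressed order-2 view:

```python
# Decoding radii for an m = 4 folded RS code of rate R, under the
# order-4 PV view (with its 4x rate loss) versus the compressed
# order-2 PV view discussed above.
R = 1 / 8  # illustrative rate

pv4_radius = 1 - (4 * R) ** (4 / 5)      # 1 - (4R)^(4/5)
pv2_radius = 1 - (4 * R / 3) ** (2 / 3)  # 1 - (4R/3)^(2/3)

print(round(pv4_radius, 3), round(pv2_radius, 3), round(1 - R, 3))
```

For R = 1/8 the order-2 view already decodes from a noticeably larger fraction of errors, though both still fall short of the 1 − R limit.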
Thus, our list decoding algorithm for folded RS with
folding parameter m can be modularly defined as follows:
figure 7: the correspondence between a folded RS code (with m = 4
and x_i = g^i) and the PV code (of order s = 2) evaluated over {1, g, g^2, g^4,
. . . , g^{n−4}, . . . , g^{n−2}}. the correspondence for the first block in the folded
RS codeword and the first three blocks in the PV codeword is shown
explicitly in the left corner of the figure.

FRS codeword: ( f(x_0), f(g x_0), f(g^2 x_0), f(g^3 x_0) ), ( f(x_4), f(g x_4), f(g^2 x_4), f(g^3 x_4) ), . . .
PV codeword: ( f(x_0), f(g x_0) ), ( f(g x_0), f(g^2 x_0) ), ( f(g^2 x_0), f(g^3 x_0) ), ( f(x_4), f(g x_4) ), . . .
unfold the received word for the appropriate PV code of order
s ≤ m and then run the Parvaresh–Vardy list decoder on this
unfolded received word. It turns out that this list decoder can
correct such a folded RS code of rate R from up to a
1 − (mR/(m − s + 1))^{s/(s+1)} fraction of errors. By picking m to be (somewhat) larger than s
and picking s to be sufficiently large (in terms of 1/ε), we can
conclude the following result.
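The trade-off behind this parameter choice can be seen numerically. Assuming the decoding radius 1 − (mR/(m − s + 1))^{s/(s+1)} stated above (the particular values of R, s, and m below are our illustrative choices), the radius climbs toward the 1 − R limit as s grows and m is taken somewhat larger than s:

```python
# Decodable error fraction for a rate-R folded RS code with folding
# parameter m, decoded via a PV code of order s.
R = 0.5  # illustrative rate

def radius(R, m, s):
    return 1 - (m * R / (m - s + 1)) ** (s / (s + 1))

for s, m in [(2, 4), (10, 100), (100, 10000)]:
    print(s, m, round(radius(R, m, s), 3))  # approaches 1 - R = 0.5
```

Each printed value stays below 1 − R, but the gap shrinks as s grows, which is exactly why s must be taken large in terms of 1/ε in the theorem below.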
Theorem 1. For every ε > 0 and 0 < R < 1, there is a family
of folded RS codes that have rate at least R and which can be list
decoded up to a fraction 1 − R − ε of errors in time (N/ε^2)^{O(ε^{−1} log(1/R))},
where N is the block length of the code. The alphabet size of the
code is (N/ε^2)^{O(1/ε^2)}.
We note that the time complexity has an undesirable
dependence on ε, with 1/ε in the exponent. Improving this
bound remains a challenging open question.
4. DECODING OVER SMALL ALPHABETS
So far we have discussed codes over large alphabets. For
example, folded RS codes of rate R that can be list decoded
from a 1 − R − ε fraction of errors need alphabet size of roughly
n^{O(1/ε^2)}, where n is the block length of the code. This large
alphabet size can be a shortcoming. Next, we discuss known
techniques that help us reduce the alphabet size.
We start with perhaps the most natural small alphabet:
{0, 1}. For codes defined over this alphabet (also called
binary codes), it turns out that to list decode from an r fraction
of errors the best possible rate is 1 − H(r), where H(x) = −x log_2 x
− (1 − x) log_2 (1 − x) is the binary entropy function. Two remarks are
in order. First, the rate 1 − H(r) is much smaller than the rate
of 1 − r that folded RS codes can achieve. (It turns out that to
attain a rate of 1 − r − ε, the alphabet size needs to be at least
2^{1/ε}; more on this later in the section.) Second, as shown in
Shannon’s seminal paper,16 the quantity 1 − H(r) is exactly
the same as the best possible rate (aka “capacity”) that can
be achieved on the binary symmetric channel BSC_r. Thus, list
decoding can bridge the traditionally perceived gap between
the Shannon stochastic model and the Hamming worst-case
model.
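For concreteness, the sketch below (the error fraction r = 0.1 is an arbitrary example of our choosing) evaluates the binary-alphabet rate 1 − H(r) and contrasts it with the 1 − r achievable over large alphabets:

```python
import math

def H(x):
    """Binary entropy function H(x) = -x log2 x - (1-x) log2 (1-x)."""
    if x in (0, 1):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

r = 0.1  # illustrative error fraction
binary_rate = 1 - H(r)    # best possible rate for binary codes
large_alpha_rate = 1 - r  # achievable by folded RS over large alphabets
print(round(binary_rate, 3), round(large_alpha_rate, 3))
```

Even at a modest 10% error fraction the binary rate drops well below 1 − r, quantifying the price paid for the small alphabet.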
We “transfer” our result for folded RS codes to a result for
binary codes via a natural method for composing together
codes called code concatenation, proposed by Forney over