40 years ago. Thanks to the powerful algorithms for decoding folded RS codes, we can use this approach to achieve a certain trade-off between rate and fraction of errors corrected, called the Zyablov bound.9 In a subsequent work, using another generalization of code concatenation, we improved the trade-off to the Blokh–Zyablov bound.8 Figure 8 plots
these two trade-offs along with the best possible trade-off
(list-decoding capacity). There is a large gap between the
list-decoding capacity of binary codes and the best bound
known to be achievable with efficient algorithms. Closing
this gap remains a central and extremely challenging open problem.
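To give a feel for the gap plotted in Figure 8, the sketch below numerically evaluates the two binary-code trade-offs discussed above: the list-decoding capacity H^(-1)(1 − R) and the Zyablov-style trade-off max over r in (R, 1) of (1 − R/r) · H^(-1)(1 − r). The closed form used here is the standard expression for the Zyablov bound, and the grid-search resolution is an arbitrary choice; neither is taken verbatim from this article.

```python
import math

def H(x):
    # Binary entropy function (base 2).
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def H_inv(y):
    # Inverse of H on [0, 1/2], by binary search (H is increasing there).
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if H(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def capacity_radius(R):
    # List-decoding capacity for binary codes: fraction of errors H^{-1}(1 - R).
    return H_inv(1 - R)

def zyablov_radius(R, steps=2000):
    # Zyablov trade-off: maximize (1 - R/r) * H^{-1}(1 - r) over the outer
    # rate r in (R, 1); a crude grid search suffices for plotting.
    best = 0.0
    for i in range(1, steps):
        r = R + (1 - R) * i / steps
        best = max(best, (1 - R / r) * H_inv(1 - r))
    return best
```

Evaluating both functions over a grid of rates R reproduces the qualitative shape of Figure 8, with the Zyablov curve lying strictly below capacity at every rate.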
We now briefly mention how we resolve the large alphabet
issue that was raised in Section 3. When the folding parameter of the folded RS code is a constant (as in Theorem 1),
the number of bits needed to represent a symbol from the
alphabet is no larger than roughly the logarithm of the block
length of the folded RS code. This is small enough to use the
idea of code concatenation mentioned above to reduce the
alphabet. In order to maintain the optimal trade-off between
rate and fraction of errors list decoded, we need to combine
concatenation with an approach to redistribute symbols
using expander graphs.6 This leads to codes of rate R that can be list decoded from a fraction 1 − R − ε of errors over an alphabet of size 2^(1/ε^4), which is close to the lower bound of 2^(1/ε).
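The alphabet-reduction idea can be made concrete with a toy sketch: each large-alphabet outer symbol is re-encoded by a binary inner code, so the concatenated codeword is binary and the overall rate is the product of the two rates. Both component codes below are simplistic stand-ins (a repetition outer code and a rate-1/2 complement inner code), not the folded RS and concatenation scheme of this article.

```python
# Toy illustration of code concatenation for alphabet reduction: each outer
# symbol over an alphabet of size 2^m is re-encoded by a binary inner code.
m = 4  # bits per outer symbol, i.e. outer alphabet size 2^m = 16

def inner_encode(sym):
    # Toy [2m, m] binary inner code of rate 1/2: the message bits followed
    # by their complements (purely illustrative).
    bits = [(sym >> i) & 1 for i in range(m)]
    return bits + [1 - b for b in bits]

def outer_encode(msg_syms):
    # Toy outer code of rate 1/2 over the 2^m-ary alphabet: repetition.
    return msg_syms + msg_syms

def concat_encode(msg_syms):
    # Concatenation: outer encoding, then inner encoding symbol by symbol.
    cw = []
    for sym in outer_encode(msg_syms):
        cw.extend(inner_encode(sym))
    return cw

# A 3-symbol message becomes a 48-bit binary codeword; the overall rate is
# the product of the component rates: (1/2) * (1/2) = 1/4.
codeword = concat_encode([3, 14, 7])
```

In the actual construction the outer code is a folded RS code and the inner code is chosen to be list decodable; the point of the sketch is only how the alphabet shrinks to binary while the rates multiply.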
5. CONCLUDING REMARKS
First, we mention some related work that appeared subsequent to the initial publication of our result. Further work
on extending the results in this article to the framework
of algebraic-geometric codes has been done in Guruswami5 and Guruswami and Patthak.7 A surprising application of the ideas in the Parvaresh–Vardy list decoder is the construction of randomness extractors by Guruswami, Umans, and Vadhan.11 Randomness extractors convert input from
a weakly random source into an almost perfectly random
string, and have been intensively studied in theoretical
computer science for over 15 years. This recent extractor
is almost optimal in all parameters, while having a simple,
self-contained description and proof.
Even though the work presented in this article makes
good progress in our theoretical understanding of list
decoding, applying these ideas in practice requires further innovation. We conclude by posing two practical challenges.
The first challenge is specific to making the list decoding algorithms for folded RS codes more practical. Recall
that the algorithm involved an interpolation step and a
“root-finding” step. There are fast heuristic approaches
for the latter step that could be used in practice. The interpolation step, however, seems too inefficient for practical
purposes due to the large size of the linear systems that
need to be solved. It would be very useful to have more efficient algorithms for this step. We note that such improvements for the Guruswami–Sudan algorithm have been obtained.
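To make the interpolation step concrete, the sketch below finds a nonzero bivariate polynomial Q(X, Y) = Σ q_ab X^a Y^b vanishing at a given set of points, by computing a null-space vector of the resulting homogeneous linear system with Gaussian elimination over a prime field. The field GF(13), the degree bounds, and the interpolation points are illustrative choices, not parameters of the folded RS decoder.

```python
# Toy interpolation step: find a nonzero Q(X, Y) = sum_{a,b} q[a,b] X^a Y^b
# vanishing at every given point, by solving a homogeneous linear system
# over GF(p). All parameters below are illustrative.
p = 13
points = [(x, (3 * x + 5) % p) for x in range(6)]     # points on Y = 3X + 5
monos = [(a, b) for b in range(2) for a in range(5)]  # 10 unknown coefficients

# One linear constraint per point: Q(x, y) = 0.
rows = [[pow(x, a, p) * pow(y, b, p) % p for (a, b) in monos]
        for (x, y) in points]

def nullspace_vector(rows, ncols, p):
    # Gauss-Jordan elimination mod p; returns a nonzero null-space vector
    # (one exists because there are more unknowns than constraints).
    m = [row[:] for row in rows]
    pivots = {}  # pivot column -> row index
    r = 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c] % p), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        inv = pow(m[r][c], p - 2, p)  # inverse via Fermat's little theorem
        m[r] = [v * inv % p for v in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] % p:
                f = m[i][c]
                m[i] = [(v - f * w) % p for v, w in zip(m[i], m[r])]
        pivots[c] = r
        r += 1
    free = next(c for c in range(ncols) if c not in pivots)
    v = [0] * ncols
    v[free] = 1
    for c, ri in pivots.items():
        v[c] = (-m[ri][free]) % p
    return v

q = nullspace_vector(rows, len(monos), p)
```

Since the points above lie on the line Y = 3X + 5 over GF(13), any such Q must have Y − 3X − 5 as a factor; in the real decoder the analogous "root-finding" step extracts such factors to produce the list of candidate messages. The large linear systems arising for folded RS codes are exactly what makes this step expensive in practice.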
The second challenge is more general. Codes have found
numerous practical applications in domains such as communication and data storage. Despite its promise and the
recent advances, list decoding has not yet found widespread
use in practical systems (though as mentioned earlier, the
Moonbounce program does use the multiplicities based
list decoder). One possible reason could be that the previous list decoding algorithms do not provide much gain for
the high rate regime over traditional unique decoding algorithms. However, this is no longer a concern: we now have
algorithms that obtain much better theoretical bounds in
this regime. Further, folded RS codes are very similar to RS
codes that are ubiquitous in practice. Hopefully, in the near future, list decoding will be used more widely in practical systems.
Figure 8: Error-correction radius r of our algorithms for binary codes plotted against the rate R. The best possible trade-off, i.e., list-decoding capacity, is r = H^(-1)(1 − R), and is also plotted.
Acknowledgments
The research described here was supported in part by NSF
award CCF-0343672 and fellowships from the Sloan and
Packard Foundations. We thank Ronitt Rubinfeld for several
valuable comments on an earlier draft of this paper.
References
1. Berlekamp, E. Factoring polynomials over large finite fields. Math. Comput. 24 (1970), 713–735.
2. Elias, P. List decoding for noisy channels. Technical Report 335, MIT Research Lab of Electronics.
3. Gemmell, P., Sudan, M. Highly resilient correctors for multivariate polynomials. Inf. Processing Lett. 43, 4 (1992).
4. Goldreich, O., Levin, L. A hard-core predicate for all one-way functions. In Proceedings of the 21st ACM Symposium on Theory of Computing.
5. Guruswami, V. Artin automorphisms, cyclotomic function fields, and folded list-decodable codes, 2008.
6. Guruswami, V., Indyk, P. Linear-time encodable/decodable codes with near-optimal rate. IEEE Trans. Information Theory 51, 10 (2005).
7. Guruswami, V., Patthak, A. Correlated algebraic-geometric codes: Improved list decoding over bounded alphabets. Math. Comput. 77, 261 (Jan. 2008), 447–473.
8. Guruswami, V., Rudra, A. Better binary list-decodable codes via multilevel concatenation. In Proceedings of the 11th International Workshop on Randomization and Computation, 2007, 554–568.
9. Guruswami, V., Rudra, A. Explicit codes achieving list decoding capacity: Error-correction up to the Singleton bound. IEEE Trans. Information Theory 54, 1 (2008), 135–150. Preliminary version in Proceedings of the 38th ACM Symposium on Theory of Computing, 2006.
10. Guruswami, V., Sudan, M. Improved decoding of Reed–Solomon and algebraic-geometric codes. IEEE Trans. Information Theory 45, 6 (1999), 1757–1767.
11. Guruswami, V., Umans, C., Vadhan, S. Unbalanced expanders and randomness extractors from Parvaresh–Vardy codes. In Proceedings of the 22nd IEEE Conference on Computational Complexity, 2007.