# Dissertation on Fountain Codes under Maximum Likelihood Decoding

On May 9, 2017, Francisco Lázaro Blasco successfully defended his Ph.D. thesis *Fountain Codes under Maximum Likelihood Decoding*. The Ph.D. examination committee consisted of Prof. Christian Schuster (Institute of Electromagnetic Theory) and the two reviewers, Prof. Gerhard Bauch and Prof. Amin Shokrollahi (Kandou Bus and École Polytechnique Fédérale de Lausanne, Switzerland).

*Abstract*

Consider a scenario in which we want to deliver a data file to a large number of receivers over a network where data packets get lost (erased) with some (low) probability; this is, for example, the case in multimedia streaming over the Internet. The traditional solution to this problem consists of chunking the data file into packets, transmitting them to all receivers, and letting receivers request the retransmission of lost packets. This approach has long been known to be highly inefficient and impractical at scale.

Fountain codes were designed to deliver information efficiently to a large number of receivers. One can think of a fountain code using the metaphor of a water fountain generating water drops and a person holding a glass to collect enough water to fill it. Assume we use a fountain code to transmit a data file which is chunked into k packets. In order to receive the data file, a receiver simply needs to collect m = k + δ output packets from the fountain code, with δ very small. The key property of a fountain code is that its output packets are statistically identical, like water drops, and yet all carry different information.
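As an illustration, a minimal sketch of how such statistically identical output packets can be generated: each output packet is the XOR of a random subset of the k input packets. The function name and the uniform degree choice are illustrative assumptions; real LT codes draw the degree from a (robust) soliton distribution.

```python
import random
from functools import reduce

def lt_style_output_packet(packets, rng):
    """Produce one fountain-code output packet as the XOR of a random
    subset of the k input packets.  The uniform degree choice below is
    a toy assumption; real LT codes use a (robust) soliton degree
    distribution."""
    k = len(packets)
    degree = rng.randint(1, k)                     # how many inputs to combine
    neighbors = sorted(rng.sample(range(k), degree))
    payload = reduce(
        lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
        (packets[i] for i in neighbors),
    )
    return neighbors, payload

rng = random.Random(42)
k = 4
packets = [bytes([17 * (i + 1)] * 8) for i in range(k)]  # k input packets of 8 bytes
neighbors, payload = lt_style_output_packet(packets, rng)
```

A receiver that collects enough such (neighbors, payload) pairs can set up a system of XOR equations in the k input packets and solve it, which is exactly the decoding problem discussed below.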

The first practical fountain code constructions were LT codes. These codes presented some scalability issues, which were fixed with the introduction of Raptor codes, an evolution of LT codes. LT and Raptor codes have been well studied in the setting of iterative decoding, a fast suboptimal decoding algorithm that performs well when the number of input packets *k* grows very large. However, *k* often cannot grow too large because of latency and memory constraints. In this setting, iterative decoding performs very poorly; in fact, the fountain codes used in IETF, DVB and 3GPP standards do not use iterative decoding but inactivation decoding, an efficient maximum likelihood (ML) decoding algorithm. ML decoding is optimal for solving the systems of equations resulting from fountain codes.

Fountain codes under ML decoding show excellent performance. However, it is still not well understood how their performance can be predicted without resorting to simulations. As a consequence, it is also difficult to design fountain codes for this setting. The goal of this research project is to analyze the complexity and the probability of decoding failure of fountain codes under ML decoding.

Different ML decoding algorithms exist, which exhibit different decoding complexities. In this research project we focus on inactivation decoding, which is in fact very similar to iterative decoding. Iterative decoding is a suboptimal method to solve systems of linear equations: it only solves equations with exactly one unknown and propagates each solution to the remaining equations. Inactivation decoding uses the same principle; however, whenever no equation with exactly one unknown is present, instead of declaring a decoding failure, some unknowns are declared inactive and decoding is resumed. At the very end, the unknowns declared inactive are solved by Gaussian elimination. This yields an ML decoding algorithm with a complexity not much higher than that of iterative decoding.

In this research project, we have analyzed inactivation decoding of LT and Raptor codes to obtain the number of unknowns that have to be declared inactive in order to complete decoding.
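The peel-or-inactivate procedure can be sketched on the structure of the system alone (which unknowns appear in which equation), which is enough to count inactivations. The function name and the "most connected unknown" inactivation heuristic below are illustrative assumptions, not the selection strategy analyzed in the thesis.

```python
def count_inactivations(equations):
    """Run the triangulation phase of inactivation decoding on the
    structure of a GF(2) system.  `equations` is a list of sets of
    unknown indices.  Peel equations with a single unknown whenever
    possible; when stuck, declare one unknown inactive (here: the one
    appearing in most equations -- a simple heuristic) and continue.
    Returns the number of inactivations."""
    eqs = [set(e) for e in equations if e]
    inactivations = 0
    while eqs:
        deg1 = next((e for e in eqs if len(e) == 1), None)
        if deg1 is not None:
            (x,) = deg1                     # solve the single unknown x ...
        else:
            # No degree-1 equation: inactivate the most connected unknown.
            counts = {}
            for e in eqs:
                for v in e:
                    counts[v] = counts.get(v, 0) + 1
            x = max(counts, key=counts.get)
            inactivations += 1
        eqs = [e - {x} for e in eqs]        # ... and remove x everywhere
        eqs = [e for e in eqs if e]         # drop fully reduced equations
    return inactivations

# A toy system over unknowns 0..3: peeling alone gets stuck once,
# so one inactivation is needed.
system = [{0}, {0, 1}, {1, 2, 3}, {2, 3}, {0, 2, 3}]
n_inact = count_inactivations(system)  # → 1
```

In a full decoder the inactive unknowns would be carried along symbolically and solved by Gaussian elimination at the end; for counting inactivations, removing them from the structure is equivalent.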

In contrast to the complexity, the performance of a fountain code (its probability of decoding failure) under ML decoding is a property of the code itself and does not depend on the particular decoding algorithm used (as long as it is maximum likelihood, ML). In this project we have also analyzed the performance of Raptor codes under ML decoding. In contrast to most other results in the literature, we have been able to derive upper bounds on the performance of Raptor codes that are generic, i.e., they apply to any Raptor code and not only to a specific construction.
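For illustration, the probability of decoding failure under ML decoding can be estimated by simulation: ML decoding fails exactly when the received system of GF(2) equations has rank below k. The sketch below assumes a dense random binary fountain code (each output packet combines a uniformly random subset of the inputs), not a Raptor code, and the function names are ours.

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) of a binary matrix given as integer bitmask rows,
    computed by keeping a basis with distinct leading bits."""
    basis = {}                          # leading bit position -> basis row
    for row in rows:
        while row:
            h = row.bit_length() - 1
            if h in basis:
                row ^= basis[h]         # reduce by the existing basis row
            else:
                basis[h] = row          # new independent row
                break
    return len(basis)

def failure_estimate(k, delta, trials, rng):
    """Monte Carlo estimate of P(ML decoding fails) for a dense random
    binary fountain code: each of the m = k + delta received packets is
    the XOR of a uniformly random subset of the k inputs, and ML
    decoding fails iff the m x k system has rank < k."""
    m = k + delta
    failures = 0
    for _ in range(trials):
        rows = [rng.getrandbits(k) for _ in range(m)]
        if gf2_rank(rows) < k:
            failures += 1
    return failures / trials

rng = random.Random(1)
p_fail = failure_estimate(k=20, delta=5, trials=200, rng=rng)
```

For such a dense random code the failure probability decays roughly as 2^(-δ) in the overhead δ; the generic upper bounds derived in the thesis characterize this behavior analytically for Raptor codes, without simulation.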

The analysis carried out opens the door to the design of fountain codes with a good complexity-performance tradeoff.