Combinatorial design theory has its roots in recreational mathematics and is concerned with arranging the elements of a finite set into subsets so that the collection of subsets satisfies certain “nice” properties. In this talk we shall demonstrate that interpreting designs in the right manner yields improved solutions for distributed storage and content caching, as well as novel impossibility results for distributed function computation.
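As a concrete illustration of such a "nice" property, consider the Fano plane, a 2-(7,3,1) design: seven points arranged into seven 3-element blocks so that every pair of points lies together in exactly one block. The minimal sketch below checks this balance condition; the point labels and the `FANO_BLOCKS`/`pair_coverage` names are ours, chosen only for illustration.

```python
from itertools import combinations

# Blocks of the Fano plane, the 2-(7,3,1) design on points {0,...,6}:
# 7 points, 7 blocks of size 3, every pair of points in exactly one block.
FANO_BLOCKS = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]

def pair_coverage(blocks, points=range(7)):
    """Number of blocks containing each pair of points."""
    return {pair: sum(1 for blk in blocks if set(pair) <= blk)
            for pair in combinations(points, 2)}

# Every pair is covered exactly once -- the balance property that makes
# this collection of subsets a design.
assert all(count == 1 for count in pair_coverage(FANO_BLOCKS).values())
```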
Regenerating codes have been proposed as an efficient mechanism for dealing with the problem of reliability in large-scale distributed storage systems. These systems also have additional requirements pertaining to repair: when nodes fail, the system needs to be repaired speedily while consuming as few resources (number of drives accessed, energy, etc.) as possible. We will demonstrate that combinatorial designs allow us to design efficient systems that, upon failure, can be repaired by simply downloading packets from the surviving nodes.
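As a hedged illustration of this kind of repair, the toy sketch below treats the Fano plane blocks as storage nodes and its points as packets, so every packet is replicated on exactly three nodes; a failed node is then rebuilt by downloading its packets verbatim from surviving nodes, with no decoding at the helpers. The layout and the `repair` helper are our own toy construction in the spirit of fractional-repetition-style codes, not the specific schemes of the talk.

```python
# Toy storage layout: the seven Fano-plane blocks act as storage nodes and the
# seven points {0,...,6} act as packets, so each packet sits on exactly three
# nodes.  (Hypothetical illustration only.)
FANO_BLOCKS = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]
NODES = {f"node{i}": blk for i, blk in enumerate(FANO_BLOCKS)}

def repair(failed: str) -> dict:
    """Plan the repair of a failed node: each lost packet is simply downloaded,
    unmodified, from some surviving node that stores a copy of it."""
    plan = {}
    for pkt in NODES[failed]:
        helper = next(name for name, blk in NODES.items()
                      if name != failed and pkt in blk)
        plan[pkt] = helper
    return plan

print(repair("node0"))  # e.g. {0: 'node1', 1: 'node3', 2: 'node5'}
```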
Next, we will show that designs can be used to construct a family of directed acyclic networks that have several interesting properties. In particular, our work shows that the computation rate of such networks depends significantly on the source alphabet. This is in stark contrast to multiple-unicast networks, where the rate is independent of the source alphabet. We will conclude with an overview of the role of coding in content caching networks and our recent results on how combinatorial designs can play a central role in making coded caching more practical.
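For orientation, here is a toy coded-caching example in the classic two-user, two-file setting with half-size caches, where a single coded broadcast serves both users. This is only a minimal sketch of the basic idea (the file names, cache contents, and `xor` helper are ours), not the design-based placements discussed in the talk.

```python
# Classic two-user, two-file coded-caching toy: each file is split into halves,
# user 1 caches the first halves, user 2 caches the second halves, and one
# coded broadcast then serves both requests.
def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

A = (b"A1", b"A2")                 # file A, split into halves A1, A2
B = (b"B1", b"B2")                 # file B, split into halves B1, B2

cache1 = {"A": A[0], "B": B[0]}    # placement at user 1 (first halves)
cache2 = {"A": A[1], "B": B[1]}    # placement at user 2 (second halves)

# Delivery phase: user 1 requests file A, user 2 requests file B.
coded = xor(A[1], B[0])            # single broadcast: A2 XOR B1

assert xor(coded, cache1["B"]) == A[1]   # user 1 cancels B1, recovers A2
assert xor(coded, cache2["A"]) == B[0]   # user 2 cancels A2, recovers B1
```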
The talk will be self-contained; no background in combinatorial designs or network coding will be assumed.