IST LUNCH BUNCH
Data center networks interconnect massive server farms used to process big data for a wide variety of applications. In such networks, resource allocation algorithms distribute computing and network resources among competing data-processing tasks, with the main objective of ensuring very small latencies for delay-critical applications. These algorithms operate at different time scales: the slow time scale of jobs, the intermediate time scale of flows (communication messages between parallel jobs), and the fast time scale of packets.

In the first part of the talk, we will present an overview of the architecture of data center networks and the various resource allocation problems that arise in them. In the second part, we will discuss a long-standing open problem at the intersection of algorithms, probability, and control theory, which has resurfaced in the context of resource allocation in data center networks. We will present our recent solution to one version of this open problem, based on a new mathematical technique we developed for understanding the performance of algorithms for high-dimensional resource allocation problems.
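The abstract does not name a specific allocation policy, but as an illustrative sketch of the kind of fast-timescale rule studied in this literature, here is a minimal Python example of join-the-shortest-queue dispatch and its "power of d choices" variant, two classic load-balancing policies for low latency in large server farms. All function names and parameters are hypothetical, chosen only for illustration.

```python
import random

def jsq_dispatch(queue_lengths):
    """Join-the-shortest-queue: send an arriving task to a server with
    the fewest queued tasks, breaking ties uniformly at random."""
    shortest = min(queue_lengths)
    candidates = [i for i, q in enumerate(queue_lengths) if q == shortest]
    return random.choice(candidates)

def power_of_d_dispatch(queue_lengths, d=2):
    """Power-of-d-choices: probe only d servers sampled uniformly at
    random and send the task to the least loaded of those, trading a
    little latency for far less state to inspect per arrival."""
    sampled = random.sample(range(len(queue_lengths)), d)
    return min(sampled, key=lambda i: queue_lengths[i])
```

Analyzing such policies as the number of servers grows large is exactly the kind of high-dimensional performance question the talk addresses.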