Learning the structure of graphical models from data is a fundamental task in many scientific domains. I will describe the analysis and applications of learning tree-structured graphical models via the well-known Chow-Liu algorithm (1968). The Chow-Liu algorithm learns the maximum-likelihood (ML) tree structure from independently drawn observations of a multivariate discrete tree distribution. Using the theory of large deviations, we analyze the error exponent of the event that the ML estimate of the Markov tree structure differs from the true tree. By exploiting the tree structure, we establish that the error exponent equals the exponential rate of decay of a single dominant crossover event. Using ideas from Euclidean information theory, we then analyze ML estimation in the very-noisy learning regime and show that the error exponent can be approximated by a signal-to-noise ratio (SNR) for learning tree distributions. Extensions to Gaussian graphical models also allow us to quantify the relative ease of learning certain classes of trees. More specifically, we are able to conclude that star graphs are the "hardest" to learn, while Markov chains are the "easiest" to learn.
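As a concrete sketch (not from the talk itself), the Chow-Liu algorithm reduces to a maximum-weight spanning tree computation: score every pair of variables by its empirical mutual information and keep the heaviest acyclic edge set. The Python below is a minimal illustration under that reading; the function names and the (n_samples, n_vars) integer data layout are assumptions, not the speakers' code.

```python
# Minimal Chow-Liu sketch: the ML tree is the maximum-weight spanning
# tree of the complete graph weighted by empirical mutual information.
# Assumes `data` is an (n_samples, n_vars) array of discrete values.
import numpy as np


def empirical_mutual_information(x, y):
    """Plug-in estimate of I(X; Y) in nats from paired samples."""
    _, x_idx = np.unique(x, return_inverse=True)
    _, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((x_idx.max() + 1, y_idx.max() + 1))
    np.add.at(joint, (x_idx, y_idx), 1.0)      # joint counts
    joint /= joint.sum()                        # joint pmf
    px = joint.sum(axis=1, keepdims=True)       # marginals
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (px @ py)[mask])).sum())


def chow_liu_tree(data):
    """Return the ML tree's edge list via Kruskal on MI weights."""
    n = data.shape[1]
    edges = sorted(
        ((empirical_mutual_information(data[:, i], data[:, j]), i, j)
         for i in range(n) for j in range(i + 1, n)),
        reverse=True,                           # heaviest edges first
    )
    parent = list(range(n))                     # union-find forest

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]       # path halving
            u = parent[u]
        return u

    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                            # keeps the graph acyclic
            parent[ri] = rj
            tree.append((i, j))
    return tree


# Toy check: X2 is a noisy copy of X0, X1 is independent noise,
# so the learned tree should contain the edge (0, 2).
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, size=2000)
x1 = rng.integers(0, 2, size=2000)
x2 = (x0 + (rng.random(2000) < 0.1)) % 2
print(chow_liu_tree(np.column_stack([x0, x1, x2])))
```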
I will also discuss extensions to learning latent tree models, i.e., models in which only a subset of the variables are observed. We derive consistent algorithms with low computational and sample complexity that infer the latent tree structure without any prior knowledge of the number of hidden variables. In addition, we demonstrate the applicability of our methods on real-world datasets by modeling the dependency structure of monthly stock returns in the S&P index and of the words in the 20 Newsgroups dataset.
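As one hedged illustration of the kind of test latent tree algorithms can build on (the talk's actual algorithms are not reproduced here), pairwise information distances on a tree are additive along paths, so differences of distances to witness nodes reveal whether two observed variables are siblings under a possibly hidden parent. The matrix `d`, the function name, and the classification labels below are hypothetical.

```python
# Hedged sketch of a sibling/parent test on an additive tree metric.
# `d` is a symmetric matrix of information distances between observed
# nodes (e.g., d_ij = -log|rho_ij| in the Gaussian case); exact
# additivity is assumed, so real data would need a looser `tol`.
import numpy as np


def classify_pair(d, i, j, witnesses, tol=1e-9):
    """Classify nodes i, j from Phi(i, j; k) = d[i, k] - d[j, k].

    On a tree metric, for every witness k outside the pair:
      Phi == +d[i, j]            -> j sits on all i-to-k paths (j is i's parent)
      Phi == -d[i, j]            -> i is j's parent
      Phi constant, |Phi| < d_ij -> i, j are siblings under a common
                                    (possibly hidden) parent
      Phi varies with k          -> i, j do not form a family
    """
    phi = np.array([d[i, k] - d[j, k] for k in witnesses])
    if np.ptp(phi) > tol:                  # Phi not constant across witnesses
        return "unrelated"
    v = float(phi.mean())
    if abs(v - d[i, j]) <= tol:
        return "j is parent of i"
    if abs(v + d[i, j]) <= tol:
        return "i is parent of j"
    return "siblings under a (possibly hidden) parent"


# Toy check: a star with hidden center h and leaves 0, 1, 2 at unit
# distance from h gives d = 2 between every pair of leaves, so any two
# leaves are classified as siblings under a hidden parent.
d = np.full((3, 3), 2.0)
np.fill_diagonal(d, 0.0)
print(classify_pair(d, 0, 1, witnesses=[2]))
```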
Joint work with Alan Willsky (MIT), Lang Tong (Cornell), Anima Anandkumar (UCI), and Myung Jin Choi (MIT).