Past Events
Nov. 18, 2015, All Day
Abstract: The Intensive Care Unit (ICU) is playing an expanding role in acute hospital care, but the value of many treatments and interventions in the ICU is unproven, and high-quality data supporting or discouraging specific practices are sparse. Much prior work in clinical modeling has focused on building discriminative models to detect specific coded outcomes (e.g., hospital mortality) under specific settings, or on understanding the predictive value of various types of clinical information without taking interventions into account.
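As a minimal illustration of the kind of discriminative outcome model the abstract refers to (and not the speaker's method or data), the sketch below fits a logistic regression for a binary coded outcome such as hospital mortality; all features and parameters are synthetic stand-ins.

```python
import numpy as np

# Synthetic stand-ins for ICU features (vitals, labs, etc.); these are
# hypothetical and carry no clinical meaning.
rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))                    # standardized features
w_true = np.array([1.5, -2.0, 0.5, 0.0, 1.0])  # arbitrary ground truth
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

# Logistic regression fit by plain gradient descent: a discriminative
# model for a single coded binary outcome, ignoring interventions.
w = np.zeros(d)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))    # predicted outcome probabilities
    w -= 0.1 * (X.T @ (p - y)) / n  # gradient of the average logistic loss

print("recovered weights:", np.round(w, 2))
```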
Nov. 16, 2015, All Day
Abstract: Ubiquitous sensors generate prohibitively large data sets. Large volumes of such data are now produced by a variety of applications, including imaging platforms, mobile devices, surveillance cameras, social networks, and power networks. In this era of data deluge, it is of paramount importance to gather only the data that is informative for a specific task, in order to limit the required sensing cost as well as the related costs of storing, processing, and communicating the data.
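One common formalization of gathering only task-informative data is greedy measurement selection by marginal information gain. The sketch below is an illustrative instance of that idea (not the talk's specific framework), using a log-determinant criterion over synthetic candidate measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, k = 200, 10, 5
A = rng.normal(size=(m, d))  # m candidate measurement vectors (rows)

def informativeness(S):
    """Log-determinant of I + A_S^T A_S, a standard proxy for how much
    a subset S of measurements reveals about a d-dimensional signal."""
    As = A[S]
    return np.linalg.slogdet(np.eye(d) + As.T @ As)[1]

# Greedy selection: keep only k of the m available measurements by
# repeatedly adding the one with the largest marginal gain.
S = []
for _ in range(k):
    S.append(max((i for i in range(m) if i not in S),
                 key=lambda i: informativeness(S + [i])))

print("selected measurement indices:", S)
```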
Nov. 13, 2015, All Day
Abstract: Submodular functions capture a wide spectrum of discrete problems in machine learning, signal processing and computer vision. They are characterized by intuitive notions of diminishing returns and economies of scale, and often lead to practical algorithms with theoretical guarantees.
In the first part of this talk, I will give a general introduction to the concept of submodular functions, their optimization and example applications in machine learning.
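As a concrete instance, a coverage objective exhibits diminishing returns and is monotone submodular, and the classic greedy algorithm of Nemhauser, Wolsey and Fisher maximizes any such function under a cardinality constraint to within a (1 - 1/e) factor. A toy sketch (not an example from the talk):

```python
# Each candidate set covers some elements; f(S) counts distinct elements
# covered. Adding a set helps less as S grows: diminishing returns.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}

def f(S):
    return len(set().union(*(sets[i] for i in S))) if S else 0

# Greedy: repeatedly add the set with the largest marginal gain. For
# monotone submodular f this achieves a (1 - 1/e) approximation.
k, S = 2, []
for _ in range(k):
    S.append(max((i for i in sets if i not in S),
                 key=lambda i: f(S + [i]) - f(S)))

print(S, f(S))  # -> [0, 2] 6: covers six distinct elements
```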
Nov. 6, 2015, All Day
Abstract: Many machine learning tasks can be posed as structured prediction, where the goal is to predict a labeling or structured object. For example, the input may be an image or a sentence, and the output is a labeling such as an assignment of each pixel in the image to foreground or background, or the parse tree for the sentence.
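For chain-structured outputs, the jointly optimal labeling can be decoded exactly by dynamic programming. The following sketch runs Viterbi decoding over random stand-in potentials; it illustrates structured prediction in general rather than any model from the talk.

```python
import numpy as np

rng = np.random.default_rng(2)
T, L = 6, 3                      # sequence length, number of labels
unary = rng.normal(size=(T, L))  # score of label l at position t
pair = rng.normal(size=(L, L))   # score of transition l -> l'

# best[t, l]: best score of any labeling of positions 0..t ending in l.
best = unary.copy()
back = np.zeros((T, L), dtype=int)
for t in range(1, T):
    scores = best[t - 1][:, None] + pair  # indexed by (prev, next) label
    back[t] = scores.argmax(axis=0)
    best[t] = unary[t] + scores.max(axis=0)

# Trace back the argmax labeling: a joint prediction over the whole
# sequence, not independent per-position decisions.
labels = [int(best[-1].argmax())]
for t in range(T - 1, 0, -1):
    labels.append(int(back[t][labels[-1]]))
labels.reverse()
print("MAP labeling:", labels)
```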
Oct. 23, 2015, All Day
Abstract: Sampling is a standard approach to big-graph analytics, but a good sample needs to represent graph properties of interest with a known degree of accuracy. This talk describes a generic tunable sampling framework, graph sample and hold, that applies to graph stream sampling, in which edges are presented one at a time and from which unbiased estimators of graph properties can be produced in post-processing. The talk also describes the performance of the method on various types of graphs, including social graphs.
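A stripped-down illustration of the sample-and-hold idea, assuming a single holding probability p rather than the full tunable framework: hold each streamed edge independently with probability p, then divide held counts by their inclusion probability in post-processing (Horvitz-Thompson reweighting) to obtain an unbiased estimate.

```python
import random

def sample_and_hold(edge_stream, p, seed=0):
    """Hold each arriving edge independently with probability p and
    return the held edges plus an unbiased estimate of the stream's
    total edge count (held count divided by p)."""
    random.seed(seed)
    held = [e for e in edge_stream if random.random() < p]
    return held, len(held) / p

# Toy stream of 10,000 edges; the estimate concentrates near the truth.
stream = [(i, i + 1) for i in range(10_000)]
held, est = sample_and_hold(stream, p=0.1)
print(f"held {len(held)} edges; estimated total {est:.0f} (true 10000)")
```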
Watch the full presentation on the WNCG YouTube Channel.
Oct. 16, 2015, All Day
The 13th annual Texas Wireless Summit (TWS) provides a forum on emerging technology and business models for industry leaders and academics. Hosted by the University of Texas at Austin's Wireless Networking and Communications Group (WNCG), the Summit offers direct access to cutting-edge research and innovations from industry leaders, investors, academics and startups.
Oct. 7, 2015, All Day
Abstract: Given samples from an unknown distribution, p, is it possible to distinguish whether p belongs to some class of distributions C versus p being far from every distribution in C, by at least ε in total variation distance? This fundamental question has received substantial attention in Statistics and Computer Science. Nevertheless, even for basic classes of distributions such as monotone, log-concave, unimodal, or product, the optimal sample complexity is unknown. We provide optimal testers for these families.
(joint work with Jayadev Acharya and Gautam Kamath).
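For intuition about the quantity being tested, the sketch below computes total variation distance and runs a naive plug-in identity test against a uniform hypothesis. The talk's testers are far more sample-efficient; this example only illustrates the problem setup.

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two pmfs: half the L1 norm."""
    return 0.5 * np.abs(p - q).sum()

rng = np.random.default_rng(3)
k, n, eps = 10, 5000, 0.1
q = np.full(k, 1 / k)                 # hypothesis distribution: uniform
samples = rng.integers(0, k, size=n)  # here the true p is indeed uniform

# Plug-in test: accept if the empirical pmf is close to q, reject if it
# is far; the eps/2 threshold splits the two cases of the problem.
p_hat = np.bincount(samples, minlength=k) / n
print("accept" if tv_distance(p_hat, q) < eps / 2 else "reject")
```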
Sept. 25, 2015, All Day
We consider the task of summing (integrating) a non-negative function over a discrete domain, e.g., to compute the partition function of a graphical model.
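On a small domain the sum can be evaluated by brute force, which makes the object itself concrete. The sketch below enumerates all 2^n configurations of a tiny Ising chain with arbitrary parameters and sums their non-negative weights to obtain the partition function Z; it illustrates the problem statement, not the talk's method.

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
n = 10
h = rng.normal(size=n)      # unary fields, one per variable
J = rng.normal(size=n - 1)  # couplings between chain neighbors

def energy(x):
    s = np.asarray(x, dtype=float) * 2 - 1  # map {0,1} to spins {-1,+1}
    return -(h @ s) - (J @ (s[:-1] * s[1:]))

# Z sums the unnormalized non-negative weight exp(-E(x)) over the full
# discrete domain: 2^10 = 1024 configurations here.
Z = sum(np.exp(-energy(x)) for x in itertools.product([0, 1], repeat=n))
print(f"partition function Z = {Z:.3f}")
```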
Sept. 18, 2015, All Day
Fitting a low-rank matrix to data is a fundamental and widely used primitive in machine learning. For most problems beyond basic PCA, theoretically sound methods have overwhelmingly combined statistical models of the data with convex optimization. As the size and dimensionality of data increase, this approach becomes computationally wasteful, not least because it represents an nr-dimensional object with n^2 parameters.
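The alternative this line of work pursues is to represent the rank-r matrix directly by its factors, roughly nr parameters instead of n^2, at the price of a non-convex problem. A minimal sketch of that factored approach on a synthetic instance (plain gradient descent, not the speaker's specific algorithm):

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 50, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))  # rank-r target

# Parameterize the estimate as U V^T: 2nr numbers instead of n^2, and
# run gradient descent on the squared Frobenius error (non-convex).
U = 0.1 * rng.normal(size=(n, r))
V = 0.1 * rng.normal(size=(n, r))
lr = 0.5 / np.linalg.norm(M, 2)  # step size scaled by spectral norm
for _ in range(3000):
    R = U @ V.T - M  # residual
    U, V = U - lr * (R @ V), V - lr * (R.T @ U)

print("relative error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))
```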