1:00 PM – 2:00 PM
In recent years, mobile devices have been equipped with increasingly advanced computing capabilities, opening up countless possibilities for meaningful applications. Traditional cloud-based Machine Learning (ML) approaches require data to be centralized in a cloud server or data center, which results in critical issues of unacceptable latency and communication inefficiency. To this end, multi-access edge computing (MEC) has been proposed to bring intelligence closer to the edge, where the data is originally generated. However, conventional edge ML technologies still require personal data to be shared with edge servers. Recently, in light of increasing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train a local ML model required by the server, and then send the local model updates, rather than raw data, to the server for aggregation. FL can serve as an enabling technology in MEC, since it allows the collaborative training of an ML model and also enables ML for mobile edge network optimization. However, in a large-scale and complex mobile edge network, FL still faces implementation challenges with regard to communication costs and resource allocation. In this talk, we begin with an introduction to the background and fundamentals of FL. We then discuss several approaches to these implementation challenges, such as unsupervised FL and matching-game-based multi-task FL. Finally, we study the extension to Federated Analytics (FA), with potential applications such as federated skewness analytics and federated anomaly detection.
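The local-train-then-aggregate loop described in the abstract can be sketched with a minimal FedAvg-style round; the linear model, toy client datasets, and sample-count weighting below are illustrative assumptions, not the speaker's implementation.

```python
# Minimal federated averaging sketch (hypothetical setup): each client fits
# a 1-D linear model y = w * x on its own local data, then the server
# aggregates the locally trained weights, weighted by each client's sample
# count. Raw data never leaves the client; only the model update is shared.

def local_update(w, data, lr=0.01, epochs=20):
    """One client's local training: gradient descent on squared error."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fedavg(global_w, client_datasets, rounds=10):
    """Server loop: broadcast the global weight, aggregate local updates."""
    for _ in range(rounds):
        updates = [(local_update(global_w, d), len(d)) for d in client_datasets]
        total = sum(n for _, n in updates)
        # Weighted average of client weights, proportional to data size.
        global_w = sum(w * n for w, n in updates) / total
    return global_w

# Toy data: both clients follow y = 2x but observe different input ranges,
# mimicking the non-identical local distributions typical of edge devices.
clients = [
    [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)],
    [(x, 2.0 * x) for x in (4.0, 5.0)],
]
w = fedavg(0.0, clients)
print(round(w, 2))  # prints 2.0
```

In practice the scalar weight would be a full parameter vector and clients would be sampled per round, but the communication pattern (model updates up, aggregated model down) is the same one that drives the communication-cost challenges the talk addresses.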
In-Person: UT Austin, EER 3.640/3.642 – (North Tower, "ExxonMobil Longhorn Room")
Virtual Access: https://utexas.zoom.us/j/94632832903