Sometimes centrally collecting data produced by edge devices, such as mobile phones, wearables, or cars, is infeasible or undesirable. With federated learning and analytics, clients collaboratively train a model or compute an analytic (a statistic) under the coordination of a server, while keeping the training data decentralized and mitigating privacy risks.
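To make the idea concrete, here is a minimal, self-contained sketch of federated averaging on a toy one-parameter model, written in plain Python (deliberately not using any federated learning framework). All function names, the learning rate, and the client data are illustrative assumptions; the point is only that clients train locally and share model updates, never their raw data.

```python
# Illustrative sketch of federated averaging (FedAvg) on a scalar model.
# Clients run local SGD on their own data; only model parameters (not raw
# data) are sent to the server, which computes a weighted average.

def client_update(weights, local_data, lr=0.1, epochs=5):
    """One client's local SGD on a toy 1-D least-squares model (hypothetical task)."""
    w = weights
    for _ in range(epochs):
        for x, y in local_data:
            grad = 2 * (w * x - y) * x  # gradient of (w*x - y)^2
            w -= lr * grad
    return w

def server_round(weights, client_datasets):
    """Server averages the clients' models, weighted by local dataset size."""
    updates = [(client_update(weights, d), len(d)) for d in client_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

# Each client holds data consistent with y = 2x; the data never leaves the client.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(1.5, 3.0)],
    [(0.5, 1.0), (2.0, 4.0)],
]
w = 0.0
for _ in range(20):
    w = server_round(w, clients)
```

After a few rounds the global parameter `w` converges near the true slope 2.0, even though the server never observes any client's `(x, y)` pairs.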
This workshop will present the latest research from Google in an accessible, hands-on manner. The topics will include:
- [Federated Optimization] How to design and evaluate federated optimization algorithms, with a focus on best practices and baseline federated optimization tasks.
- [Personalization] An overview of personalization approaches in federated learning, introducing Federated Reconstruction (FedRecon), a method for performing partially local federated learning.
- [Differential Privacy] Training federated learning models with differential privacy under real-world system constraints.
- [Federated Analytics] The emerging topic of federated analytics, including how to discover heavy hitters in a federated fashion and under privacy constraints.
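As a taste of the last two topics, the sketch below shows one simple way frequent items ("heavy hitters") can be estimated under local differential privacy, using k-ary randomized response. This is a hedged, self-contained illustration in plain Python, not the mechanism covered in the workshop: the domain, epsilon, and client data are all made up, and production systems use far more scalable schemes for large item domains.

```python
import math
import random
from collections import Counter

# Sketch of heavy-hitter estimation under local differential privacy
# via k-ary randomized response (k-RR). Each client perturbs its own
# item before reporting, so the server never sees true values directly.

def krr_report(item, domain, epsilon, rng):
    """Report the true item with probability p, else a uniform other item."""
    k = len(domain)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p:
        return item
    return rng.choice([d for d in domain if d != item])

def estimate_counts(reports, domain, epsilon):
    """Debias the noisy report counts into unbiased frequency estimates."""
    k = len(domain)
    n = len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = (1 - p) / (k - 1)  # probability of reporting any specific other item
    raw = Counter(reports)
    return {d: (raw[d] - n * q) / (p - q) for d in domain}

rng = random.Random(0)
domain = ["apple", "banana", "cherry", "date"]
# 10,000 hypothetical clients, with "apple" the true heavy hitter.
true_items = ["apple"] * 6000 + ["banana"] * 2500 + ["cherry"] * 1000 + ["date"] * 500
reports = [krr_report(x, domain, epsilon=2.0, rng=rng) for x in true_items]
est = estimate_counts(reports, domain, epsilon=2.0)
heavy = max(est, key=est.get)
```

With enough clients, the debiased estimates recover the true ranking, and `heavy` comes out as `"apple"`, while any single client's report remains plausibly deniable.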
In addition to presenting our latest research on the topics above, we will show you how to use TensorFlow Federated (TFF), an open-source framework for machine learning and other computations on decentralized data, to explore federated learning and analytics. You will see simple examples of how TFF can be used to enable new research. After this tutorial, you will be equipped to experiment further with federated learning and analytics on your own.
Last but not least, you will have an opportunity to ask research and TFF questions in a dedicated Q&A session. [Registration is now closed]