Agenda at a Glance

The agenda will continue to be updated as we get closer to the workshop.

Day 1 - Wednesday, October 20th

*Please join at 8:55AM PT for day-of instructions as we will start the sessions promptly at 9AM PT

8:55* - 9:00am PT | Welcome by Mehryar Mohri & Kristen Konrad 

9:00am-9:12am PT | Talk by Ohad Shamir - Elephant in the Room: Non-Smooth Non-Convex Optimization

9:12am-9:24am PT | Talk by Jean-Philippe Vert - Framing RNN as a kernel method: A neural ODE approach

9:24am-9:36am PT | Talk by Matus Telgarsky - Natural policy gradient is implicitly biased towards high entropy optimal policies

9:36am-9:48am PT | Talk by Surbhi Goel - What functions do self-attention blocks prefer to represent?

9:48am-10:00am PT | Talk by Lechao Xiao - Eigenspace Restructuring: a Principle of Space and Frequency in Neural Networks

10:00-10:15am PT | Break in Gather.Town

10:15-10:27am PT | Talk by Nicolo Cesa-Bianchi - Multitask Online Mirror Descent

10:27-10:39am PT | Talk by Greg Valiant - New Problems and Perspectives on Sampling, Learning, and Memory

10:39-10:51am PT | Talk by Steve Hanneke - A Theory of Universal Learning

10:51-11:03am PT | Talk by Ellen Vitercik - Theoretical Foundations of Data-Driven Algorithm Design

11:03-11:15am PT | Talk by Sergei Vassilvitskii - Warm Start with Predictions

11:15-11:30am PT | Break in Gather.Town

11:30-11:42am PT | Talk by Wen Sun - Efficient Reinforcement Learning via Representation Learning

11:42-11:54am PT | Talk by Ashok Cutkosky - Online Learning with Hints

11:54-12:06pm PT | Talk by Xinyi Chen - Provable Regret Bounds for Deep Online Learning and Control

12:06-12:18pm PT | Talk by Praneeth Netrapalli - Streaming Estimation with Markovian Data: Limits and Algorithms

12:18-12:30pm PT | Talk by Akshay Krishnamurthy - Efficient first-order contextual bandits

12:30-1:00pm PT | Breakout discussions led by Elad Hazan (deep learning), Ananda Theertha Suresh (privacy), Gergely Neu (RL), Chris Dann (RL), Satyen Kale (optimization), Sergei Vassilvitskii (privacy), Matus Telgarsky (deep learning), Jamie Morgenstern (fairness), Greg Valiant (generalization), Jacob Abernethy (fairness)

Day 2 - Thursday, October 21st

*Please join at 8:55AM PT for day-of instructions as we will start the sessions promptly at 9AM PT

8:55*-9:00am PT | Welcome by Pranjal Awasthi

9:00am-9:12am PT | Talk by Peter Kairouz - Distributed Differential Privacy for Federated Learning

9:12am-9:24am PT | Talk by Varun Kanade - The Statistical Complexity of Early-Stopped Mirror Descent

9:24am-9:36am PT | Talk by Quanquan Gu - Benign Overfitting of Constant-Stepsize SGD for Linear Regression

9:36am-9:48am PT | Talk by Satyen Kale - A Deep Conditioning Treatment of Neural Networks

9:48am-10:00am PT | Talk by Jascha Sohl-Dickstein - Learned optimizers: why they're hard and why they're the future

10:00-10:15am PT | Break in Gather.Town

10:15-10:45am PT | Keynote by Jon Kleinberg

10:45-10:50am PT | Break

10:50-11:02am PT | Talk by Peter Bartlett - Adversarial examples in deep networks

11:02-11:14am PT | Talk by Aravindan Vijayaraghavan - Algorithms for learning depth-2 neural networks with general ReLU activations

11:14-11:26am PT | Talk by Ananda Theertha Suresh - Learning with user-level differential privacy

11:26-11:38am PT | Talk by Raman Arora - Machine Unlearning via Algorithmic Stability

11:38-11:50am PT | Talk by Jamie Morgenstern - Individualization, persuasion, and polarization

11:50 - 12:00pm PT | Break in Gather.Town

12:00 - 12:12pm PT | Talk by Jon Schneider - Strategizing Against No-Regret Learners

12:12 - 12:24pm PT | Talk by Naman Agarwal - A Regret Minimization Approach to Iterative Learning Control

12:24 - 12:36pm PT | Talk by Wouter Koolen - A/B/n Testing with Control in the Presence of Subpopulations

12:36 - 12:48pm PT | Talk by Haipeng Luo - The Best of Both Worlds: Stochastic and Adversarial Episodic MDPs with Unknown Transition

12:48 - 1:00pm PT | Talk by Chris Dann - Agnostic RL in Low-Rank MDPs with Rich Observations

1:00pm PT | Closing