Agenda
Welcoming Remarks &
Overview of the Google Developers Program in North America
About Kyle
Kyle works for Google in Mountain View, California. His job on the Developer Relations team is to support awesome developer communities, like the Google Developer Experts and Developer Student Clubs. In his spare time he enjoys building and hacking on the web, playing with his two cats, and photographing the outdoors. Before Google, Kyle was a startup founder, organizer for GDG Kansas City, and a Google Developer Expert (GDE).
Time-Series Forecasting using the Google Cloud AI Platform
Predicting the future has always been a fascinating topic. Now we have AI tools and techniques that can help us do it better than ever before. In this session, we'll cover the fundamentals of solving time-series problems with AI, and show how it can be done with popular data science tools such as Pandas, TensorFlow, and the Google Cloud AI Platform. We'll start with how to visualize, transform, and split time-series data for use in an ML model. We'll also discuss both statistical and machine learning techniques for predictive analytics. Finally, we'll show how to train a demand forecasting model in the cloud and make predictions with it. Attendees can access Jupyter notebooks after the session to review the material in more detail.
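To give a flavor of the data-preparation step described above, here is a minimal Pandas sketch of a chronological train/test split with simple lag features. The file name demand.csv and the column names date and sales are hypothetical stand-ins for whatever demand data the session's notebooks use.

```python
import pandas as pd

# Hypothetical daily demand data with 'date' and 'sales' columns.
df = pd.read_csv("demand.csv", parse_dates=["date"])
df = df.set_index("date").sort_index()

# Simple time-series features: a weekly lag and a four-week rolling mean.
df["sales_lag_7"] = df["sales"].shift(7)
df["sales_roll_28"] = df["sales"].rolling(28).mean()
df = df.dropna()

# Split chronologically rather than randomly, so the model is never
# trained on observations that come after the evaluation period.
split_point = df.index.max() - pd.Timedelta(days=90)
train = df.loc[:split_point]
test = df.loc[split_point + pd.Timedelta(days=1):]
```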
About Karl
Karl Weinmeister is a Cloud AI Advocacy Manager at Google, where he leads a team of data science experts who develop content and engage with communities worldwide. Karl has worked extensively in machine learning and cloud technologies. He was a contributor to one of the first AI-based crossword puzzle solvers that is still referenced today.
Scaling your TFX Training Pipeline using TPU Pods
TFX is a Google-production-scale machine learning (ML) platform based on TensorFlow. Scaling your TFX training pipeline is a must for certain use cases and industry scenarios. Traditionally, ML engineers on GCP can use TPUs on a cluster of Cloud TPU VM worker instances. In this talk, I will show how to scale your TFX training pipeline using TPU Pods v2 and v3 via AI Platform Jobs. AI Platform Jobs offers an on-demand, serverless approach to managing training tasks, costs, and resource consumption.
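As a rough, non-authoritative sketch of the idea, the snippet below configures a TFX Trainer to submit its training step to AI Platform with a TPU scale tier. It assumes a recent TFX release with the Google Cloud AI Platform extension; the upstream components (transform, schema_gen), the module file, project, and region are placeholders.

```python
from tfx.components import Trainer
from tfx.dsl.components.base import executor_spec
from tfx.extensions.google_cloud_ai_platform.trainer import executor as ai_platform_trainer_executor
from tfx.proto import trainer_pb2

trainer = Trainer(
    module_file="trainer_module.py",  # user-provided training code (placeholder)
    examples=transform.outputs["transformed_examples"],
    transform_graph=transform.outputs["transform_graph"],
    schema=schema_gen.outputs["schema"],
    train_args=trainer_pb2.TrainArgs(num_steps=100000),
    eval_args=trainer_pb2.EvalArgs(num_steps=10000),
    # Run the training step as an AI Platform job instead of locally.
    custom_executor_spec=executor_spec.ExecutorClassSpec(
        ai_platform_trainer_executor.GenericExecutor),
    custom_config={
        ai_platform_trainer_executor.TRAINING_ARGS_KEY: {
            "project": "my-gcp-project",   # placeholder
            "region": "us-central1",       # placeholder
            # BASIC_TPU requests a Cloud TPU worker; larger pod slices
            # would use a CUSTOM tier with TPU-specific machine settings.
            "scaleTier": "BASIC_TPU",
        }
    },
)
```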
About Carlos
Carlos is a Machine Learning Engineer at Pythian. He holds the Google Cloud Professional Cloud Architect and Data Engineer certifications and works as an AI/ML consultant to small and medium-sized businesses in the Americas. He currently builds and deploys ML systems in production using Google Cloud Platform. As a GDE in Machine Learning and a GDG Cloud organizer, he trains PhD students in Brazil and Portugal on data science.
11:17 am - 11:27 am
Intermission / Break
Driving better user experience via the reward function of RL-based recommenders
Can we design recommenders that encourage user trajectories aligned with the true underlying user utilities? Beyond engagement, user satisfaction and responsibility are important pillars of the recommendation problem. Motivated by this, we will discuss various efforts that use the reward function as an important lever in Reinforcement Learning (RL)-based recommenders, guiding the model to learn that, for certain states (i.e., the latent user representation at a certain point of the trajectory), certain actions (i.e., items to recommend) bring higher user utility than others. We will also outline current and future directions for overcoming the challenges of signal sparsity and the interplay among various reward signals.
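As a purely illustrative sketch of the lever being described (not any production system's actual reward), combining several per-interaction signals into a single scalar reward might look like the following; the signal names and weights are made up.

```python
def combined_reward(engagement, satisfaction, responsibility_penalty,
                    w_engage=1.0, w_satisfy=0.5, w_responsible=0.3):
    """Illustrative composite reward for an RL-based recommender.

    Each argument is a per-interaction signal (e.g., watch time, a
    survey-based satisfaction estimate, a penalty for low-quality
    items); the weights express how much each pillar shapes the
    learned policy.
    """
    return (w_engage * engagement
            + w_satisfy * satisfaction
            - w_responsible * responsibility_penalty)
```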
About Konstantina
I am a Research Engineer at Google, leading several efforts on recommender systems and reinforcement learning in Google Brain. My research and product engagements apply Reinforcement Learning to increase user satisfaction with Google products and to make these systems socially responsible.
ML Engineering for Production ML Deployments
Delivering the results of advanced Machine Learning technology to customers requires a rigorous approach and production-ready systems. This is especially true for maintaining and improving model performance over the lifetime of a production application. Unfortunately, the issues involved and approaches available are often poorly understood. An ML application in production must address all of the issues of modern software development methodology, as well as issues unique to ML and data science.

Often ML applications are developed using tools and systems which suffer from inherent limitations in testability, scalability across clusters, training/serving skew, and the modularity and reusability of components. In addition, ML application measurement often emphasizes top level metrics, leading to issues in model fairness as well as predictive performance across user segments.

We discuss the use of ML pipeline architectures for implementing production ML applications, and in particular we review Google’s experience with TensorFlow Extended (TFX), as well as the advantages of containerizing pipeline architectures using platforms such as Kubeflow. Google uses TFX for large scale ML applications, and offers an open-source version to the community. TFX scales to very large training sets and very high request volumes, and enables strong software methodology including testability, hot versioning, and deep performance analysis.
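To make the pipeline idea concrete, here is a minimal, hedged sketch of a TFX pipeline definition using the standard components and the local orchestrator. It assumes a recent TFX release; the data paths, module file, and pipeline names are placeholders, and the same definition could be compiled for an orchestrator such as Kubeflow Pipelines instead.

```python
from tfx.components import CsvExampleGen, Pusher, SchemaGen, StatisticsGen, Trainer
from tfx.orchestration import metadata, pipeline
from tfx.orchestration.local.local_dag_runner import LocalDagRunner
from tfx.proto import pusher_pb2, trainer_pb2

# Standard components, wired into a directed graph through their artifacts.
example_gen = CsvExampleGen(input_base="data/")
statistics_gen = StatisticsGen(examples=example_gen.outputs["examples"])
schema_gen = SchemaGen(statistics=statistics_gen.outputs["statistics"])
trainer = Trainer(
    module_file="trainer_module.py",  # user-provided training code (placeholder)
    examples=example_gen.outputs["examples"],
    schema=schema_gen.outputs["schema"],
    train_args=trainer_pb2.TrainArgs(num_steps=1000),
    eval_args=trainer_pb2.EvalArgs(num_steps=100),
)
pusher = Pusher(
    model=trainer.outputs["model"],
    push_destination=pusher_pb2.PushDestination(
        filesystem=pusher_pb2.PushDestination.Filesystem(
            base_directory="serving_model/")),
)

LocalDagRunner().run(
    pipeline.Pipeline(
        pipeline_name="demo_pipeline",
        pipeline_root="pipeline_root/",
        metadata_connection_config=metadata.sqlite_metadata_connection_config(
            "metadata.db"),
        components=[example_gen, statistics_gen, schema_gen, trainer, pusher],
    ))
```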
About Robert
A data scientist and TensorFlow addict, Robert has a passion for helping developers quickly learn what they need to be productive. He's used TensorFlow since the very early days and is excited about how it's evolving quickly to become even better than it already is. Before moving to data science Robert led software engineering teams for both large and small companies, always focusing on clean, elegant solutions to well-defined needs. You can find him on Twitter at @robert_crowe.
Machine Learning: Image recognition on the Raspberry Pi
Are you a hobbyist who wants to add some machine learning to your projects? Are you a machine learning engineer who wants to add some hardware to your project? Are you just interested in either? This talk is for you. I will walk through the process of setting up a Raspberry Pi, connecting a camera, adding the proper machine learning libraries (TensorFlow and Keras), and then running a pre-trained image recognition network. When it is all done, you will be able to use your Pi to detect what object it is seeing.
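As a rough sketch of that final step, the snippet below classifies a single frame with a pre-trained Keras MobileNetV2 network; capture.jpg is a hypothetical image grabbed from the Pi camera, and exact library versions on the Pi may vary.

```python
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input)
from tensorflow.keras.preprocessing import image

# Load a small pre-trained network that fits comfortably on a Raspberry Pi.
model = MobileNetV2(weights="imagenet")

# 'capture.jpg' stands in for a frame grabbed from the Pi camera.
img = image.load_img("capture.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Print the top three ImageNet labels and their confidence scores.
preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")
```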
About Evan
Evan Hennis is a Google Developer Expert in Machine Learning and an international speaker. He has a Master's degree in Computer Science with a specialization in machine learning from Georgia Tech. He can be reached on Twitter at @TheNurl or followed on his blog at http://blog.eckronsoftware.com/.
Introduction to Swift for TensorFlow
We will use Swift for TensorFlow to build a simple neural network and use it to categorize MNIST digits, then look at how we can extend our approach to run on different hardware using Google's cloud. Along the way, we will look at how Swift works with the LLVM compiler and automatic differentiation to make it easier to reason about our code.
About Brett
Brett Koonce is the CTO/co-founder of Quarkworks, a mobile consulting agency. He has worked on dozens of apps and contributed code to many different open source projects. He enjoys building and scaling teams to solve interesting problems. His upcoming book "Convolutional Neural Networks with Swift for TensorFlow" is available for preorder from Apress.