Agenda

8.00am - 9.00am

Check-in and Breakfast

9.00am - 9.30am

Welcome from Google Developers team

Naveen Nigam

Head of North America Developer Ecosystem, Google

9.30am - 10.00am

Embedding-Based Classifiers for Large Output Spaces

30min • Ali Mousavi, AI Resident at Google Brain

In this talk, Ali will discuss the problem of multi-label classification with large output spaces, which has garnered significant attention in recent years. This problem differs from the traditional classification setting in that the number of labels is potentially in the millions, presenting significant computational challenges. Many real-world applications, such as product recommendation and text retrieval, can be formulated under this framework, so practical solutions to this problem can have significant and far-reaching impact.
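As a rough illustration of the embedding-based approach (a toy sketch with made-up labels and vectors, not Ali's actual method), classification over a huge label set can be reframed as nearest-neighbor search over label embeddings instead of a dense softmax over millions of outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Label embeddings; in practice there would be millions of labels,
# searched with an approximate-nearest-neighbor index rather than a scan.
label_embeddings = {
    "laptop":  [0.9, 0.1, 0.0],
    "phone":   [0.8, 0.3, 0.1],
    "blender": [0.0, 0.9, 0.4],
}

def top_k_labels(query, k=2):
    """Return the k labels whose embeddings are closest to the query."""
    scored = sorted(label_embeddings.items(),
                    key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [label for label, _ in scored[:k]]

print(top_k_labels([0.85, 0.2, 0.05]))  # → ['laptop', 'phone']
```

The point of the reformulation is that prediction cost scales with the cost of a nearest-neighbor lookup rather than with the total number of labels.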

10.00am - 10.30am

Face Detection, Tracking and Redaction using Deep Neural Networks

30min • Krishnan SPT, Director, AI/ML at Hitachi Vantara, and Google Developer Expert

While there is a plethora of open-source libraries that aim to detect faces, in reality many of these approaches do not work on a typical set of facial postures such as partial faces, side profiles, and small faces. However, detection of such faces is crucial if evidential videos are to be publicly distributed by law enforcement agencies.

In this study, Hitachi Vantara's AI/ML team undertook the challenge of assembling a solution that detects all recognizable faces (including partial and profile faces) irrespective of their relative sizes in the video. Our solution comprises three different neural networks that are tuned to detect faces with different characteristics. In addition to redacting faces, we have implemented face tracking: a law enforcement officer can select one or more facial profiles that the system will leave unredacted while redacting all other faces. This capability is useful for tracking a person of interest while preserving the privacy of other individuals in the video. We used a form of facial-fingerprint technology to uniquely identify faces in the video, and we handle the edge case of distinguishing between people who look similar by associating their geo-spatial characteristics.
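As a hedged sketch of the multi-detector idea (the detector names, boxes, and IoU threshold here are illustrative, not Hitachi Vantara's actual system), the outputs of several specialized detectors can be unioned while suppressing near-duplicate boxes:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_detections(detector_outputs, iou_thresh=0.5):
    """Union boxes from all detectors, dropping near-duplicates."""
    merged = []
    for boxes in detector_outputs:
        for box in boxes:
            if all(iou(box, kept) < iou_thresh for kept in merged):
                merged.append(box)
    return merged

frontal = [(10, 10, 50, 50)]
profile = [(12, 11, 52, 49)]      # same face seen by a second detector
small   = [(200, 200, 210, 210)]  # a small face only one detector found
print(merge_detections([frontal, profile, small]))
# → [(10, 10, 50, 50), (200, 200, 210, 210)]
```

Each detector covers a different regime (frontal, profile, small faces), and the merge step keeps exactly one box per face.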

10.30am - 10.45am

Break

10.45am - 11.45am

Feature engineering in BigQuery and TensorFlow 2.0/Keras

1hr • Lak Lakshmanan, Tech Lead, Big Data and ML Professional Services, Google Cloud

11.45am - 1.00pm

Lunch

1.00pm - 1.30pm

Building Data Foundation for ML

30min • Amy Krishnamohan, Senior Product Marketing Manager at Google Cloud Databases

Many data scientists spend 80% of their time preparing data. It is important to build a scalable and extensible data platform so that data scientists can leverage various data sources without constraints. In this session, we will review different types of data platforms, from databases to data warehouses, to build a great foundation for ML.

1.30pm - 2.00pm

Ease ML Deployments with TensorFlow Serving

30min • Hannes Hapke, VP of AI and Engineering at Caravel, and Google Developer Expert

TensorFlow Serving is one of the cornerstones of the TensorFlow ecosystem. It has eased the deployment of machine learning models tremendously and led to an acceleration of model deployments. In this talk, Hannes shares his experience deploying TensorFlow Serving in a variety of setups and with a range of deep learning models.
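For context, a minimal sketch of the request body that TensorFlow Serving's standard REST predict endpoint expects (the model name `my_model` and the input values are placeholders):

```python
import json

def predict_request(instances):
    """Build the JSON body for TF Serving's
    POST /v1/models/<name>:predict endpoint."""
    return json.dumps({"instances": instances})

body = predict_request([[1.0, 2.0, 5.0]])
print(body)  # → {"instances": [[1.0, 2.0, 5.0]]}

# Sending it requires a running TensorFlow Serving instance, e.g.:
#   curl -d '{"instances": [[1.0, 2.0, 5.0]]}' \
#        http://localhost:8501/v1/models/my_model:predict
```

The same exported SavedModel can also be served over gRPC; the REST API is simply the lowest-friction way to try a deployment.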

2.00pm - 2.15pm

Break

2.15pm - 2.45pm

Deep Learning for Robot Navigation

30min • Aleksandra Faust, Staff Research Scientist at Google Brain Robotics

Learn how to teach robots to navigate safely in real-world, ever-changing environments with automated reinforcement learning (AutoRL). First, genetic algorithms learn the reward and problem formulation. Next, deep reinforcement learning learns basic navigation skills that transfer between environments. After AutoRL trains the basic skills, the agent learns to predict its own proficiency and the difficulty of tasks from experience, and uses those predictions to improve its performance over time in new environments.
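The first step above, evolving the reward formulation, can be caricatured with a tiny genetic search over reward weights (the fitness function here is an invented stand-in for actually training and evaluating an RL agent, and all numbers are illustrative):

```python
import random

random.seed(0)

def fitness(weights):
    """Stand-in for agent performance: pretend the ideal reward
    trades off progress vs. collision penalty 3:1."""
    target = [0.75, 0.25]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def evolve(pop_size=20, generations=30, sigma=0.05):
    """Genetic search: keep the top quarter, mutate to refill the pool."""
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best = max(best, pop[0], key=fitness)
        parents = pop[: pop_size // 4]
        pop = [[w + random.gauss(0, sigma) for w in random.choice(parents)]
               for _ in range(pop_size)]
    return best

best = evolve()
```

In real AutoRL the inner evaluation is a full RL training run, which is why the reward search is the expensive, automated part of the pipeline.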

2.45pm - 3.15pm

Using AI for Accessibility & Customization by Voice

30min • Stephen Wylie, ML Software Engineer and Google Developer Expert

Voice is a powerful tool for human-computer interaction, though analyzing it is often limited to an API call to a model that may not understand application jargon (like `sudo rm -rf *`) or words not spoken in clear American English. Wield the power of voice your way with a bespoke speech model trained on your own examples, and imagine the possibilities for your users in terms of customization, accessibility, and incorporating application-specific terms that may not be well-recognized by existing speech models.

3.15pm - 3.45pm

TF 2.0: Transitioning to Production

30min • Andrew Ferlitsch, Google Cloud AI Developer Program Engineer

TF 2.0 introduces a number of new features, recommendations, and simplifications that provide the backbone for your team to deploy the infrastructure for moving your models into a full-scale production environment. In 2017, most companies were in the AI planning phase. In 2018, most transitioned into the exploratory phase. In 2019, companies are moving into production. By 2020, if your company is not in production, you're not a player in the market.

In this talk, Andrew will discuss how to use the principles behind TF 2.0 to go into production, covering topics like versioning, warmup, retraining, moving upstream pipelines into the model, pipelining of models, distributed batch prediction, and extending existing models to fit your production environment.
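One concrete detail behind the versioning topic: TensorFlow Serving loads models from integer-named version subdirectories and serves the highest version by default, so rolling out a new model is just writing a new numbered directory. A stdlib-only sketch of that layout (paths here are illustrative):

```python
from pathlib import Path
import tempfile

# Versioned SavedModel layout as expected by TensorFlow Serving:
#   models/my_model/1/...   (older export)
#   models/my_model/2/...   (newer export, served by default)
base = Path(tempfile.mkdtemp()) / "models" / "my_model"
for version in (1, 2):
    (base / str(version)).mkdir(parents=True)

# Pick the highest integer-named subdirectory, as the server does.
latest = max(int(p.name) for p in base.iterdir() if p.name.isdigit())
print(latest)  # → 2
```

Because old version directories stay on disk, rolling back is just deleting (or down-weighting) the newest one.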

3.45pm - 4.00pm

Break

4.00pm - 4.30pm

Convolutional Neural Networks with Swift

30min • Brett Koonce, CTO at Quarkworks, and Google Developer Expert

In this talk, Brett will show how to build a neural network using Swift for TensorFlow and Google Cloud to solve a simple image-recognition problem, then gradually add complexity to build up to state-of-the-art approaches in this field.

4.30pm - 5.00pm

Panel Discussion

30min • Lak Lakshmanan, Aleksandra Faust, Ali Mousavi, Andrew Ferlitsch 

Our panel of Googlers will discuss their experiences and work in ML/AI fields, and will open it up to questions from the audience, making for a highly interactive session.

5.00pm - 5.15pm

Closing Remarks

5.15pm - 6.00pm

Happy Hour and Networking