Productionizing ML with ML Ops and Cloud AI
The hardest part of ML adoption in enterprises is productionization. As recent discussions around ML Ops show, there is a big gap between Data Scientists' PoC code and production ML development and operation with the Ops team: preparing a manageable ML dev environment, building a scalable ML serving infrastructure, setting up an ML pipeline for continuous training, and automating the validation of data and models. In this session, we will learn how to leverage Google's ML/AI offerings, such as TensorFlow Extended (TFX), TensorFlow Enterprise, and Cloud AI Platform Notebooks, Training, Prediction, and Pipelines, to productionize your ML service following ML Ops best practices.
Kaz Sato is a Staff Developer Advocate at Google Cloud, focusing on machine learning and AI products such as TensorFlow, Cloud AI, and BigQuery. Kaz has been an invited speaker at major events including Google Cloud Next, Google I/O, and NVIDIA GTC. He has also authored many GCP blog posts and has supported developer communities for Google Cloud for over nine years. He is also interested in hardware and IoT, and has been hosting FPGA meetups since 2013.