
ML Infrastructure Tools for Production: Part 2 — Model Deployment and Serving

What's this blog post about?

The article surveys the stages of Machine Learning (ML) infrastructure and the role each plays in the model-building workflow, arguing that ML infrastructure platforms are crucial for businesses looking to leverage AI effectively. The workflow has three main stages: data preparation, model building, and production. Each stage has its own goals and challenges, and these should inform the choice of an ML infrastructure platform.

The piece then focuses on model deployment and serving, the final step in the ML process. It walks through the serving options available for models (internally built executables, cloud ML providers, batch or stream hosting solutions, and open-source platforms) and notes that the right choice depends on factors such as data security requirements, managed versus unmanaged solutions, compatibility with other systems, and the need for GPU inference. It also covers deployment details: implementation methods, containerization, real-time versus batch models, and the features to look for in a model server. It concludes by naming several ML infrastructure platforms for deployment and serving, including DataRobot, H2O.ai, SageMaker, Azure, Google Kubeflow, and Tecton AI.
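To make the real-time versus batch distinction the article draws concrete, here is a minimal sketch. The model is a stand-in linear scorer invented for illustration (the weights, feature names, and functions are assumptions, not anything from the article); in practice the model would be a trained artifact loaded from a registry, and the real-time path would sit behind an HTTP endpoint.

```python
# Minimal sketch contrasting real-time and batch scoring paths.
# WEIGHTS/BIAS are a placeholder linear model for illustration only.

WEIGHTS = {"age": 0.03, "income": 0.00001}
BIAS = -0.5

def predict(record: dict) -> float:
    """Real-time path: score one record, as an online endpoint would per request."""
    return BIAS + sum(w * record.get(name, 0.0) for name, w in WEIGHTS.items())

def predict_batch(records: list[dict]) -> list[float]:
    """Batch path: score many records in one pass, as a scheduled job would."""
    return [predict(r) for r in records]

if __name__ == "__main__":
    print(predict({"age": 40, "income": 50000}))        # single online request
    print(predict_batch([{"age": 30}, {"age": 60}]))    # bulk offline scoring
```

The same scoring logic serves both paths; what differs is the hosting: a real-time model needs a low-latency server kept warm behind an API, while a batch model only needs compute scheduled when the job runs, which is one reason the article treats the real-time/batch decision as central to choosing a serving platform.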

Company
Arize

Date published
Sept. 17, 2020

Author(s)
Krystal Kirkland

Word count
2212

Hacker News points
None found.

Language
English


By Matt Makai. 2021-2024.