Model deployment patterns: A Hands-On Guide to Dynamic Deployment on Google Kubernetes Engine [4/4]

As we conclude our series on model deployment patterns, this final post explores one of the most widely used approaches: dynamic deployment on Google Kubernetes Engine (GKE). Building on the simplicity of static deployment and the versatility of dynamic deployment across platforms, this post takes a hands-on approach. You’ll implement dynamic deployment step by step…

Model deployment patterns: hands-on dynamic deployment using multi-containers [3/4]

Continuing our series on model deployment patterns, the previous post covered dynamic deployment and its various implementations—whether on a virtual machine, in a container, or using a serverless approach. In this post, we’ll explore a practical example of dynamic deployment, examining each component and reviewing the results. There are several approaches to building a dynamic…

Model deployment patterns: Dynamic deployment [2/4]

In this second post of the series, we’ll examine the most commonly used pattern: dynamic deployment. In dynamic deployment, the model is clearly separated from the other components of the application, which means it can be updated without redeploying the entire application. This approach allows for better separation of concerns and provides more control to the…

Model deployment patterns: Static deployment [1/4]

To provide business value and achieve real-world benefits, a crucial aspect of delivering a machine learning system is model deployment. Deploying a model involves making it available to accept queries from users of the production system [1]. Models are developed in a research environment and deployed in a production environment through the model deployment stage….

How to productize your machine learning model using Scikit-learn? [2/2]

As we saw in How to productize your machine learning model using Scikit-learn [1/2], it is crucial that production-ready code is versionable, testable, manageable, and integrable. After understanding these concepts and why it is important not to use Jupyter notebooks to productize your model pipeline, let’s look at a practical example of transforming prototype code…

How to productize your machine learning model using Scikit-learn? [1/2]

“IT leaders responsible for AI are discovering the ‘AI pilot paradox’, where launching pilots is deceptively easy but deploying them into production is notoriously challenging,” says Chirag Dekate, Senior Director Analyst at Gartner. VentureBeat reported that 87% of data science projects never reach production [1], and Gartner predicted in 2018 that by 2022, 85% of AI projects would yield incorrect…