{"id":33554,"date":"2021-12-02T07:13:24","date_gmt":"2021-12-02T12:13:24","guid":{"rendered":"https:\/\/centricconsulting.com\/?p=33554"},"modified":"2023-08-18T13:00:06","modified_gmt":"2023-08-18T17:00:06","slug":"managing-model-drift-through-mlops","status":"publish","type":"post","link":"https:\/\/centricconsulting.com\/blog\/managing-model-drift-through-mlops\/","title":{"rendered":"Managing Model Drift Through MLOps"},"content":{"rendered":"

We know machine learning models can make your life easier, but what happens when they become less effective over time? In this blog, we discuss model drift: what it is, why it happens, and what you can do about it.

In machine learning (ML), you will hear a lot of talk about MLOps, a discipline borrowed from traditional IT DevOps that concentrates on delivering high-quality software rapidly and continually.

What distinguishes MLOps from DevOps, and makes it a value-adding enterprise in the ML space, is its concentration on capabilities specific to the end-to-end ML lifecycle. **This includes not only the research and development for the model but also its deployment and post-implementation support.** With ML, that support entails more than making sure your code keeps running smoothly and without interruption; it also includes monitoring and automated retraining.

You may think, "But my model is pretty good as it is. Why does it need to be monitored and retrained?"

There are all sorts of environmental factors that can cause models to become less effective over time through no fault of their own. **When this happens, we call it model drift.**

When you deploy a model, you implicitly assume that the statistical relationships observed between the features in your data and the outcome variable will hold consistently over time. As circumstances change – as they virtually always do in any complex system – so, too, do these relationships.
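To make the idea concrete before we dive in: one simple way to check whether such a relationship is shifting is to compare a feature's distribution at training time against what the model sees in production. Here is a minimal sketch, assuming Python with NumPy and SciPy; the synthetic arrays and the 0.05 threshold are purely illustrative:

```python
# A minimal drift check: compare a feature's training-time distribution
# against its live distribution with a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_values = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature at training time
live_values = rng.normal(loc=0.4, scale=1.2, size=5_000)   # same feature in production

statistic, p_value = ks_2samp(train_values, live_values)
if p_value < 0.05:  # illustrative significance threshold
    print(f"Possible drift: KS statistic {statistic:.3f}, p-value {p_value:.3g}")
```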

In this article, we will walk you through a real-world example of model drift. We will then discuss why drift occurs and some remedies for handling it within the MLOps framework.

## What Does Model Drift Look Like?

We once worked with a client to build and "productionize" an ML model for their sales call center. The goal was to increase the number of good calls their employees experienced, where they defined a good call as meeting one of several criteria, including whether the call led to the prospect becoming a lead or an opportunity.

They could then use the model output to stack-rank their list of prospects at the beginning of each day and prioritize the people their sales members were most likely to have a good call with. **Afterward, we monitored the model performance for several months and observed several interesting findings.**
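The scoring step itself is straightforward. Here is a hypothetical sketch, assuming pandas and a fitted scikit-learn-style classifier; the function and column names are ours for illustration, not the client's:

```python
# Score each prospect and sort the daily call list from most to least promising.
import pandas as pd

def rank_prospects(model, prospects: pd.DataFrame, feature_cols: list) -> pd.DataFrame:
    scored = prospects.copy()
    # Probability of the positive class ("good call") for each prospect
    scored["good_call_score"] = model.predict_proba(scored[feature_cols])[:, 1]
    return scored.sort_values("good_call_score", ascending=False)
```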

First, as expected, the model improved the number of good calls their sales staff experienced by over 50 percent. Second, the model began to drift within a few months of initial deployment. Third, the drift was most pronounced downstream, when we looked at the sales pipeline to determine which calls ultimately translated into leads and opportunities. In other words, the impact of the drift visibly translated to their bottom line.


## Why Does Model Drift Happen?

There are several reasons why drift can occur:

### Changes to Business Processes

These changes could come from an improvement to technology, a desire to drive down inefficiencies, new personnel or a change in strategy. **The very use of the model itself could trigger changes to your business's status quo.** For example, we once built an ML model for a client as part of their customer retention initiative, which they would use to proactively identify customers at risk of leaving and talk them out of it. Suffice it to say, the circumstances under which we trained the model ended up being different from those under which they used it.

It turns out that shortly after their model's deployment, our call center client above had adjusted their marketing strategy to focus more on driving traffic to their website via online outlets. This was something the model would not have seen in the training data.

### Changes to Target Behavior

Another one of our previous clients was working to mitigate the shortage of truck drivers in their fleet and had us build an ML model to determine who was at risk of quitting. **This model quickly lost a lot of its predictive ability in 2020, however, when the landscape changed with the COVID-19 pandemic.** With the closure of restaurants and facilities, a shortage of truck drivers soon became a surplus.

### Changes to the Composition of the Observations Comprising the Data

If you are building an ML model to augment your marketing efforts, for example, and 20 percent of your training data comprises prospects from trade shows and 80 percent purchased leads, much of the model's performance will be driven by the prospects in the latter group. **This can create problems if you primarily apply the model to trade show prospects, and those clients typically behave differently than purchased leads.**

In the case of our call center client's prospects, one reason for the drift was simply that, by design, the model would pick the cream of the crop among the most promising prospects on any given day. Eventually, though, as you go down the list, you start to observe diminishing returns from dialing less promising candidates.

### Changes to How You Collect the Data

If your model depends on select data elements, and you change the way you capture or interpret these elements, these changes can cause drift. Without retraining your model on the newer data format, it will have no way of continuing to pick up those signals.

This is a particularly noteworthy callout if you are having your data scientists hand off their code to a separate team of developers for deployment. **Your model's performance could well suffer if the latter fails to replicate the feature engineering process exactly.**
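One way to reduce that risk is to package the feature engineering and the model as a single artifact, so the deployment team cannot accidentally diverge from the training-time transforms. A minimal sketch, assuming scikit-learn:

```python
# Bundling preprocessing with the model keeps training-time and
# serving-time feature engineering identical by construction.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipeline = Pipeline([
    ("scale", StandardScaler()),    # feature engineering step(s)
    ("clf", LogisticRegression()),  # the model itself
])
# After pipeline.fit(X_train, y_train), persisting `pipeline` (for example,
# with joblib) ships the transforms and the model as one deployable unit.
```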

Whatever the reason for the drift, you have options for managing it that, fortunately, go beyond simply scrapping your model and writing it off as another lesson learned.

## Retrain or Rebuild Your ML Model?

After helping you identify model drift, your MLOps practice can then help you manage it. As part of its ML lifecycle, your MLOps practice should have measures in place not only for monitoring your model but also for retraining it. You can schedule the latter to run regularly or trigger it ad hoc under predetermined conditions related to your model's performance. It should also have a process in place for comparing the newly trained model against your existing model.
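What such a trigger looks like depends on your stack, but the logic can be as simple as a threshold on a monitored metric. An illustrative sketch in plain Python (the metric, window and floor are assumptions, not a prescription):

```python
# Trigger ad hoc retraining when recent performance dips below a floor.
def should_retrain(recent_auc_scores: list, floor: float = 0.70) -> bool:
    """Return True if the average AUC over the monitoring window is too low."""
    return sum(recent_auc_scores) / len(recent_auc_scores) < floor

# Example: should_retrain([0.74, 0.69, 0.66]) -> True
```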

**This last step is crucial, as you can't assume training your model on newer data will fix the problem.** It's possible that retraining your model on more recent data not only fails to provide a lift but actually worsens performance. If drift persists even after retraining, it may be time to go back to the drawing board and revisit the model design methodology – the type of model you are using, the predictors you include in the model, and so on.

Once your MLOps practice has matured to the point that your retraining regimen runs like clockwork, you can automate on this front as well. As long as you fix your selection criteria, you can set up a pipeline that compiles your training data, retrains the model, compares its performance with the current model's, and uses those criteria to determine whether to replace the current model with the new one.
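The compare-and-replace step might look like the following sketch, assuming scikit-learn-style models, a held-out evaluation set, and AUC as the fixed selection criterion; the minimum-lift margin is an illustrative choice:

```python
# Keep the current (champion) model unless the retrained candidate
# beats it on the held-out set by at least a minimum margin.
from sklearn.metrics import roc_auc_score

def pick_champion(current_model, candidate_model, X_eval, y_eval,
                  min_lift: float = 0.01):
    current_auc = roc_auc_score(y_eval, current_model.predict_proba(X_eval)[:, 1])
    candidate_auc = roc_auc_score(y_eval, candidate_model.predict_proba(X_eval)[:, 1])
    return candidate_model if candidate_auc >= current_auc + min_lift else current_model
```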

We were able to automate our call center client's retraining process and found that using newer data could restore some, if not all, of the model's predictive power. We even incorporated a control group into the design to better facilitate ongoing performance comparisons between competing models after that.


## Do You Get Our (Model) Drift?

When it comes to MLOps, it's all about the capabilities that help you translate your ML insights into business value. Enabling your ML practice not just to get off the ground, but to stay there, requires monitoring and retraining as part of your process. **Because in a complex world that is constantly changing, you must assume model drift is the norm rather than the exception.**

With enough experience and maturity, you can codify and automate the entire cycle of monitoring, retraining and replacing your model with MLOps.
