Analytic and modeling features permeate the many SaaS platforms in the marketing industry, and advances in technology have made machine learning tools widely accessible. As the industry leans into data-driven marketing and predictive modeling, marketers often find themselves relying heavily on automated analytic tools without realizing the potential hazards. Whether starting off with a “magic box” lookalike modeling solution, or implementing advanced analytic workflows leveraging multichannel data, maintaining a human touch is critical to success.

A useful analogy for how automated analytic tools handle and activate data is to consider how a Tesla owner might use some of the vehicle’s advanced features. Autopilot is a very real feature that can assist the driver down the road, and fully automated self-driving is on the horizon, but for now the driver still needs to be present at the wheel. It may be tempting to look away for an extended time, but keeping human eyes on the road is a better guarantee of a safe arrival. Similarly, a ‘set it and forget it’ approach to data activation will never deliver the marketing results we desire. Expert data scientists and strategists can add unmatched value to your marketing execution, and ultimately to your bottom line.

Whether you have in-house data experts, lean on partners, or are wondering if you should add a data scientist to your team, here are five important reasons why you need human involvement in your data workflows:

Reason #1: Algorithms aren’t consultative

Machine learning tools are great at ingesting enormous quantities of data and making sense of it. However, an algorithm can only make assessments based on the data it is supplied. ML tools can’t look outside of that data set to consider evolving regulations or cultural changes. They can’t help you develop the right objectives or success metrics. And they can’t weigh the impact of your unique business processes. A data scientist who understands the nuances of your business, from product to creative to compliance, will be able to maximize the value of your data.

The consultation should start before a model is even estimated, beginning with defining the model development data set and the appropriate dependent variable for the chosen KPI. Aligning available data sources, such as past campaign responders or various lead streams, with the campaign objective will help narrow the development data set. Consulting with a data expert can help you isolate the right dependent variable while also providing guidance on what may happen when you choose one over another; for example, modeling for higher-LTV customers will lower short-term response rates. Data scientists can also add value by suggesting a screen, a secondary model for other lagging indicators. Doing so can help balance results, protecting one KPI from tanking while another thrives. Without a human element to your analytics, the opportunity to have these strategic conversations may be missed.
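To make the primary-model-plus-screen idea concrete, here is a minimal sketch in Python with pandas and scikit-learn. The file name, column names (ltv_12m, responded_30d, and so on) and score thresholds are hypothetical placeholders for illustration, not Alliant’s actual methodology.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical development sample: one row per past campaign responder.
dev = pd.read_csv("campaign_responders.csv")
features = ["recency_days", "frequency_12m", "avg_order_value"]

# Primary dependent variable: did the customer reach a high 12-month LTV?
dev["high_ltv"] = (dev["ltv_12m"] >= dev["ltv_12m"].quantile(0.75)).astype(int)

# Primary model optimizes for LTV; the screen model guards the lagging
# short-term response KPI so it doesn't tank while LTV thrives.
ltv_model = LogisticRegression(max_iter=1000).fit(dev[features], dev["high_ltv"])
screen_model = LogisticRegression(max_iter=1000).fit(dev[features], dev["responded_30d"])

dev["ltv_score"] = ltv_model.predict_proba(dev[features])[:, 1]
dev["response_score"] = screen_model.predict_proba(dev[features])[:, 1]

# Activate only names that score well on LTV *and* clear a minimum
# short-term response threshold (both cutoffs are arbitrary here).
audience = dev[(dev["ltv_score"] >= 0.6) & (dev["response_score"] >= 0.3)]
```

The point is not the specific algorithm; it is that someone chose the dependent variable, the screen and the thresholds deliberately, in light of the campaign objective.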

Reason #2: Creativity isn’t reserved solely for Marketing teams

Plug-and-play solutions are one-size-fits-all and often lack imagination when it could be beneficial. What if you don’t have the exact data points in your model development sample to create the desired dependent variable? If you only have a “magic box” modeling tool, then you’ve hit a wall and are out of luck. A qualified analyst can evaluate the situation and potentially construct a proxy using available data. While having rich model development data is ideal, a creative approach can push you forward when you would otherwise be stuck.
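As one hedged illustration, an analyst might construct a proxy dependent variable from signals that are available. The column names and cutoffs below are invented for the sketch and would need to be validated against your own data.

```python
import numpy as np
import pandas as pd

# Suppose the sample lacks a true "digital buyer" flag. Build a proxy from
# available signals (all column names and cutoffs are illustrative).
dev = pd.read_csv("development_sample.csv")

dev["digital_buyer_proxy"] = np.where(
    (dev["web_sessions_90d"] >= 3)      # browsed the site recently
    & (dev["email_clicks_90d"] >= 1)    # engaged with email
    & (dev["orders_12m"] >= 1),         # purchased through some channel
    1,
    0,
)

# Sanity-check the proxy before modeling against it: incidence rate and,
# where possible, overlap with any partial ground truth you do have.
print("proxy incidence:", dev["digital_buyer_proxy"].mean())
```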

A human element can also empower you to build and test several different solutions, ultimately choosing the best. Various test cases can evaluate different algorithms, dependent variables, screens, input data sets and more. Quick-and-easy modeling tools don’t synthesize new ideas or applications. If you need something different from, or additive to, an existing solution, rerunning within the same template is not going to generate different results.
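A sketch of what that testing loop might look like, again with hypothetical file and column names and a deliberately small grid of candidates:

```python
from itertools import product

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

dev = pd.read_csv("development_sample.csv")
features = ["recency_days", "frequency_12m", "avg_order_value"]

# Two candidate algorithms crossed with two candidate dependent variables.
algorithms = {
    "logistic": LogisticRegression(max_iter=1000),
    "gbm": GradientBoostingClassifier(),
}
targets = ["high_ltv", "responded_30d"]

for (name, algo), target in product(algorithms.items(), targets):
    auc = cross_val_score(
        algo, dev[features], dev[target], cv=5, scoring="roc_auc"
    ).mean()
    print(f"{name} predicting {target}: mean AUC {auc:.3f}")
```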

Reason #3: QA won’t happen by itself

Remember the old adage: garbage in, garbage out. If flawed input data flows into the system, the resulting output will be subject to all sorts of issues and essentially rendered useless. Worse, bad data may go unnoticed, and any models generated would be sub-optimal. Having a team to manage data hygiene and identify potential errors will save you many headaches during development and execution. This is especially true if you are matching data sets from different databases or silos within your organization. Being hands-on with the data early on will also provide an opportunity to evaluate which data sets will drive the best results, and which might just be noise.
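A few lines of automated checks can surface these issues before modeling begins. This sketch assumes a hypothetical CRM extract with placeholder column names.

```python
import pandas as pd

# Basic hygiene report on an incoming file before matching or modeling.
df = pd.read_csv("crm_extract.csv", parse_dates=["last_order_date"])

report = {
    "rows": len(df),
    "duplicate_customer_ids": int(df["customer_id"].duplicated().sum()),
    "share_missing_email": float(df["email"].isna().mean()),
    "future_order_dates": int((df["last_order_date"] > pd.Timestamp.now()).sum()),
    "negative_order_values": int((df["order_value"] < 0).sum()),
}
print(report)
```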

Similarly, having a team that monitors and validates model results is necessary as well. This might be a new concept for those who have only used platform-based modeling, where you don’t get a chance for QA. Even with clean and correct input data, it is possible for things to go awry in processing. A trained eye will be able to evaluate QA reports to validate model outputs and further investigate any outliers or anomalies. Let’s say you are modeling for digital buying behavior, but you decide to include customers who have also ordered offline in the model development sample to bolster the seed size. A human would assess whether the model became too biased toward offline behavior and adjust as needed. All of this will provide further assurance when it comes time to activate.
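One way a human might check for that offline bias is to compare score distributions across channel segments after scoring the development sample. The data set, model choice and column names below are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

dev = pd.read_csv("development_sample.csv")
features = ["recency_days", "frequency_12m", "avg_order_value"]

model = GradientBoostingClassifier().fit(dev[features], dev["digital_buyer"])
dev["score"] = model.predict_proba(dev[features])[:, 1]

# If offline-only customers score noticeably higher than digital-only ones,
# the model has likely drifted toward offline behavior and needs adjustment.
print(dev.groupby("primary_channel")["score"].agg(["count", "mean", "median"]))
```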

Reason #4: More advanced models require analytic expertise

Lookalike modeling is a powerful tool and one that Alliant often deploys for clients. But with the constant evolution of technologies and strategies, there are many powerful new data analysis and modeling techniques available. As your business evolves, you will likely want to take advantage of these and begin predicting performance for specific KPIs or leveraging ensemble methods. For instance, multi-behavioral models can optimize for multiple consumer actions. Innovative applications like these require more than a simple upload of data into a lookalike modeling solution.
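A hedged sketch of what a multi-behavioral blend might look like, with invented column names and arbitrary weights (an illustration of the general idea, not Alliant’s actual ensemble approach):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

dev = pd.read_csv("development_sample.csv")
features = ["recency_days", "frequency_12m", "avg_order_value"]

# One model per consumer action, then a weighted blend for activation.
purchase_model = GradientBoostingClassifier().fit(dev[features], dev["purchased"])
engagement_model = GradientBoostingClassifier().fit(dev[features], dev["engaged"])

dev["blended_score"] = (
    0.6 * purchase_model.predict_proba(dev[features])[:, 1]
    + 0.4 * engagement_model.predict_proba(dev[features])[:, 1]
)

# Take the top-ranked names for the campaign audience.
audience = dev.sort_values("blended_score", ascending=False).head(50_000)
```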

Reason #5: Things won’t always go as planned

If 2020 has taught us anything, it is that you can never be 100% sure of what will happen once you go live. Having resources available to assess the situation and make adjustments on the fly can turn potential errors into positives. In uncertain times, it is unlikely you will have a historical data set that reflects the new reality to assist with prediction. It is ultimately up to humans to figure out how to adjust, and to bring machine learning tools along for the ride.

Interested in learning more about how you can partner with Alliant’s data scientists to build custom data solutions? Contact us at any time! Our team has been on an analytic evolution, enabling the data scientists to take predictive modeling to new places and ultimately creating stronger solutions for our partners.

ABOUT THE AUTHOR

Malcolm Houtz, VP Data Science

As head of Alliant’s data science team, Malcolm balances a high volume of complex model development and analytic projects every month — while maintaining a laser focus on innovation and excellence. Malcolm is a critical thinker with an insatiable curiosity for new statistical techniques. His background as a master statistician and as an entrepreneur gives him a unique, business-oriented perspective on data mining and modeling. Prior to Alliant, Malcolm held data analyst and model development positions with Time Warner Cable, Pitney Bowes and Reed Exhibitions.