Model Development is a Team Sport

Modern marketers are no strangers to using modeling to help them understand, target, and extend their audiences. What modeling or analytics means, though, may differ by company or by the channel a team is focused on. Traditional direct marketers have leveraged customized predictive modeling for many years, while digitally focused teams have leaned largely on the out-of-the-box modeling tools within their DSPs, DMPs, CDPs and social platforms. Whichever camp you fall into, the growing need to harness 1st, 2nd and 3rd party data for competitive advantage means that more bespoke and forward-thinking solutions will be required for future success.

While hearing the term “modeling” or “analytics” may evoke thoughts of a data scientist developing algorithms in a quiet room somewhere, I promise you there is much more involved. As audience modeling becomes more customized, all business teams play a role in the development of successful solutions, even non-technical ones. This does not mean workflows have to be more complex or have too many cooks in the kitchen, but the best outcomes will be realized when all sides understand their roles.

The Model Dev Teams

Below is a quick breakdown of the teams that influence the model development process and how they impact the outcomes. These groups may all be a part of one company, or represent a mix of in-house and external teams.

Data Integration

Integration specialists are the front line of a successful solution. They ensure that data flowing in from internal and external sources is transformed, cleansed and matched appropriately. Additionally, they can call out any potential issues with the data sets and provide useful metrics or reporting for all other teams. As consumers have fragmented across channels, data integration has become more complex, but when done accurately it creates powerful analytic data sets. The data integration team may also be responsible for rolling up granular data into useful predictors. The accuracy of this aggregation, and the types of predictors created, can make or break a modeling project.
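
To make the rollup idea concrete, here is a minimal sketch, assuming a simple transaction file with hypothetical consumer_id, order_date and amount fields, of how granular records might be aggregated into recency, frequency and monetary predictors with pandas. It is illustrative only, not a description of any production pipeline.

```python
# Illustrative sketch only: roll transaction-level records up to
# consumer-level predictors. Column names are hypothetical.
import pandas as pd

transactions = pd.read_csv("transactions.csv", parse_dates=["order_date"])
as_of = pd.Timestamp("2021-06-30")  # snapshot date for the model sample

predictors = (
    transactions.groupby("consumer_id")
    .agg(
        order_count=("order_date", "count"),   # frequency
        total_spend=("amount", "sum"),         # monetary
        last_order=("order_date", "max"),
    )
    .assign(days_since_last_order=lambda d: (as_of - d["last_order"]).dt.days)  # recency
    .drop(columns="last_order")
    .reset_index()
)
print(predictors.head())
```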

Data Analysts

Analysts are an extension of the data science and business teams, using their understanding of the business objective(s) to compile the best data for model development. They prepare the development samples, keeping an eye out for suspicious or insufficient counts, incomplete data, unexpected match rates, data recency, undefined dependent variables and more to ensure a successful build.
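
As a rough illustration of those checks, the sketch below runs a few sanity tests on a hypothetical development sample; the field names (responded, matched_to_coop, last_activity_date) are assumptions, not a real schema.

```python
# Hypothetical pre-modeling sanity checks on a development sample.
import pandas as pd

sample = pd.read_csv("development_sample.csv", parse_dates=["last_activity_date"])

checks = {
    "total_records": len(sample),                              # suspicious or insufficient counts
    "responders": int(sample["responded"].fillna(0).sum()),    # is the dependent variable populated?
    "missing_target_pct": sample["responded"].isna().mean(),
    "match_rate": sample["matched_to_coop"].mean(),            # unexpected match rates
    "stale_records_pct": (sample["last_activity_date"] < "2019-01-01").mean(),  # data recency
}
for name, value in checks.items():
    print(f"{name}: {value}")
```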

Compliance/Data Governance

Data Governance teams provide guidance on the types of data that are available for specific use cases. They are an essential partner to all teams, especially when building solutions that need to adhere to specific regulations such as FHA/FLA. Consulting with data governance teams early provides assurance that the modeled solutions are compliant and applied ethically.

Account Teams/Strategists

Business teams communicate the objectives of the campaign or initiative and serve as a central point of contact between all teams. The more they share about the target audience, what has and hasn’t worked historically, the current state of their business and marketplace, and any other influencing factors, the better equipped analysts and data scientists will be to build the best solution. And if you are on one of these teams, know that you can also influence the data science team by sharing different ideas during consultations.

Data Science/Analytics

Ultimately, this is the team that builds the models. Quality data scientists seek out opportunities to speak with all teams to understand the data, the product/offer, objectives, campaign creative and success metrics. Along the way they will provide transparent details about their approach, identify areas that require additional attention, and recommend the best path forward. This consultative approach reinforces cross-team ownership, ensuring the strongest possible solution that aligns with the goals outlined upfront. The data will lead the way, but if a data scientist does not factor in all other team inputs, there will be missed opportunities.

Campaign Managers/Media Buyers

The teams activating the data assess performance in the real world. They are a critical piece of the feedback loop that influences future model iterations. They can provide a different perspective on where things worked/didn’t work, and call out other interesting factors that may have influenced performance.

As you can see, modeling is more than just a data scientist manipulating data they are provided. If you are someone who interfaces with analytic teams, hopefully this provides you a different perspective on your role in the process. Remember, teamwork makes the dream work!

Interested in learning more about Alliant’s consultative model development process? Reach out to us and we’ll be in touch!

ABOUT THE AUTHOR

Malcolm Houtz, VP of Data Science

As head of Alliant’s data science team, Malcolm balances a high volume of complex model development and analytic projects every month — while maintaining a laser focus on innovation and excellence. Malcolm is a critical thinker with an insatiable curiosity for new statistical techniques. His background as a master statistician and as an entrepreneur gives him a unique, business-oriented perspective on data mining and modeling. Prior to Alliant, Malcolm held data analyst and model development positions with Time Warner Cable, Pitney Bowes and Reed Exhibitions.

 

Why Data Co-ops May be a Path Forward for Identity Solutions

Originally published by WARC in April, this piece is more relevant than ever with Alliant’s recent announcement of its support of Unified ID 2.0. We hope Members are actively considering how to connect to these ID solutions and how Alliant and your DataHub membership can help.
Lately it feels as if the entire digital advertising landscape shifts every time one of Google’s product leads dashes off an early morning blog post. The latest has the industry fretting, with Google’s proclamation in March that it will not build or support alternative identifiers once Chrome stops supporting third-party cookies sometime next year. Meanwhile, Apple said last week that its IDFA (Identifier for Advertisers) changes – which make the identifier opt-in – are kicking in this week, causing even more consternation.

With every big tech announcement from Google, Apple or Facebook, the industry looks at what may be lost, and rightfully so. However, brands and agencies also need to take a close look at what they’ll still have going forward. Walls may grow higher, identifiers may go, but brands will always have access to their first-party data, and that is incredibly valuable.

They may take our Unified ID but they will never take our 1st party data!

Having first-party data sounds great, but the challenge is what brands do with that data. To ensure that a first-party data asset not only enables brands to work with changing platforms, but also is less dependent on any one platform or solution, brands must enhance and connect their first-party data to a collaborative environment. Joining a data cooperative can offer such connectivity, providing access to complementary second-party data and analytics beyond a brand’s first-party asset. To understand the value of a data cooperative, it’s worth agreeing on a common definition of second-party data. Winterberry Group defines it as data that is shared in a dedicated environment with a clearly defined set of permissions and rights set between each of the parties and, most often, the third-party provider managing the environment.

Full disclosure, at Alliant we run a data co-op, which anonymizes and aggregates brands’ first-party data into a second-party data asset. In this process, similar to other data co-ops, a brand’s first-party data is first and foremost permissioned by the consumer, and then anonymized and stripped of personally identifiable information (PII) in a dedicated environment, protecting consumer data and each brand’s business information. The result is a privacy-compliant, stable, second-party asset maximized for analytics and enrichment capabilities.
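
As a simplified illustration of that idea (not a description of Alliant’s actual process), the sketch below replaces a direct identifier with a salted hash so records can still be keyed consistently across contributors, while the PII fields themselves are dropped; the field names are assumed.

```python
# Simplified illustration: pseudonymize records before they enter a
# shared environment. Field names (email, name, street_address) are assumed.
import hashlib
import pandas as pd

SALT = "co-op-environment-secret"  # held only inside the dedicated environment

def anonymize(df: pd.DataFrame) -> pd.DataFrame:
    hashed_key = df["email"].str.lower().str.strip().apply(
        lambda e: hashlib.sha256((SALT + e).encode()).hexdigest()
    )
    return (
        df.drop(columns=["email", "name", "street_address"])  # strip PII
          .assign(member_key=hashed_key)                      # stable, non-identifying join key
    )
```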

Co-ops have been around for decades in the direct marketing world and have evolved to fuel advertising as brand dollars have flowed to the digital realm. While third-party data suppliers and aggregators took off in the online world, co-ops steadily remained a way for brands to maximize their first-party data intel across channels.

DMPs (data management platforms) and CDPs (customer data platforms) have grown in popularity over the years, yet neither has taken on a core business function like creating a unique second-party data set aggregated across multiple brands to complement a brand’s existing first-party data.

Given the inherent scale limitations of first-party data, the absolute requirement of privacy compliance and, most recently, Google and Apple’s decisions to kick identifiers to the curb, co-ops now have even more value than ever before.

Data co-ops have staying power

Having launched in the early ’90s, data co-ops have proven their ability to adapt to industry shifts, including the explosion of digital media and the constantly changing world of compliance standards. Their greatest value going forward may be in aggregating properly permissioned PII, and then normalizing and anonymizing that data to make it usable. Beyond simply organizing and enriching data, this co-op data transformation creates a stable, analytic-oriented second-party data asset that fundamentally differentiates a co-op from a CDP and makes it an untapped tool in the marketer’s arsenal.

The co-op benefit

With an increase in digitally native DTC brands, co-ops represent a collaborative way to turn digital signals and first-party insights into actionable data through partnership with other like-minded marketers. The concept of sharing first-party data, especially in the current climate, may be hard to swallow for many brands, and even a barrier to considering a data co-op altogether. However, the ability to safely enrich first-party data is a necessary step for brands and especially publishers.

While advertisers are obvious beneficiaries of co-op data, long-tail publishers with limited first-party data will benefit from joining forces in a co-op setting to make the most of the little data they do have. Creating an ad network across publishers gives them more control in working closely with advertisers, instead of defaulting entirely to Google to handle their ad revenue.

Pooling data will also help manage the reduction in credible reach and targetable audiences that will happen in 2022 with the loss of the cookie. Collaborating across multiple publishers provides incremental reach to balance that issue. Connecting insights across brands and publishers allows advertisers to target another brand’s first-party data, modeled for their own brand’s offering, across the inventory from a publisher co-op partner. On paper, that sounds much more desirable than the limited identifiable universe that will come into play once Chrome stops supporting third-party cookies.

Unlocking value on advertising’s bleeding edge

Perhaps the greatest benefit of co-ops goes back to their origins serving direct mail marketers. Today, this gives them the ability to aggregate data across channels, helping to bring offline insights into the online world. This solves a number of issues by:

      • Providing valuable offline consumer behavior data as the amount of online behavioral data shrinks due to the loss of the cookie;
      • Creating a logical extension for offline brands to enter online efforts; and
      • Easily translating to offline marketing insights for omni-channel campaigns.

This last point is perhaps the most enticing when facing the future of advertising. First-party data and the collaboration within a co-op are huge assets within other walled gardens like Amazon, retail networks like Target, and of course in TV, where cord-cutting has accelerated due to the pandemic. These are emerging frontiers of advertising, where Google’s decisions are not going to have the same impact. As these become more valuable ways to engage with consumers in the very near future, brands that can use their first-party data and co-op insights to target within these channels will be primed to seize the opportunity.

It’s critical for brands to grasp the implications of the potential loss of third-party insights, but it’s just as important that they understand why it matters where those insights come from. What’s more, advertisers need to understand where the loss of cookies matters and where it doesn’t.

Co-ops have survived and thrived, and their signals will remain reliable amid cookie deprecation and translate into new frontiers like CTV and retail networks.

ABOUT THE AUTHOR

Matt Frattaroli, VP Digital Partnerships

With over a decade of experience in adtech and eCommerce, Matt leads Alliant’s digital team in expanding strategic partnerships with the industry’s largest digital platforms and agencies. Matt’s professional roots are deep in the digital marketing space, having built multiple eCommerce and MarTech companies, including ChoiceStream. In 2015 Matt was recognized by Digiday as a Signal Award Finalist in Data Management and Marketing, for his work with consumer polling validation.

 

The Power in Marketers Understanding Predictive Modeling Methodologies

Marketers are surrounded by predictive modeling and machine learning. Whether it’s the underlying algorithm(s) powering campaigns in DSPs, suggested subject lines in marketing automation platforms, or custom models built specifically for various business objectives, predictive modeling is everywhere.

While a marketer doesn’t have much control over the stock algorithms in their platforms, they do have a say once they start entertaining custom solutions. And it wasn’t until recently that anyone outside of analytics-related roles really questioned what types of algorithms were being used in their solutions. I applaud this curiosity. Non-technical marketers upping their algorithm game will help with solution evaluation, foster more strategic discussions with a broad group of teams, and even impress a few teammates. But to really make an impact, marketers should understand if and how these different approaches impact outcomes for their brands.

Before digging into whether they impact outcomes (spoiler – they totally do), let’s do a quick crash course on some of the different algorithms. First, algorithms fall into one of two main categories: supervised or unsupervised. Supervised learning methods aim to find a specific target which exists in the data. Conversely, unsupervised learning methods do not require a specified target; rather, they make observations of the data and group together similar points.

Some popular machine learning methodologies include the following (a brief code sketch after the list shows how several of them can be fit and compared):

  1. Logistic Regression: Estimates the log-odds of a binary response (bought/didn’t buy). A classic and commonly used algorithm, but one that doesn’t capture more complex, nonlinear effects or interactions among variables.
  2. Decision Trees: Predict the likelihood of an action in an upside-down tree pattern. At each split, the algorithm chooses the predictor and splitting point that result in the purest nodes below. This method can identify distinct groups, capturing nonlinearity and interactions. There are several variations, including:
    1. Random Forests – A collection of many (often hundreds of) decision trees. Each tree is trained independently, considering a small subset of randomly selected predictors at each branch. The results of the trees are averaged together, providing a very stable solution.
    2. Gradient Boosted Trees – Instead of building each tree independently, this method builds a succession of trees, each trying to improve on the results of the previous tree. This is a very strong method, requiring less code and providing high accuracy.
  3. Support Vector Machines (SVM): Separate behaviors by constructing a hyperplane which slices through the data and provides the best separation between the two groups. SVMs produce significant accuracy with less computational power.
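
For readers who like to see things in code, here is a minimal sketch, using scikit-learn on synthetic data, of how several of the methods above could be fit to the same binary bought/didn’t-buy style target and compared on a common metric. It is an illustration, not a recommended workflow.

```python
# Illustrative comparison of several supervised methods on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=5000, n_features=25, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gradient_boosted_trees": GradientBoostingClassifier(random_state=0),
    "support_vector_machine": SVC(),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```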

This is by no means an extensive list but provides a good starting point. It is most important to grasp that each method aims to solve the same question or problem in slightly different ways, therefore providing slight differences in the predicted outcomes. In most cases, each will iteratively refine itself, until the model can no longer be improved. With a baseline understanding, skilled data scientists can further guide you on the nuances of each when it comes time to build your custom solution.

At this point you may be wondering, do these differences matter? The short answer: yes. So, the question then shifts to how does a marketer ensure the best outcome? Understand that a data scientist won’t always know upfront what will work best, but discovery will light the way. The algorithms chosen by a modeler are influenced by two primary considerations: 1. What are we trying to predict, or what question are we asking? And 2. What data do we have access to? The answers to each of these may limit or expand the options available to them. Modelers may have a gut instinct or belief about which methodologies might work best, but the data may tell a different story than anticipated. The marketer’s understanding of the business objective, the marketplace and more can empower data scientists to make more informed decisions throughout the development process.

Since methodologies matter, and the data holds the key, often the best path is to explore more than one approach or algorithm. The Alliant Data Science team uses advanced workflows to simultaneously build multiple models using different methodologies. This approach provides several benefits:

      • Reduced subjectivity: the ability to see predicted performance, model fit metrics, and more across each method allows the team to see which will be strongest
      • Fewer limitations: supervised and unsupervised methods can be brought together, like using clustering and dimension reduction to inform subsequent supervised learning steps (see the sketch after this list)
      • More creativity: data scientists can test new and interesting ways to combine multiple methodologies into a unique solution
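
Here is one hedged sketch of the second point: principal components and k-means cluster assignments, both unsupervised, become inputs to a supervised gradient boosted model. The data is synthetic and the setup is illustrative only.

```python
# Illustrative only: unsupervised steps (PCA, k-means) feeding a supervised model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=50, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

pca = PCA(n_components=10).fit(X_train)                    # dimension reduction
kmeans = KMeans(n_clusters=8, n_init=10, random_state=1).fit(pca.transform(X_train))

def enrich(X_part):
    comps = pca.transform(X_part)
    clusters = kmeans.predict(comps).reshape(-1, 1)        # cluster assignment as a feature
    return np.hstack([comps, clusters])

model = GradientBoostingClassifier(random_state=1).fit(enrich(X_train), y_train)
print("holdout AUC:", roc_auc_score(y_test, model.predict_proba(enrich(X_test))[:, 1]))
```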

Whether you start with just one method or test an ensemble approach like the one introduced above, your marketing goals, data and infrastructure will illuminate which is best for you. Maintain a human element and partner cross-functionally to obtain the best results, gathering input and alignment across business, marketing, data science and technology teams.

Interested in learning more about how advanced custom modeling can improve your multichannel marketing? Feel free to reach out to us and our team will be in touch!

ABOUT THE AUTHOR

Malcolm Houtz, VP of Data Science

As head of Alliant’s data science team, Malcolm balances a high volume of complex model development and analytic projects every month — while maintaining a laser focus on innovation and excellence. Malcolm is a critical thinker with an insatiable curiosity for new statistical techniques. His background as a master statistician and as an entrepreneur gives him a unique, business-oriented perspective on data mining and modeling. Prior to Alliant, Malcolm held data analyst and model development positions with Time Warner Cable, Pitney Bowes and Reed Exhibitions.

 

5 Reasons to Maintain a Human Element in Marketing Data

Analytic and modeling features permeate the many SaaS platforms in the marketing industry and advances in technology have made machine learning tools widely accessible. As the industry leans into data-driven marketing and predictive modeling, marketers often find themselves relying heavily on automated analytic tools without realizing the potential hazards. Whether starting off with a “magic box” lookalike modeling solution, or implementing advanced analytic workflows leveraging multichannel data, maintaining a human touch is critical to success.

 

A useful analogy for how automated analytic tools handle and activate data is to consider how a Tesla owner might use some of the vehicle’s advanced features. Auto-pilot is a very real feature that can assist them down the road, and fully automated self-driving is on the horizon, but for now the driver still needs to be present at the wheel. It may be tempting to look away for an extended time, but keeping human eyes on the road is a better guarantee of a safe arrival. Similarly, a ‘set it and forget it’ approach to data activation will never deliver the marketing results we desire. Expert data scientists and strategists can add unmatched value to your marketing execution — and ultimately, to your bottom line.

Whether you have in-house data experts, lean on partners, or are wondering if you should add a data scientist to your team, here are five important reasons why you need human involvement in your data workflows:

Reason #1: Algorithms aren’t consultative

Machine learning tools are great at ingesting enormous quantities of data and making sense of it. However, an algorithm can only make assessments based on the data it is supplied. ML tools can’t look outside of that data set and consider evolving regulations or cultural changes. They can’t help you develop the right objectives or success metrics. And they won’t be able to look at the impact of your unique business processes — a data scientist who understands the nuances of your business, from product, to creative and compliance, will be able to maximize the value of the data.

The consultation should start before a model is even estimated, beginning with defining the model development data set and the appropriate dependent variable for the chosen KPI. Aligning available data sources like past campaign responders or various lead streams with the campaign objective will help narrow the development data set. Consulting with a data expert can help you isolate the right dependent variable while also providing guidance on what may happen when you choose one over another. For example, modeling for higher-LTV customers will lower short-term response rates. Data scientists can also add value by suggesting a screen — a secondary model for other lagging indicators. Doing so can help balance results, protecting one KPI from tanking while another thrives. Without a human element to your analytics, the opportunity to have strategic conversations may be missed.
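
As a toy illustration of the screen concept, the snippet below pairs a hypothetical primary response score with a secondary LTV score and keeps only names that clear both thresholds; the scores and cutoffs are made up.

```python
# Toy example: a secondary "screen" model protects a lagging KPI (LTV)
# while the primary model optimizes response. All values are hypothetical.
import pandas as pd

audience = pd.DataFrame({
    "consumer_id":    [1, 2, 3, 4],
    "response_score": [0.91, 0.88, 0.74, 0.40],  # primary model output
    "ltv_score":      [0.35, 0.80, 0.77, 0.90],  # secondary (screen) model output
})

RESPONSE_CUTOFF = 0.70
LTV_FLOOR = 0.50  # screen out likely low-value responders

selected = audience[
    (audience["response_score"] >= RESPONSE_CUTOFF) & (audience["ltv_score"] >= LTV_FLOOR)
]
print(selected["consumer_id"].tolist())  # -> [2, 3]
```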

Reason #2: Creativity isn’t reserved solely for Marketing teams

Plug-and-play solutions are one-size-fits-all and often lack imagination when it could be beneficial. What if you don’t have the exact data points in your model development sample to create the desired dependent variable? If you only have a “magic box” modeling tool, then you’ve hit a wall and are out of luck. A qualified analyst can evaluate the situation and potentially construct a proxy using available data. While having rich model development data is ideal, a creative approach can push you forward when you would otherwise be stuck.

A human element can also empower you to build and test several different solutions. Various test cases can evaluate different algorithms, dependent variables, screens, input data sets and more. Quick and easy modeling tools don’t synthesize new ideas or applications. If you are in need of something different or additive to an existing solution, rerunning within the same template is not going to generate different results.

Reason #3: QA won’t happen by itself

Remember the old adage: garbage in, garbage out. If flawed input data flows into the system, the results will be subject to all sorts of issues and essentially rendered useless. Worse, bad data may go unnoticed and any models generated would be sub-optimal. Having a team to manage data hygiene and identify potential errors will save you many headaches during development and execution. This is especially true if you are matching data sets from different databases or silos within your organization. Being hands-on with the data early on will also provide an opportunity to evaluate which data sets will drive the best results, and which might just be noise.

Similarly, having a team that monitors and validates model results is necessary as well. This might be a new concept for those who have only used platform-based modeling, where there is no opportunity for QA. Even with clean and correct input data, it is possible for things to go awry in processing. A trained eye will be able to evaluate QA reports to validate model outputs, and further investigate any outliers or anomalies. Let’s say you are modeling for digital buying behavior, but you decide to include customers who had also ordered offline in the model development sample to bolster the seed size. A human would assess whether the model became too biased towards the offline behavior and adjust as needed. All of this will provide you further assurance when it comes time to activate.
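
A hypothetical QA check for that scenario might look like the sketch below, which compares holdout score distributions for online-only versus offline buyers; the file and column names are assumptions.

```python
# Hypothetical QA check: is the model leaning too heavily on offline behavior?
import pandas as pd

holdout = pd.read_csv("scored_holdout.csv")  # one row per consumer, already scored

# e.g. buyer_segment is "online_only" or "offline_included"
print(holdout.groupby("buyer_segment")["model_score"].describe())

gap = (
    holdout.loc[holdout["buyer_segment"] == "offline_included", "model_score"].mean()
    - holdout.loc[holdout["buyer_segment"] == "online_only", "model_score"].mean()
)
print(f"mean score gap (offline - online): {gap:.3f}")  # a large gap warrants adjustment
```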

Reason #4: More advanced models require analytic expertise

Lookalike modeling is a powerful tool and one that Alliant often deploys for clients. But with the constant evolution of technologies and strategies, there are many powerful new data analysis and modeling techniques available. As your business evolves you will likely want to take advantage of these and begin predicting performance for specific KPIs, or leveraging ensemble methods. For instance, multi-behavioral models can optimize for multiple consumer actions. Innovative applications like these require more than a simple upload of data into a lookalike modeling solution.

Reason #5: Things won’t always go as planned

If 2020 has taught us anything, it is that you can never be 100% sure of what will happen once you go live. Having resources available to assess the situation and make adjustments on the fly can turn potential errors into positives. In uncertain times it is unlikely you will have a data set to assist with prediction. It is ultimately up to humans to figure out how to adjust — and bring machine learning tools along for the ride.

Interested in learning more about how you can partner with Alliant’s data scientists to build custom data solutions? Contact us at any time! Our team has been on an analytic evolution, enabling the data scientists to take predictive modeling to new places and ultimately creating stronger solutions for our partners.

ABOUT THE AUTHOR

Malcolm Houtz, VP Data Science

As head of Alliant’s data science team, Malcolm balances a high volume of complex model development and analytic projects every month — while maintaining a laser focus on innovation and excellence. Malcolm is a critical thinker with an insatiable curiosity for new statistical techniques. His background as a master statistician and as an entrepreneur gives him a unique, business-oriented perspective on data mining and modeling. Prior to Alliant, Malcolm held data analyst and model development positions with Time Warner Cable, Pitney Bowes and Reed Exhibitions.

Revolutionizing Marketing Campaigns with Machine Learning

First Published by SAS, January 2020

In the start-up days of Alliant, the original data science team held SAS as the gold standard for analytics. They chose it to be the critical tool in developing regression models, core to Alliant’s solutions. Over time, SAS also became embedded in Alliant’s production processes. To meet the expanded workloads and requirements of modern marketing, Alliant migrated to the SAS Viya platform, upgrading from SAS 9.4. As our investment in leading machine-learning tools has grown, so has our partnership with SAS.

Partnering with the Alliant Data Science and Development teams, in early 2020 the SAS Customer Advocacy team published a feature on Alliant’s journey from 9.4 to Viya.

For marketers, identifying qualified prospects is critical to reaching consumers. But many companies don’t have the database or the analytical firepower to compile effective audiences. They end up blasting campaigns to audiences without taking preferences – or likelihood to respond – into account just to keep pace with their high-demand environment.

That’s where Alliant, a data-driven audience company, can vastly change a client’s marketing approach – and results. Whether you want to target millions of prospects for a campaign or purge your first party data of ineffective targets, Alliant uses machine learning algorithms powered by SAS® Viya® to quickly create bespoke marketing audiences from a database of 270 million consumers.

Lightning-fast data prep

At Alliant, creating high-performing marketing audiences for clients begins with data management. This is where Bill Adam, Alliant’s Senior Vice President of Data and Technology, enters the story. Adam and his team are responsible for all data assets at Alliant.

“Our data comes in every flavor you can imagine,” Adam says.

Alliant, a cooperative database, integrates data from a large network of marketing partners. As each partner updates its customer data monthly, Adam and his team load it into Hadoop, then use an ETL process to retrieve predictors and pass them into SAS Viya as structured data sets for analytics. Each data set includes anywhere from 10,000 to 14,000 candidate predictors for use in the modeling process.

Before installing SAS Viya, this process was cumbersome. The addition of bountiful data sources, along with a rapidly growing base of cooperative members, was ballooning data volumes from millions to billions of records. The previous data environment simply wasn’t strong enough to handle the new reality. At the same time, clients were requesting faster turnaround times.

“We had a need for speed, so naturally we called SAS,” Adam says. “After migrating to Viya, we now have ample computing power to process gigantic data sets much faster.

“In the world before Viya, say we had a universe of 100 million people for a campaign – we’d have to reduce our workload to 30 million to meet a client service-level agreement,” Adam explains. “In the Viya world, we’re running 10 times faster, so we can segment our entire universe to create better audiences.” On top of this, Alliant is finding cost savings by creating efficiencies in data prep work.

Predictive modeling with machine learning

Work continues as the Data Science team – led by Malcolm Houtz, Vice President of Data Science – constructs predictive machine learning models to segment and score large data sets into valuable marketing audiences.

Previously, Houtz and his team relied solely on logistic regression to identify good marketing targets. But clients were asking for more and more prospects. And faster. As Houtz explains it, performing more logistic regressions on a data set only reduces the number of prospects for a particular campaign. He needed to run different algorithms to cast a wider net.

SAS Visual Data Mining and Machine Learning allows Alliant to run multiple machine learning algorithms at once. While continuing to include traditional logistic regression, Alliant has transformed its modeling operations by simultaneously applying algorithms such as neural networks, support vector machines, gradient boosting and random forests to the same data set.

Alliant now can produce larger audiences by including every qualified name from each algorithm – or higher-quality audiences by including only the prospects that all five algorithms qualified as top performers. The company also can apply algorithms specifically targeted to meet clients’ KPIs, whether that’s response rate or customer lifetime value.
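
In code terms, the two assembly strategies look something like the sketch below, using made-up sets of qualified prospect IDs rather than real algorithm output.

```python
# Made-up qualified-prospect IDs from each algorithm, for illustration only.
qualified = {
    "logistic_regression": {101, 102, 103, 104},
    "neural_network":      {102, 103, 105},
    "support_vector":      {101, 102, 103},
    "gradient_boosting":   {102, 103, 104, 106},
    "random_forest":       {102, 103, 107},
}

larger_audience = set.union(*qualified.values())         # every qualified name
higher_quality = set.intersection(*qualified.values())   # qualified by all five algorithms
print(len(larger_audience), sorted(higher_quality))      # 7 prospects vs. [102, 103]
```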

A better customer experience

Alliant accesses a growing library of more than 1,200 models to provide audiences for marketers every day – in any channel – from direct mail and email to programmatic and Addressable TV. These models are on demand or custom built for clients across industries to help optimize campaigns, provide real-time scoring of data sets, or drive brand awareness.

“The speed is incredible for this volume of data,” Houtz notes. “SAS helps Alliant deliver models in a quarter of the time of traditional workflows and shorten processing times by 85%. This boosts productivity, increases client engagement and generates more income.”

Houtz shares one example where a large client needed a fast turnaround time on scoring 200 million prospects. “We couldn’t have done it previously,” Houtz says. “But with SAS Viya and our machine learning algorithms enabled with Viya, we were able to turn that project around in one day. The client was delighted, and that’s very good for our business.”

Keeping a competitive edge

“The data and marketing technology industry has evolved at a rapid pace,” Houtz says. “So we have to deliver stronger and stronger models, and SAS helps us do that.”

The implementation of SAS Viya and related products has helped Alliant stay at the forefront of industry change, delivering faster processing and a scalable architecture that enable Alliant to keep a competitive edge. And, perhaps most importantly, these updates have also provided the structure to prepare and manage data assets to meet and exceed new data compliance requirements, such as the California Consumer Privacy Act.

Shortly after the publication of the story, Alliant was invited as one of four customers to present at the recent SAS Analyst Conference, providing a platform to share its work on a successful migration of SAS Viya solutions into a cloud environment. The journey with SAS will continue later this year with additional opportunities to present the outcome of Alliant Cloud solutions as well as a future improvement roadmap, including at the SAS Global Forum.

 

ABOUT THE AUTHOR

Bill Adam, SVP, Database Development and Technology

Bill leads Alliant’s production and technology teams and is the architect of the Alliant DataHub. Prior to Alliant, Bill held senior level positions at emerging businesses and marketing companies including News America Marketing, Edible Arrangements, Ikan Technologies, Grocery Shopping Network, Zadspace and others. His development credentials include the first integrated card payment system, first patented and personalized digital savings circular and the first fully automated print and apply system for on-pack shipment advertising. He has two graduate degrees from Boston University, an MBA and an M.S. in information systems. He is the Alliant office prankster and true believer in the power of ordering pizza for your team.
Where the Cloud and Big Data Analytics Meet

Changing Marketing Data Analytics and Management in 2020

You are the master of your Big Data strategy – your first-party data is collected and organized in the Alliant DataHub, and possibly in another co-op or even a CDP. You harness it with powerful technology and sophisticated machine learning, replacing the manual segmentation strategies of another era and improving campaign ROI. But as you strategize for a new year, what more might be on your optimization wish list?

For many marketers, the answer is finding rich new data sources that are additive to current strategies. You want to remain one step ahead, developing more predictors to push ROI higher. Yet, accessing new data sources can be somewhat daunting, especially when trying to keep compliance top of mind.

Accessing new data sources often requires licensing from multiple companies. Compliance and IT battles might ensue over bringing new data in-house in a compliant manner – which in the current GDPR/CCPA climate requires an additional level of transparency and reporting. After all the back and forth, marketers must then justify the cost of the data, which can be challenging when you use only a few hundred models but pay for thousands of predictors.

Continued access to a large pool of data for ongoing analytic improvement is ultimately worth the investment, but perhaps there is a better path to big data enlightenment. Enter the cloud.

Seasoned marketers and Alliant DataHub Members can harness the convergence of the cloud and Big Data for a competitive advantage heading into 2020. The cloud makes it possible to work with large data sets and power continuously improved analytic strategies for a fraction of the cost and minus the headaches. A secure, cloud-based analytic environment would ideally be managed by a data partner, who would pull only relevant data from their sources into the cloud for development, requiring minimal resources to access data for model development and business analysis. It is then possible to easily test hundreds or thousands of new predictors – seeing what is additive to your models and licensing only the variables that work best for live production applications, which keeps costs under control. Additionally, managing data used for an analytic work product in the cloud reduces consumer reporting requirements, as you never have to onboard a full set of 3rd party data under your purview.
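
To illustrate the “test many, license few” idea, here is a rough sketch that measures each candidate predictor’s incremental lift over a baseline model and keeps only those that clear a threshold; the data, predictor groupings and cutoff are illustrative assumptions.

```python
# Rough sketch: keep only candidate predictors that add measurable lift
# over the predictors already licensed. Data and thresholds are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=4000, n_features=30, n_informative=8, random_state=2)
base_cols = list(range(5))        # predictors already licensed
candidates = list(range(5, 30))   # new predictors available for testing in the cloud

def mean_auc(cols):
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, cols], y, cv=5, scoring="roc_auc").mean()

baseline = mean_auc(base_cols)
worth_licensing = [c for c in candidates if mean_auc(base_cols + [c]) - baseline > 0.005]
print(f"baseline AUC {baseline:.3f}; predictors worth licensing: {worth_licensing}")
```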

Alliant is confident that turning to cloud-based analytic solutions and only activating data sets that will drive additive results is the next big trend in big data. Interested in learning how we can develop a custom “analytics lab” for your team in 2020? Let’s talk!

ABOUT THE AUTHOR

Bill Adam, SVP, Database Development and Technology

Bill leads Alliant’s production and technology teams and is the architect of the Alliant DataHub. Prior to Alliant, Bill held senior level positions at emerging businesses and marketing companies including News America Marketing, Edible Arrangements, Ikan Technologies, Grocery Shopping Network, Zadspace and others. His development credentials include the first integrated card payment system, first patented and personalized digital savings circular and the first fully automated print and apply system for on-pack shipment advertising. He has two graduate degrees from Boston University, an MBA and an M.S. in information systems. He is the Alliant office prankster and true believer in the power of ordering pizza for your team.