Better science and robust data produce better results. So to evaluate predictive marketing, you don’t need to be a scientist; you just need to measure who delivers the best results.
Retail magnate John Wanamaker once said: “Half of my advertising is wasted. The trouble is I don’t know which half.” We have great news for Mr. Wanamaker—marketing has taken such a leap forward that today you can not only tell which half is wasted, but actually predict what will be wasted before a campaign is even launched.
Unlike in Wanamaker’s time, marketers today would never think about launching a campaign without careful tracking of results and ROI. Marketers still apply the same creativity and imagination as before, but they have more tools to know whether their initial hypothesis about the campaign actually resonates with their target audience.
Predictive marketing takes tracking a step further. Now marketers can use robust data to predict what works and who will respond to campaigns. The ability to preempt rather than react is a game-changer in a discipline that constantly struggles to improve performance and ROI. But how do you know which predictive models work best? Like any other marketing initiative, you have to track the results.
Evaluating predictive marketing
When evaluating predictive marketing, marketers often get intimidated by industry terms such as statistical significance, random forest classifiers and neural networks. Let me make a bold claim—marketers should worry only about business results and let the scientists handle the science. Consistently superior business outcomes typically stem from better science, so the one follows from the other.
Predictive analytics should be evaluated like Web design. In the past, marketers had to sit through lengthy explanations of why blue implies trust and confidence while green means balance and growth. Today, marketers can simply A/B test two designs and see which one works.
Predictive marketing delivers tangible results that can be measured. Marketers therefore don’t have to judge the quality of a model by counting the number of PhDs on the wall. They can test it by simply running two models head to head and seeing which one predicts more accurately.
Demystifying the “mystery file”
One great exercise that Mintigo does with clients to demonstrate the power of predictive marketing is the “mystery file”. The client picks a list of leads without telling us which ones ended up as closed deals. Our job is to identify those deals within the “blind” list.
For example, as a test for one client, Mintigo had to predict which leads generated in 2013 were going to convert in Q1 2014. The results were phenomenal—Mintigo identified 80% of the leads that converted. With another large client, Mintigo identified 82% of the converting leads out of a random list, and also found that only about 15% of the leads were likely to convert in the future.
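The mystery-file exercise boils down to measuring recall on a blind holdout: of the leads that actually closed, what fraction did the model flag? Here is a minimal sketch of that comparison; the lead IDs, model picks, and function name are hypothetical illustrations, not Mintigo’s actual data or methodology.

```python
def recall(predicted_converters, actual_converters):
    """Fraction of actual converters that the model flagged."""
    predicted = set(predicted_converters)
    actual = set(actual_converters)
    if not actual:
        return 0.0
    return len(actual & predicted) / len(actual)

# Hypothetical blind list: the vendor sees only lead IDs;
# the client alone knows which leads closed.
model_a_picks = {"lead-02", "lead-05", "lead-07", "lead-09"}
model_b_picks = {"lead-01", "lead-03", "lead-05"}
closed_deals  = {"lead-02", "lead-05", "lead-09", "lead-11", "lead-12"}

print(f"Model A recall: {recall(model_a_picks, closed_deals):.0%}")  # 60%
print(f"Model B recall: {recall(model_b_picks, closed_deals):.0%}")  # 20%
```

Whichever model recovers more of the truly converted leads wins the evaluation—no knowledge of the underlying algorithms required.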
In short, predictive analytics is like weather forecasting: it doesn’t matter how sophisticated your forecasting models are if you still end up soaked without an umbrella.
The two secrets of powerful predictions
These results are not coincidental. In fact, Mintigo has achieved similar results across clients. In head-to-head evaluations against competitors, Mintigo has come out ahead across the board, including with clients like SolarWinds and Neustar. There are two secrets to getting powerful and accurate predictions—robust data and smart modeling.
Referring to bad data that leads to bad results, modeling experts say: “garbage in, garbage out”. To provide stellar results, Mintigo scrutinizes its data to create the industry’s most robust and up-to-date database. Mintigo’s database is mined from the Web and provides very high coverage of companies and decision-makers. Mintigo scans billions of Web pages, news sites, databases, and social networks to collect and process the data. This robust and continuously updated data is then fed into our models.
Modeling is our second competitive advantage. Mintigo has some of the brightest minds in the field of machine learning and data science. These individuals had to overcome major challenges in modeling big data, such as handling missing values or merging Web data with CRM data. Better models combined with better data lead to great predictive results.
Wanamaker didn’t have the technology to evaluate what works. With predictive marketing, however, it’s easy to measure which predictions are best. Science matters, but it should be left to scientists. Marketers should care about one thing—performance! Great performance is the result of great science.