Does Personalized Advertising Work as Well as Tech Companies Claim?

Written by publishing team

Several major tech companies have recently built platforms that claim to educate companies on how best to market themselves and their products online. Examples include Meta for Business (formerly Facebook for Business; “Get step-by-step guidance, industry insights and tools to track your progress, all in one place”), Think with Google (“Take your marketing further with Google”), and Twitter for Business (“Growing Your Business With Twitter Ads”).

These sites are very attractive. They provide small and medium-sized businesses with a wealth of useful information about doing business online, and of course, they offer a variety of advertising tools and services designed to help those businesses boost their performance.

All of these websites have the same basic goal. They want you to see their tools and services as powerful and highly personalized, and they want you to invest your marketing money in them.

It’s not as simple as it seems

Facebook is perhaps the most compelling of the three companies mentioned above. In recent weeks, the company has been running ads telling all kinds of inspiring stories about the small businesses its services have helped. You may have seen some of these ads in airports, in magazines, or on websites. My Jolie Candle, a French candle maker, “find[s] up to 80% of their European customers via Facebook platforms.” Chicatella, the Slovenian cosmetics company, “attributes up to 80% of its sales to Facebook apps and services.” Mamie Poppins, a German supplier of baby gadgets, “uses Facebook ads to drive up to half of its revenue.”

This sounds impressive, but should companies really expect such big impacts from advertising? The truth is that when Facebook, Google, Twitter, and other big tech companies “educate” small businesses about their services, they often encourage incorrect conclusions about the causal effects of advertising.

Consider the case of our consulting client, a European FMCG company that has built its brand for many years around sustainability. The company wanted to explore whether online ads making a sustainability claim might actually be more effective than ads without one. With the help of Facebook for Business, it ran an A/B test of the two ads and then compared the return on ad spend between the two conditions. The test found that the return was significantly higher for the sustainability claim. That means this is what the company should invest in, right?

Actually, we don’t know.

There’s a fundamental problem with what Facebook does here: the tests it offers under the rubric of “A/B” testing aren’t actually A/B tests at all. This fact is widely misunderstood, even by experienced digital marketers.

So what’s really going on in these tests? Here’s an example:

1) Facebook divides a large audience into two groups – but not everyone in the groups will receive the treatment. This means that many people will not see an ad at all.

2) Facebook starts selecting people from each group, and offers a different treatment depending on which group the person was sampled from. For example, the selected person from group 1 will receive a blue ad, and the selected person from group 2 will receive a red ad.

3) Facebook then uses machine learning algorithms to improve its selection strategy. The algorithm might learn, for example, that younger people are more likely to click on a red ad, so it will then start serving that ad more to younger people.

Do you see what is happening here? The machine learning algorithm that Facebook uses to improve ad serving actually invalidates the design of A/B testing.

Here’s what we mean. A/B tests are built on the idea of random assignment. But are the assignments in step 3 above random? No. And this has important implications. If you compare the treated people from group 1 with the treated people from group 2, you can no longer draw conclusions about the causal effect of the treatment, because the treated people from group 1 now differ from the treated people from group 2 in more dimensions than just the treatment. The treated people from group 2 who were shown the red ad, for example, would end up being younger on average than the treated people from group 1 who were shown the blue ad. Whatever such a test is, it’s not an A/B test.
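The three steps above can be sketched in a few lines of illustrative Python. Every number here is invented; the only point of the sketch is that once the optimizer in step 3 favors younger users for one ad, the two treated groups stop being comparable:

```python
import random

random.seed(0)

# Hypothetical population where age is the only trait we track.
population = [random.randint(18, 65) for _ in range(100_000)]

# Step 1: split the audience at random into two groups.
random.shuffle(population)
group_1, group_2 = population[:50_000], population[50_000:]

# Steps 2-3: not everyone is treated, and the delivery algorithm has
# "learned" to preferentially serve the red ad (group 2) to younger users.
def serve(group, boost_young):
    treated = []
    for age in group:
        p = 0.6 if (boost_young and age < 35) else 0.2  # invented rates
        if random.random() < p:
            treated.append(age)
    return treated

treated_blue = serve(group_1, boost_young=False)
treated_red = serve(group_2, boost_young=True)

def avg(ages):
    return sum(ages) / len(ages)

print(f"avg age of people shown the blue ad: {avg(treated_blue):.1f}")
print(f"avg age of people shown the red ad:  {avg(treated_red):.1f}")
```

Despite the random split in step 1, the treated subsets end up with clearly different average ages, so any difference in their purchasing behavior mixes the effect of the ad with the effect of age.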

It’s not just Facebook. Think with Google likewise presents ROI-like metrics as causal, when in fact they are merely associative.

Imagine that a company wants to know if an advertising campaign is effective in increasing sales. The site suggests that the answer to this question involves a straightforward combination of basic technology and simple math.

First, you have to set up conversion tracking for your website. This allows you to track whether customers who have clicked on an ad have proceeded to make a purchase. Second, you calculate the total revenue from these customers and divide it by (or subtract from it) your advertising expenditure. This is your ROI, and according to Google, it is “the most important metric for retailers because it shows the true impact of the Google Ads program on your business.”

Actually, it is not. Google’s analysis is flawed because it lacks a point of comparison. To really know whether advertising is profitable for your business, you need to know what revenue would have been generated in the absence of advertising.
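A minimal sketch with made-up figures shows why the missing comparison point matters. Suppose, purely hypothetically, that most of the revenue conversion tracking attributes to a campaign would have arrived anyway:

```python
ad_spend = 10_000.0               # hypothetical campaign cost
revenue_from_clickers = 50_000.0  # hypothetical revenue tracked from ad clickers

# ROI computed as the tracking recipe suggests: tracked revenue vs. spend,
# with no baseline for comparison.
naive_roi = (revenue_from_clickers - ad_spend) / ad_spend

# Counterfactual view: assume (for illustration) that 40,000 of that
# revenue would have been earned even without the campaign, because many
# clickers were already loyal customers.
baseline_revenue = 40_000.0
incremental_revenue = revenue_from_clickers - baseline_revenue
true_roi = (incremental_revenue - ad_spend) / ad_spend

print(f"ROI without a comparison point:       {naive_roi:.0%}")
print(f"ROI against the no-ad counterfactual: {true_roi:.0%}")
```

Under these invented numbers the tracked ROI looks like 400%, while the incremental return, measured against what would have happened without the ads, is zero.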

Twitter for Business offers a somewhat more involved proposal.

First, Twitter works with a data broker to access cookies, emails, and other identifying information from the brand’s customers. Twitter then adds information about how those customers interact with the brand on Twitter – whether they click on the tweets promoted by the brand, for example. This should allow marketing analysts to compare average revenue from customers who have interacted with a brand with average revenue from customers who have not. If the difference is large enough, the theory goes, it justifies the advertising expenditure.

This analysis is comparative, but only in the sense of comparing apples and oranges. People who buy cosmetics regularly don’t buy them because they see promoted tweets. They see the tweets promoting cosmetics because they regularly buy cosmetics. In other words, customers who see Promoted Tweets from a brand are very different people from those who don’t.
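This selection effect is easy to reproduce in a toy simulation (all parameters invented): give heavy buyers both higher spending and a higher chance of engaging with promoted tweets, make the ad itself have zero causal effect, and the engaged group still looks far more valuable:

```python
import random

random.seed(1)

# Toy model: 30% of customers are heavy buyers. Heavy buyers spend more
# AND are far more likely to engage with promoted tweets. Engagement
# itself has no effect whatsoever on spending.
records = []
for _ in range(50_000):
    heavy = random.random() < 0.30
    spend = random.gauss(200, 30) if heavy else random.gauss(50, 15)
    engaged = random.random() < (0.50 if heavy else 0.05)
    records.append((engaged, spend))

def mean(values):
    return sum(values) / len(values)

engaged_spend = mean([s for e, s in records if e])
other_spend = mean([s for e, s in records if not e])

print(f"avg revenue, engaged with promoted tweets: {engaged_spend:.0f}")
print(f"avg revenue, not engaged:                  {other_spend:.0f}")
```

The engaged customers appear to generate far more revenue even though, by construction, the promoted tweets caused none of it. The comparison measures who the customers are, not what the ads did.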

Causal confusion

Companies can answer two types of questions using data: prediction questions (as in, “Will this customer buy?”) and causal inference questions (as in, “Will this ad make this customer buy?”). These questions are distinct but easy to confuse. Answering causal inference questions requires counterfactual comparisons (as in, “Would this customer have bought without this ad?”). The clever algorithms and digital tools created by major tech companies often offer apples-to-oranges comparisons to support causal inferences.

Big tech companies should be well aware of the distinction between prediction and causal inference, and of how important it is to allocating resources effectively; after all, they’ve hired some of the smartest people on the planet for years. Targeting potential buyers with ads is a pure prediction problem. It does not require causal inference, and it is easy to do with today’s data and algorithms. Convincing people to buy is much more difficult.

Big tech companies should be commended for the useful materials and tools they provide to the business community, but small and medium-sized companies should realize that advertising platforms pursue their own interests when they provide training and information, and that these interests may or may not align with those of small companies.

Editor’s note (12/16): Updated headline on this piece.
