As much opportunity as AI has created for brands and consumers alike, when used incorrectly it can lead to marketing practices that are problematic at best and discriminatory at worst.
Customer data is one of the most valuable assets for any brand. After all, if you have data on a customer’s browsing history, buying behavior and demographics, you can predict what they’re likely to buy — plus when they’re likely to purchase and how much they’re likely to pay — and serve them relevant, timely information. Data enables brands to deliver the holy trinity of “right customer, right time, right message.” The shopper enjoys a personalized, expedited journey from browse to buy, while the brand converts sales and builds customer loyalty. A clear win-win.
As brands everywhere grapple with the “new normal,” which includes a new receptivity among many consumers to trying new brands and new ways of shopping, knowing what customers want is more important than ever. Not surprisingly, there’s a significant push to embrace personalization across a brand’s digital touchpoints. According to market research Acquia recently completed, surveying more than 8,000 marketers across eight countries, 96% of respondents reported seeing improvements in customer engagement after personalization initiatives. Half of the marketers surveyed also reported increased engagement with their brand, and 41% experienced more repeat purchases.
AI is swiftly becoming one of the most important ways brands maximize the value of vast amounts of customer data and create personalized digital experiences at scale. In that same Acquia study, 44% of respondents reported adopting AI and machine learning tools in the past 12 months. That strong uptake isn’t surprising: a 2019 Deloitte survey of 1,100 U.S. advertising and marketing firms considered early AI adopters found that 82% reported positive ROI on their AI initiatives.
As more brands adopt AI to power personalized experiences, bringing efficiency and improvements to customer experience and engagement, they must stay mindful of AI bias and its potential to rear its ugly head when these tools are misused.
Blame the Workman
In the marketing sector, there are a number of unexpected pitfalls to using AI in the wrong way. But as the saying goes, when something goes awry, blame the workman, not the tools. AI is no exception.
One such example commonly found in marketing is price discrimination. Also called dynamic or personalized pricing, the practice involves charging different people different amounts for the same goods. AI might recommend a product for a certain shopper to buy online, with the price automatically adjusted based on the data the AI holds on that customer. This could include an IP address, location, browsing history and past purchases, through to more personal information such as occupation, education level, age, race and gender.
For example, a brand may use gender, or the most prominently represented ethnicity in a given area, to determine the type of contract it offers a prospective client. There’s strong potential to use this sort of detailed data to maximize revenue, but to do so in a manner that is clearly discriminatory. The only reliable safeguard is to keep demographic attributes, and close proxies for them, out of pricing decisions.
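One practical guardrail is to strip protected attributes from a visitor’s data before it ever reaches a pricing or recommendation model. The sketch below is a minimal illustration of that idea; the field names and the blocklist are hypothetical, not a reference to any particular platform.

```python
# Minimal sketch: filter protected attributes out of a feature set
# before it is passed to a pricing model. Field names are hypothetical.

PROTECTED_ATTRIBUTES = {"race", "ethnicity", "gender", "age"}

def sanitize_features(raw_features: dict) -> dict:
    """Return a copy of the feature dict with protected attributes removed."""
    return {k: v for k, v in raw_features.items()
            if k not in PROTECTED_ATTRIBUTES}

visitor = {
    "browsing_sessions": 12,
    "past_purchases": 7,
    "location": "US-OH",
    "gender": "F",   # protected: must not reach the pricing model
    "age": 34,       # protected: must not reach the pricing model
}

safe = sanitize_features(visitor)
print(sorted(safe))  # only the non-protected keys remain
```

Note that dropping the attributes themselves isn’t a complete fix: remaining fields such as location can act as proxies for demographics (the neighborhood-ethnicity example above is exactly that), so model outputs still need to be audited for discriminatory patterns.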
It’s a bit of a fine line, but a clear one, too.
With Great Power Comes Great Responsibility
Delivering personalized digital experiences based on a visitor’s unique characteristics and browsing behavior has been shown to improve engagement metrics and increase conversion rates. But as the tools have evolved, the way we tailor content and the types of content we personalize have also changed. The only way to do this at the scale and accuracy brands require today is via AI-enabled tools.
The word “tools” is critical here. As with any tool, AI can be used for good or for ill, and it’s easy for “bad” or biased AI to damage customer relationships. AI itself is only as biased as the data it’s built on or the way it’s used. To quote Spider-Man: with great power comes great responsibility.
The key for brands is to have the right underlying infrastructure — one that enables in-depth analysis of data and transparent, open means of data collection. Data platforms must stitch together previously siloed databases to create a holistic (but anonymized) view of every customer. That view can then be used for analysis, to engage ethically with new customers and to optimize the experience for existing ones. (It also gives brands a way to easily delete that information should a consumer request it.)
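The stitching-and-deletion workflow above can be sketched in a few lines. This is a toy illustration under stated assumptions — the source systems, field names and the hashing-based pseudonymization are all hypothetical stand-ins for what a real customer data platform does with identity resolution:

```python
# Sketch: merge siloed records into one pseudonymous profile, and honor
# a deletion request by removing everything keyed to that customer.
import hashlib

def pseudonymize(email: str) -> str:
    """Replace a direct identifier with a one-way hash key (toy example)."""
    return hashlib.sha256(email.lower().encode()).hexdigest()[:16]

# Two previously siloed sources, both keyed by email (hypothetical data).
crm = {"ada@example.com": {"loyalty_tier": "gold"}}
web_analytics = {"ada@example.com": {"pages_viewed": 12}}

profiles = {}  # unified view, keyed by pseudonymous ID
for source in (crm, web_analytics):
    for email, attrs in source.items():
        profiles.setdefault(pseudonymize(email), {}).update(attrs)

def forget(email: str) -> None:
    """Right-to-erasure: drop all linked data for one customer."""
    profiles.pop(pseudonymize(email), None)

merged = profiles[pseudonymize("ada@example.com")]
print(merged)  # attributes from both silos, under one key

forget("ada@example.com")
print(len(profiles))  # the whole linked profile is gone in one step
```

Keying every silo to the same pseudonymous ID is what makes both halves of the promise workable: analysis happens on the unified view rather than on raw identifiers, and a single deletion removes the customer’s data everywhere it was linked.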
The notion of “bias in AI” is something I expect to hear more about as more companies adopt these tools and look to deliver personalized digital experiences at scale. As with any new technology, there are learning curves, whether you’re a novice developer or one of the world’s largest companies trying something for the first time. The most important thing to recognize is that, in many ways, the power of AI sits in the hands of whoever uses it — and that power must be wielded wisely.