This week in privacy and big data ethics
The (mis)use of big data is not a new theme in public discourse, but in the last week another light has been cast on the Cambridge Analytica scandal by Netflix’s new viral hit, ‘The Great Hack’; a number of Apple employees have been fired after admitting to listening to private, accidental interactions with Siri (which apparently included drug deals, sex, company meetings and a host of other private moments); and debate is again rife around privacy and ethics in big data. Depressingly, it has been a week not unlike many in recent memory: plenty of emotional reaction, but few acts of bravery and change across the organizations that shape these issues.
By Jack Gibbon
‘The Great Hack’ brilliantly unpacked the events surrounding Cambridge Analytica and its manipulation of customer data to exploit the prejudices of voters on social media, both for Trump’s election campaign and for the Brexit referendum. The scandal demonstrated the ability of algorithms to understand and exploit human emotion and behavior, but with the intelligence of algorithms growing at an exponential rate and an ever-growing pool of new customer data, we’re only scratching the surface of what might be possible in future.
A growing pool of private customer information
Apple employees listening in to personal recordings of accidental Siri interactions is a prime example of a new source of data we underestimated. There was a time when being targeted with a Facebook ad based on a private conversation on WhatsApp (imagine being targeted by a funeral home after discussing a terminal illness with a family member) would have stood out, but as more of our personal data is exploited for more relevant targeting and advertising, we’re becoming increasingly desensitized to this exploitation of seemingly private information. With the number of connected devices at our fingertips, we are creating 360-degree profiles of ourselves, fueled by our every interaction with technology, and this week’s Apple scandal highlights that our greatest fears around who is listening to and analyzing this information are being realized.
New means of interpreting and exploiting this information
With the audience profiling available to Cambridge Analytica via some highly questionable data collection tactics on Facebook, it was claimed that 150 likes were enough to understand your personality better than your parents do, 300 enough to know you better than your partner does, and, with enough likes, even better than you know yourself. As organizations race toward more advanced and effective marketing and user experiences, and as advancements in algorithm intelligence allow marketers to build more holistic targeting profiles and to exploit the needs, desires and vulnerabilities of customers, regulation and appropriate governance are struggling to keep up.
GDPR in Europe has shown the power of regulation in managing the collection, storage and utilization of personally identifiable data, but countless other profiles that don’t rely on your email address can be built from the behaviors and attributes collected through the way you interact with technology. Well-known organizations are also filing patents on the application of biometric data, and as marketers, before long we’ll be wrestling with the associated ethics: who owns it, how we keep it secure, how we avoid bias in its application, and many other questions.
Imagine an AI in your ear (think Scarlett Johansson in ‘Her’) able to interpret your emotional reaction to your surroundings and experiences better than you can yourself. Think about the collection and interpretation of this information at scale, and how useful it could become in making your everyday decisions simpler: ‘people with a similar biometric profile prefer this brand of milk, and liked how it tasted with this brand of cereal’. Imagine the possibilities available to brands as they combine biometric history and preference data, and how much easier it will become to manipulate and exploit the subconscious of different customer groups, including those most vulnerable in society.
Questions to be asked of everyday marketers
We’re not quite there yet, but we’re frighteningly close. As marketers and practitioners of big data, we need to start asking questions of our everyday practices and to define where we draw the line when it comes to exploiting and misusing the targeting and big data opportunities at our fingertips.
- At what point do our everyday profiling and targeting activities become exploitative of vulnerable need states or unhealthy desires?
- At what point does advancement in user experience design and the use of behavioral data become guilty of creating addictive experiences?
- At what point does marketing technology move from being helpful in decision making, to manipulating us into making decisions?
We have seen through the Cambridge Analytica scandal how failing to self-regulate and to hold ourselves accountable to these questions can have a frightening impact on global leadership and policy, and with this technology advancing at an alarming rate, the time for change is now if we are to avoid the most dystopian of potential outcomes.
Looking to brands for inspiration
Patagonia is a great example of a brand pioneering moral fabric: its commitment to slower, more sustainable growth is a blueprint for retailers to shift their KPIs from acquisition and conversion metrics toward sustainability and impact on society. But at a time when some airlines are encouraging passengers to fly in moderation (KLM’s ‘Fly Responsibly’) and alcohol companies are encouraging customers to drink responsibly (Heineken’s ‘Drink Responsibly, Enjoy More’), which brands will be brave enough to restrain their use of customer insight and analytics that exploits the more vulnerable customer needs or biases to sell their product? Few credible current brand examples exist. It is hard to imagine when this will change, and even harder within public companies whose strategy and policy are built around driving shareholder value.
I don’t doubt that some of the world’s smartest minds are being applied to this problem, but as day-to-day marketers and guardians of data, what can we do to prepare for and mitigate the potential negative outcomes of ever more effective and direct targeting and messaging capabilities? We can wait for regulation to catch up with the speed of technological enhancement (good luck with that…), or we can begin putting in place practices and measures that are ownable and achievable.
Here are a few thought starters to get you on the right track:
1. Anticipate the most frightening potential outcome
Create a Black Mirror-style hypothesis of the most dystopian impact your everyday marketing or data processes might have on your customers. At this year’s SXSW, Amy Webb introduced a futurist tactic for planning against optimistic, neutral and pessimistic outcomes from technology, so check out her Emerging Tech Report for more.
2. Adopt purposeful KPIs
Put measures in place outside of traditional acquisition, engagement and conversion metrics to ensure you’re focusing on goals that create a healthier brand experience for your customers and for society, and can measure and manage your impact. Talk to customers, stakeholders and those exposed to your marketing efforts to understand how best to tailor metrics around the issues you are hoping to address for them.
3. Review the roles and decision makers involved in the practice of big data
In the future, marketing ethics roles will become commonplace in big organizations, sense-checking campaigns against a moral agenda. In the meantime, ensure your workforce is empowered to ask difficult questions and to challenge any brief that could create exploitative experiences for your customers, with a range of roles from diverse backgrounds to account for the needs and vulnerabilities of the wider range of customers and cultures exposed to your work.
4. Be a brand champion for big data ethics
Beyond the moral obligation to your customers and society, there are obvious brand benefits in being a pioneer and champion of the ethical practice of big data in marketing. Encourage ongoing reviews against GDPR principles and build your own proprietary ethics manifesto. If you operate outside the EU, GDPR-style regulation is likely to follow, and from experience helping clients prepare for this transition: act quickly to ensure you can still contact your most loyal customers appropriately.
To adopt an active, rather than passive, approach to ethics in big data, scrutinize the partners in your marketing ecosystem to ensure they handle sensitive customer information appropriately, and risk-assess their infrastructure and privacy practices to protect customers. The company you keep says a lot about you, and keeping company with those guilty of exploitation can no longer be excused, even if they were just a ‘mentor’ (sorry, Brittany Kaiser).
Feel free to reach out to find out more about how we are helping clients navigate disruption in big data and marketing technology, or if you’d like to kick off a discussion around more ethical practices of big data within your organization.