A lot has been said recently about GDPR and how it will change our legal responsibilities when working with data. In fact, we’ve already discussed it on this very blog. Last week’s sudden explosion of interest in Facebook, after Cambridge Analytica harvested data from millions of users on the social media platform, has highlighted something that often goes overlooked. It isn’t just a matter of following the rules and staying inside the dotted lines. If people feel that you’ve acted unethically with their data, they’ll be mad at you and your brand anyway, and rightly so. This has consequences far beyond your legal liabilities.
So, here are four questions you should be asking yourself before working with data on a campaign. They might not mean much from a legal standpoint, but will hopefully help you look in the mirror with a clearer conscience.
Did this data come from a known and reliable source?
One of the first rules of any analytical process is to ensure that the data you use means what you think it means. If it doesn’t, you are effectively doomed before you start. Do you know where the data came from originally? Have you gone to the original research source, or have you merely picked it up from the end of a long Chinese whisper chain of caffeine-crazed tech journalists?
It doesn’t matter whether you are writing a whitepaper that will be subjected to intense and protracted scrutiny, or ‘merely’ supporting an argument on a PowerPoint slide (that might later result in a decision that sees hundreds of thousands of <insert currency here> being inadvertently lit on fire). When you’re dealing with any kind of data, ask yourself the following questions:
- Does the source for this data have an agenda? (Hint – yes)
- Is the source of this data legitimate or are they actually a bunch of muppets?
- Did they misspell ‘data’?
I am being flippant here, because a serious discussion of these considerations deserves more space than we can give it. For now, though, we need to acknowledge that without spending the time and effort to establish that your data is valid, any analysis you do with it might as well be immortalised in crayon.
Could people be harmed if I use this data in the wrong way?
In other words: “would I be comfortable with this ad being served to my parents?”. It’s easy to get caught up in KPIs and forget that advertising fundamentally means selling things to real people.
I’m not suggesting that we should be constructing intricate risk models to completely eliminate the possibility that Reggie, having viewed a particularly persuasive M&S retargeting ad, will spend his last £5 on a cosy set of earmuffs, miss the bus home, and contract pneumonia (with very warm ears) in an unexpected flurry of the wrong kind of snow on Chippenham High Street. It’s reasonable for customers to take at least some responsibility for their decisions.
But neither should we pretend that some segments aren’t particularly vulnerable. If you find yourself targeting elusive high-spending ‘whales’ or using complicated psychometric tricks to manipulate your customers into spending, this could be a clear sign that what you’re selling may not be in your target’s best interest. Elderly people, gambling addicts, people with mental health problems, and those struggling financially are all groups we should be advertising to (if at all) in very sensitive ways. Because let’s face it – just how much is that fist bump from your line manager Chad in the monthly “ROI-nd Table” meeting really worth?
Do I trust this partner with my data?
“Do you really want to see what you’ll find?” was reportedly the response of a Facebook executive when concerns were raised internally about the use of data by Cambridge Analytica. This is a sentiment that has no place when working with data (or in many other aspects of marketing). It does not absolve you morally of responsibility for the actions of others, and it will not absolve you legally.
If you are going to give people access to your data then you need to know:
- Whether you have permission to share it with them, and under what conditions
- What they need to use it for
- How you are going to transfer it to them safely
- Where they are going to store it, and how they will protect it
- Who will have access to it within their organization
- How long they are going to keep it, and how they will dispose of it
- That they aren’t going to share it with anyone else (without your permission and a similar due diligence process)
Essentially, you need to be aware of all of the same factors as when considering your own internal data handling; otherwise, you may find yourself disclosing a breach of your customers’ data as a result of someone else’s mistake.
Do I believe in this conclusion?
The final catch-all: it doesn’t matter what you are doing, if you find yourself justifying, reporting, or otherwise endorsing a conclusion that you don’t believe in, you need to take a step back and think about exactly why you are doing what you are doing.
What is the source of your discomfort? Is there any way you can increase your confidence? If you don’t believe in the data, why are you expecting anyone else to? Why are you voluntarily committing mathematics if there is no reason to? We can still make decisions and commit to actions without perfect data – we will NEVER have perfect data – we just need to be clear with others about our reasoning for doing so.
It’s easy to preach about this stuff from a distance. As marketers, you will often find yourselves with little control over clients and their expectations, but ignoring moral quandaries can come at considerable cost. We also need to be collectively aware that there are more eyes on our industry (regulatory, journalistic, and public) than ever before, more complicated ways to mess this up, and ever greater consequences resulting from those mistakes. There is a real risk in ignoring all of this, and it is growing quickly. But beyond that, if we want to attract the best people to marketing, we need to make sure that we can offer jobs that we can all be proud of.