When I wrote about the barriers to neuroscience adoption a while ago there was one issue I skirted which probably could have been fleshed out a bit more – ethics. When I did the talk at XLRI I basically lumped it in with ‘Scepticism’ but I got asked about it again in a meeting yesterday so thought I’d note a couple of thoughts.
The reason that neuroscience/biometric methods are appealing to researchers and marketers is also the thing that makes people unsure as to whether it’s ethical to use them. It’s well known that people don’t know why they behave the way they do and find it hard to talk about the way they feel about things – so cutting out the middle man and going direct to source for that info is pretty valuable. But those worried about this stuff would say that the middle man we’re cutting out is sort of the bit where our humanity resides. People may, justifiably, want to keep some things under wraps for any number of reasons (although most often we’re learning things that people can’t tell us rather than things they won’t). As a result, those who doubt the ethics of biometric measurement would argue that we are taking knowledge that people have not volunteered and, like black magic, using it to control them.
Those are potentially valid concerns. My argument, however, is that they are not concerns specific to neuroscience/biometric research but to all research and, by extension, any advertising that draws learning from that research. If you have an ethical problem with research and advertising generally (which some do) then fine – but people raising these concerns with me in meetings are usually people spending a lot of money on advertising and research, so I assume they don’t.
Let me illustrate my point. As Tom Ewing has ably pointed out, what those critics of research who roll out the ‘all research is wrong‘ line fail to understand is that we already know that it is (in an absolute sense). As such, we have all kinds of ways of finding stuff out about people’s thoughts, feelings and desires without asking them directly. Take one of Millward Brown’s many hundreds of tracking studies: I don’t ask people to rank the associations they have with brands by how important they are, because I know they’re not very good at it (and often they just don’t know whether sexy or easy to use is more important to them). Instead, given that I also ask them about their preference for brands, I am able to run myself a lovely regression model from which I can infer which of those associations are the stronger drivers of preference, either within the category as a whole or for specific brands. They haven’t told me why they like Persil better than Surf, but sneaky researcher that I am, I’ve found out – and maybe Persil will use that information to better target them and people like them.
(Before you say so: I know that’s not perfect. I know my model is only telling me about the relative strengths of relationships between claimed brand preference and a set of associations that I chose to put to the sort of people I chose, at the specific point in time, by now already in the past, that I deigned to ask them. But I’d ask you to focus on the matter at hand – this isn’t a question of which [if either] of these approaches is more valid, it’s about whether or not they are ethical.)
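For the curious, the kind of ‘derived importance’ driver analysis described above can be sketched in a few lines. This is a hypothetical illustration, not Millward Brown’s actual methodology: the attribute labels, sample size, and weights are all made up, and a real study would use far more care around multicollinearity and sampling.

```python
import numpy as np

# Hypothetical driver analysis sketch: respondents give a brand
# preference score (continuous) and say whether they associate each
# attribute with the brand (1 = yes, 0 = no). Regressing preference on
# the associations yields a coefficient per attribute, which we read as
# its relative strength as a driver of preference.

rng = np.random.default_rng(42)
n_respondents = 500
labels = ["sexy", "easy to use", "good value"]  # made-up attributes

# Simulated association data: one binary column per attribute.
associations = rng.integers(0, 2, size=(n_respondents, len(labels)))

# Simulate preference so that "easy to use" secretly matters most --
# this is the structure the regression should recover.
true_weights = np.array([0.5, 2.0, 1.0])
preference = associations @ true_weights + rng.normal(0, 1, n_respondents)

# Ordinary least squares: prepend an intercept column and solve.
X = np.column_stack([np.ones(n_respondents), associations])
coefs, *_ = np.linalg.lstsq(X, preference, rcond=None)

# Coefficients (skipping the intercept) rank the derived importance of
# each association -- something respondents never stated directly.
for label, beta in zip(labels, coefs[1:]):
    print(f"{label}: {beta:.2f}")
```

No respondent was ever asked ‘is easy to use more important to you than sexy?’, yet the model recovers that ordering from the pattern of answers – which is exactly the point being made above.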
So research has, for many years, generated insights into human behaviour without people necessarily volunteering them directly. People are no more aware of my regression models and how brands are using them to sell them more soap powder than they are of the brainwave outputs they voluntarily hand over and how clever boffins run those through an algorithm to give me something I can use to understand advertising better. Is either of those things deceitful and insidious? Maybe, but if so, I say they are equally so.
Further, some perspective is required. We may be increasingly tapping into people’s deep, unspoken desires – but we’re not homing in on a magic bullet that lets advertisers push a button that has a definitive and predictable consumer response at the other end. We are gaining a better understanding of why an ad has worked when we do get it right of course and therefore narrowing the odds of us being able to repeat it – but we’re a long way off complete mind control. People still have agency. That’s not going anywhere.
As long as the usual rules apply – we’re doing this at an aggregated level, we’re not using it to target the specific people involved through direct use of their personal details or data, they have given us their express, informed permission to collect whatever data we’re collecting through whatever method, and so on – I really don’t see how neuroscience and biometric research is any different from ‘traditional’ methods. If you object to one, you object to them all.