Black Magic

When I wrote about the barriers to neuroscience adoption a while ago there was one issue I skirted which probably could have been fleshed out a bit more – ethics.  When I did the talk at XLRI I basically lumped it in with ‘Scepticism’, but I got asked about it again in a meeting yesterday so thought I’d note a couple of thoughts.

The reason that neuroscience/biometric methods are appealing to researchers and marketers is also the thing that makes people unsure as to whether it’s ethical to use them.  It’s well known that people don’t know why they behave the way they do and find it hard to talk about the way they feel about things – so cutting out the middle man and going direct to the source for that info is pretty valuable.  But those worried about this stuff would say that the middle man we’re cutting out is sort of the bit where our humanity resides.  People may, justifiably, want to keep some things under wraps for any number of reasons (although most often we’re learning things that people can’t tell us rather than things they won’t).  As a result, those who doubt the ethics of biometric measurement would argue that we are taking knowledge that people have not volunteered and, like black magic, using it to control them.

Those are potentially valid concerns.  My argument, however, is that they are not specific to neuroscience/biometric research but apply to all research and, by extension, to any advertising that draws learning from that research.  If you have an ethical problem with research and advertising generally (which some do) then fine – but the people raising these concerns with me in meetings are usually people spending a lot of money on advertising and research, so I assume they don’t.

Let me illustrate my point.  As Tom Ewing has ably pointed out, what those critics of research who roll out the ‘all research is wrong’ line fail to understand is that we already know that it is (in an absolute sense).  As such, we have all kinds of ways of finding stuff out about people’s thoughts, feelings and desires without asking them directly.  Take one of Millward Brown’s many hundreds of tracking studies: I don’t ask people to rank the associations they have with brands by how important they are, because I know they’re not very good at it (and often they just don’t know whether sexy or easy to use is more important to them).  Instead, given that I also ask them about their preference for brands, I am able to run myself a lovely regression model from which I can infer which of these associations are the stronger drivers of preference, either within the category as a whole or for specific brands.  They haven’t told me why they like Persil better than Surf, but sneaky researcher that I am, I’ve found out – and maybe Persil will use that information to better target them and people like them.

(Before you say so, I know that’s not perfect.  I know my model is only telling me about the relative strengths of relationships between claimed brand preference and a set of associations that I chose to put to the sort of people I chose at the specific point in time, by now already in the past, that I deigned to ask them.  But I’d ask you to focus on the matter in hand: this isn’t a question of which [if either] of these approaches is more valid, it’s about whether or not they are ethical.)
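
To make that concrete, here is a minimal sketch of the kind of derived-importance regression I have in mind – in Python, using statsmodels, with entirely hypothetical associations, ratings and data (an illustration of the general technique, not Millward Brown’s actual model):

```python
# Minimal sketch of "derived importance": regress claimed brand preference
# on a set of brand associations and treat the standardised coefficients
# as the inferred importance of each association.  All data are made up.
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data: one row per respondent, ratings on a 1-10 scale.
df = pd.DataFrame({
    "preference":  [7, 5, 8, 3, 6, 9, 4, 7, 2, 8],   # claimed preference for the brand
    "sexy":        [6, 4, 7, 2, 5, 8, 3, 6, 2, 7],   # agreement with "is a sexy brand"
    "easy_to_use": [7, 6, 8, 4, 6, 9, 5, 7, 3, 8],   # agreement with "is easy to use"
    "good_value":  [5, 5, 6, 3, 6, 7, 4, 6, 2, 7],   # agreement with "is good value"
})

# Standardise everything so the coefficients are comparable across associations.
z = (df - df.mean()) / df.std()

X = sm.add_constant(z[["sexy", "easy_to_use", "good_value"]])
model = sm.OLS(z["preference"], X).fit()

# Nobody was asked to rank these associations, but their relative weight as
# drivers of preference falls out of the fitted coefficients.
print(model.params.drop("const").sort_values(ascending=False))
```

None of these imaginary respondents ever ranked the associations; the ordering is inferred from the pattern of their answers, which is exactly the point.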

So research has, for many years, generated insights into human behaviour without people necessarily volunteering them directly.  People are no more aware of my regression models and how brands are using them to sell them more soap powder than they are of the brainwave outputs they voluntarily hand over and how clever boffins run those through an algorithm to give me something I can use to understand advertising better.  Is either of those things deceitful and insidious?  Maybe, but if so, I say they are equally so.

Further, some perspective is required.  We may be increasingly tapping into people’s deep, unspoken desires – but we’re not homing in on a magic bullet that lets advertisers push a button and get a definitive, predictable consumer response at the other end.  We are, of course, gaining a better understanding of why an ad has worked when we do get it right, and therefore improving the odds of being able to repeat it – but we’re a long way off complete mind control.  People still have agency.  That’s not going anywhere.

As long as the usual rules apply – we’re doing this at an aggregated level, we’re not using it to target the specific people involved through direct use of their personal details or data, they have given us their express, informed permission to collect whatever data we’re collecting through whatever method and so on – I really don’t see how neuroscience and biometric research is any different from ‘traditional’ methods.  If you object to one, you object to it all.

6 thoughts on “Black Magic”

  1. Guess the difference is that in traditional research the consumer filters and volunteers the information, whereas in neuroscience chances are that the consumer ends up sharing more than they realise!

    1. Yes, but my argument is that even with the relatively blunt instrument of a structured quant questionnaire, filled in with old fashioned pen and paper, by the time we’ve aggregated the data and fed it through a proprietary model here and a statistical mincer there, the output would be unrecognisable to the person we’ve interviewed as something they’d told us. They’re already sharing more than they realise.

      It’s very rare, with attitudinal data especially, that we’re actually using what the respondent tells us directly – because that’s actually not very useful.

  2. Ummm, thought about your comment; agree that the aggregate learning post-analysis is much larger than what is shared at an individual level by the consumer.
    Some of the ways of making neuroscience an accepted practice, or at least loosening the barriers, could be:
    a. User-friendly knowledge dissemination: most people don’t know enough about the subject, and what they know is hearsay and incomplete. The subject, if explained in simple graphics, may at least succeed in breaking the myths surrounding it.
    b. The use of ‘co-creation’ of the do’s and don’ts of the process. When you know how it works, and you have helped set the guidelines, you are normally not afraid of or resistant to it. In fact, quite the contrary: you might actually want to invite the process. People do realise that the human mind is complex and are forever looking for answers, so they are likely to become co-travellers on the journey. And as the process becomes familiar, you slowly and surely keep pushing the boundaries of the guard rails.

    1. Your first point is something we’re struggling with every day. On the one hand it does need to be simply conveyed and accessible, as all metrics do – and of course part of that will come with familiarity; complex issues soon become simple once you’re familiar with them. However, one of the key strengths of a lot of neuroscience research is its depth and granularity, so you only want to simplify so much before you start to lose the value it adds over and above traditional methods.

      Having said that, none of that is anything new. Trying to balance the complexity and depth of a piece of research with the need to convey it in an accessible and impactful way is something researchers have grappled with (often unsuccessfully) since the dawn of time.
