In my last post I wrote about the ethics of using neuroscience techniques for market research. I’d like to broaden that out to make a wider point about the nature of tools. Tools of any type, research tools included, are value-neutral – neither moral nor immoral. Morality only comes into play at the point at which we, as humans, make use of the tool. The manner in which we use it determines the morality of the action, rather than the nature or utility of the tool itself. Morals are, after all, rules that judge the acceptability of human conduct.
Of course, moral absolutism is a dangerous thing, even if the absolute proposed is that an object is without moral valence. Some would argue the tools of warfare are a clear exception to the rule – if a tool is specifically designed to cause death and destruction in the most efficient way possible, then it’s harder to argue the value neutrality of all tools. However, the tools of war can also be used to defend yourself against crazed invading ideologues trying to impose a morally corrupt way of life on a population (like Hitler in Europe or the US in South East Asia). There are also those who argue that the military industrial complex acts as a deterrent and, ergo, the tools of warfare actually prevent death and destruction. I don’t necessarily agree with all that, but as there’s some debate around it, it seems that even a tool designed for a fundamentally immoral purpose is only really immoral if we use it to that end.
Think of an axe. An axe can be used to chop firewood and sustain your life. An axe can also be used to hack someone you don’t like very much to bits, or even hack someone you do like to bits for kicks. Now forget the axe, it’s the 21st century. Think of a chainsaw. You can use a chainsaw to do both the wood chopping and the person hacking with significantly greater efficiency.
There are two things I’d like to draw from this and, hopefully, you’ll see the parallels to what I was talking about in my last post. Further, we should also be reflecting on what this means for advertising and brands as tools (this value-neutral business is actually something I trot out more often to people who argue brands are evil than it is to research haters).
First, both axe and chainsaw are morally neutral. You can use them both for good or evil and any number of layers in between. In essence a chainsaw is a more efficient axe. This means that it can be more effective for doing good or for doing evil but it doesn’t make the chainsaw itself any more of one or the other.
Second, there is another interesting effect of the chainsaw being more efficient. I am of course using the wood chopping to represent the ‘good’ extreme on my moral axis – but once you mechanise and industrialise wood chopping, you’re no longer just chopping the firewood you need to sustain yourself, and the moral waters become much muddier (or at least, they get filled with logs, making them equally hard to navigate). It’s still the human action that’s in question, but a more effective tool facilitates excess, and thereby brings questions of morality to the fore even for actions that previously raised none.
So in short, what I was trying to say in my last post is that if you think advertising and brands are evil then of course the research tools that help facilitate them will make you equally uncomfortable. They’re not evil, but the way we use them is where we should stop and think about ethics.
If you have a new research tool that is more effective at exposing people’s desires, then the chances of a chainsaw massacre of consumption increase exponentially. If neuroscience is giving us more effective (indeed, more mechanised) access to thoughts, feelings and emotions, then even if we don’t think the everyday wood chopping of building brands is morally questionable, the questions around how we implement neuroscience techniques ethically are still valid – mass deforestation won’t be good for any of us.