OK, I’ve done that old blogger trick of making the headline more controversial than the more balanced copy below it – but still, action standards will drive you to an early grave, won’t they?
I’m a quant researcher – more than that, one who works for an organisation for which big databases are a huge commercial advantage over some of our competitors. Benchmarking is massively important, and whatever you are doing you should try to set sensible benchmarks at the outset and, broadly speaking, stick to them. However, it’s equally important, if not more so, that you don’t become a slave to them at the expense of collective judgement. That can lead you into the sort of action-driven dead end that I talked about in one of my previous posts.
The thing about action standards is that they’re essentially fairly arbitrary. Say you want to be in the top 25% of ads in our database for branding. That’s a pretty sensible place to want to be – regardless of your views on pre-testing or your philosophy of how advertising works, I think we can all agree that making sure people remember which brand your ad is for is important. The question is how strictly you mandate that benchmark – even more so in a market like India, where you would typically be testing in a minimum of two very different markets. Say you end up with your top 25% score in Delhi, but in Madurai you’re only in the top 40% – what then?
Of course, the sensible answer is that you take a judgement call amongst all the stakeholders: which of these markets is most important to me? Have we identified some clear steps we could take to boost the branding score in Madurai? Do we just bloody love this ad for reasons completely unrelated to how well branded it is, and think we should run the bugger anyway? Importantly, that last one is just as valid a reason as the first two, in my opinion.
Whilst that seems like the obvious solution, too often it doesn’t happen. Too often brand teams are not given the flexibility they need around action standards. I absolutely understand why – there are massive competitive pressures to always deliver top-quality advertising – but however much you test, you cannot completely remove risk. Research helps you measure risk and take a calculated decision about how big a risk you are running, but it does not remove it completely. At the end of the day, a brand team has targets to meet – if they choose to run an ad and don’t meet them, that’s their call and they are responsible for that decision. Risk aversion is death.
In my opinion, slavish adherence to action standards damages pre-testing’s value more than almost anything else. It makes no sense to junk an ad because it misses two key metrics by a couple of percentage points each – but it happens. We need to do a better job of communicating to clients how action standards should be used.
Action standards should be set to inform judgement, not to replace it.