Joust

So it happened again. A terrific creative idea that was loved by all, that rocked in qual research (conducted by a moderator whom I respect, asking questions that make sense) and that was within our budget, then goes into a well-known copy test and fails. No, not just fails, catastrophically implodes. Let’s just say if we’d shown kittens being skewered we couldn’t have achieved a lower score. And as I look at the creative and the results, I can’t fathom why.

Clients are increasingly relying (leaning?) on these black-box research methodologies to make their communication decisions for them, and at the end of the day the agencies (and yes, the clients too) are left shaking their heads, wondering why a piece of creative that on the surface seemed like a winner could flounder so terribly in a standard copy test. Frankly, I think it’s a crime that we’re so dependent on that magic number to pass or fail. All the suppliers tout that their results have been validated in the marketplace, but I’m not buying it. So I have no answers, just a few pointers to at least increase our chances of getting decent work through.

Know Thy Enemy

You’d think, given that we’re so beholden to these methodologies, that we’d know more about them. But we don’t. Have you ever asked Ipsos or Millward Brown or AdLab if you can take the survey yourself (obviously your results would not be counted)? It’s worth doing – and getting the clients and their research partners to experience it as well. It’s a bit of a shock to see that 30-second animatic, previewed in glorious full screen, shrunk to a screen within a screen (and in some cases with no way to expand to full screen). All those important nuances you baked in are now unnoticeable. Let’s make our animatics blindingly simple and tailored to a 2” by 3” screen. Not to mention how tedious completing the questionnaire is. What does “How relevant is this idea to you?” actually mean? We should think long and hard about what respondents might have meant when answering these questions before we react.

Insist on Quality Control

Ask if you can get all the uncoded open-end responses. I was stunned by one study where it became very apparent from the responses that almost one in five respondents believed the animatic was a finished spot (responses like “I don’t like cartoons” and “these drawings are amateurish” are a pretty good indicator). I was even more surprised that about 10% of respondents didn’t bother to complete any of the open ends (unless you count “qwerty” or “dfgh” as useful responses). These are recruited respondents: they agreed to be part of a panel and to share their opinions, so if one in ten can’t be bothered to fill out the open ends, their answers to the closed ends should be questioned too. That’s roughly 30% of the sample whose answers I don’t trust. We should hold these suppliers more accountable – do they make it abundantly clear that respondents are not viewing a finished idea, and do they kick out the disinterested?

Manage Expectations

There are so many measurements on which our ideas could die. Let’s make sure that if they die, they die by the right sword! Why are we looking at “tells me something new” indices when we have nothing new to say – or worse still, when we have something new to say but it’s boring? Do we expect the “is for someone like me” score to increase if we’re using penguins in our idea? Let’s identify the core measurements we want to succeed on and not default to fabricated indices that don’t take into account an idea’s specific objectives.

Ed Caffyn


Ed is originally from the UK. He prides himself on having an opinion on pretty much everything. He thinks strategy is sacrifice and the heart always wins over the head. He’s worked in such diverse categories as condoms and children’s candy. He calls BBDO Toronto home. Follow Ed at @EdCaffyn

About Ed Caffyn


SVP, Director of Account Planning @ BBDO Toronto