Releasing a lot of different versions of the same thing and seeing which one performs the best has by now become so notorious that even the most backwards-thinking clients have heard of it. Yet, while everyone kind of agrees that digital is a perfect environment for testing-and-learning, there are still very few real-world advertising examples to back up such claims. (Notable attempts by individuals aside.)
More accurately, there were few examples until very recently. Some days back I read about the performance of Digg's experimental ad network, which they started on their site at the beginning of the summer. Now, what I find more interesting than the core premise of the model (allowing users to vote ads up and down) is how the advertisers responded - they started releasing different versions of the same ad to see which one gets the most diggs. For example, according to Brian's Adweek article, Toyota featured 8 different ads, each with two sets of copy. In other words, to increase the chances that an ad would be liked and clicked on, Toyota ran a test.
So, now - since there is no better measure of success than actually monitoring that success - why isn't EVERYONE doing the same thing? I mean, why hasn't audience voting become an inherent part of web-wide ad networks? Of course, there's a non-negligible question of whether the model can work outside the Digg environment, but the current returns (a click-through rate reaching 2% in some instances) are such that it seems at least worth trying to find out.
At least, that's what Huffington Post thinks. Apparently, they've started testing some of their headlines (this actually reminded me of the same thing that Elle now regularly does with two versions of its covers, though I wonder what they learn from the results, since the feedback loop is so long and the celebrities on the covers are not repeated for a long time). Anyway, visitors to HuffPo randomly see one of two versions of a headline for the same story. Then, the editors give it 5 minutes, and based on this 5-minute face-off between the two titles, they select the winner. Real-time headline testing? Kind of great.
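Mechanically, HuffPo's five-minute face-off is just a two-arm A/B test: split visitors randomly, count clicks per version, keep the higher click-through rate. Here's a minimal simulation sketch in Python; the headline names and click propensities are made up for illustration, standing in for real reader behavior:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def run_headline_test(headlines, visitors, click_prob):
    """Randomly serve one of the headlines to each visitor and tally results."""
    stats = {h: {"impressions": 0, "clicks": 0} for h in headlines}
    for _ in range(visitors):
        h = random.choice(headlines)          # 50/50 random split
        stats[h]["impressions"] += 1
        if random.random() < click_prob[h]:   # simulated reader decision
            stats[h]["clicks"] += 1

    # After the test window (HuffPo gives it ~5 minutes),
    # the winner is simply the version with the higher click-through rate.
    def ctr(h):
        s = stats[h]
        return s["clicks"] / s["impressions"] if s["impressions"] else 0.0

    return max(headlines, key=ctr), stats

headlines = ["Headline A", "Headline B"]
# Hypothetical underlying click propensities, unknown to the editors
click_prob = {"Headline A": 0.02, "Headline B": 0.04}
winner, stats = run_headline_test(headlines, 10000, click_prob)
print(winner)
```

With a real site, the "visitors" loop is replaced by live traffic and the 5-minute cutoff by a clock; the selection logic stays the same. The catch, as with Elle's covers, is sample size: a five-minute window only works when traffic is heavy enough to make the difference in rates meaningful.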
And - if it works for content, why wouldn't it work for ads? Try it.