This June/July, over at SimpleVerity, we’ve been running some very tiny, super-targeted CPC ad campaigns on LinkedIn to recruit trial users of our system. We’ve noticed the following odd behaviors, and I wanted to ask the Web whether anyone else has seen them:
– Lower bids (bottom of the suggested range) outperform higher bids on both CPC and conversion (submissions of the “learn more” form on my landing page). Weird, but perhaps explicable: it could be that for our particular appeal, the “lower-ranked” ad slots work better.
– A temporary spike in CPC, and a BIG spike in conversion, right after any change to the campaign configuration. This is strange because it seems to change not only how LinkedIn users react to the ad (which could be an effect of the ad-rotation algorithms resetting after a change), but also how they behave once they reach my landing page. I have no theory for how that second part works.
– Auto-optimize among campaigns seems to start making optimization decisions well before reaching anything close to statistical significance. For example, I was running a campaign whose long-term equilibrium average CTR seemed to be about 0.9% (not bad, right?), with 15 variations running. Yet the auto-optimize algorithm started heavily favoring a few ads before all variations had received even 1,000 impressions. That’s just crazy: at that point, each disfavored ad was still roughly a 50/50 bet to exceed the average, yet the algorithm was “optimizing” them into oblivion.
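A quick back-of-the-envelope check shows just how little 1,000 impressions tells you about a low-CTR ad. This is a sketch using the Wilson score interval (the 0.9% CTR and 1,000-impression figures are the ones from my campaign above):

```python
import math

def wilson_interval(clicks, impressions, z=1.96):
    """95% Wilson score confidence interval for a click-through rate."""
    p = clicks / impressions
    z2n = z * z / impressions
    center = (p + z2n / 2) / (1 + z2n)
    halfwidth = (z * math.sqrt(p * (1 - p) / impressions
                               + z * z / (4 * impressions ** 2))
                 / (1 + z2n))
    return center - halfwidth, center + halfwidth

# An ad sitting exactly at the long-run average:
# 9 clicks in 1,000 impressions (0.9% CTR)
lo, hi = wilson_interval(9, 1000)
print(f"95% CI for CTR: {lo:.2%} to {hi:.2%}")
```

The interval comes out to roughly 0.5%–1.7%, i.e. a variation showing 0.6% and one showing 1.3% at the 1,000-impression mark are statistically indistinguishable, yet auto-optimize is already picking winners among them.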
My takeaway is that, at least when working with heavily targeted LinkedIn CPC ads, you have to stay very, very close to the “metal” and pay attention to oddities. Don’t assume that paying the highest bid (or even paying more than the bare minimum CPC price) will work better; relentlessly experiment with spend and bid. Also, don’t let auto-optimize kick in until you are comfortable that you are approaching significance on the expected CTR (at the very least, the CTR that you are hoping to “beat”).
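To put a number on “approaching significance,” here is a sketch of a standard two-proportion sample-size calculation (normal approximation, 95% confidence, 80% power). The 0.9% baseline is from my campaign; the 1.2% target CTR is an illustrative lift, not a measured one:

```python
import math

def impressions_needed(p_base, p_target, z_alpha=1.96, z_power=0.8416):
    """Approximate impressions per variation to distinguish p_base from
    p_target at 95% confidence with 80% power (normal approximation)."""
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    delta = p_target - p_base
    return math.ceil((z_alpha + z_power) ** 2 * variance / delta ** 2)

# Distinguishing a 0.9% CTR from a 1.2% CTR:
print(impressions_needed(0.009, 0.012))  # ~18,000 impressions per variation
```

With 15 variations, that is on the order of 270,000 impressions before favoring one ad over another is defensible — which is why letting auto-optimize loose at under 1,000 impressions per variation is so premature.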