karl taylor


The Problem With Performance-Based Pricing Models


Over the past few weeks, I’ve grown increasingly uncomfortable with client deals structured around metrics like cost per click. It’s probably a little out of character, but as these alternative arrangements gain popularity, it feels worth pointing out that we may still have a few things left to work out.

Metric-based pricing is one of the more promising precursors to performance-based pricing. Structuring a deal around a metric (in theory) makes for better alignment between in-house teams and out-of-house talent. When everyone can agree who is responsible for what (the argument goes), it’s easier to hold individual players accountable for their performance.

The appeal of results-driven evaluation is kind of hard for a perfectionist to learn to ignore. As such, we’ve used variations of performance-based pricing over the past few years, with varying degrees of success.

I want to love this pricing model. It makes so much more sense. It spares my production team from having to explain the rules of one genre or another to an uninterested client. It saves my accounts team from hours spent churning out reports no one will ever read (we check). It spares my clients from the ambiguity that comes with picking between poorly differentiated service providers. It has so much potential.

The truth is, in its current form, click-based pricing can’t work.

For the sake of example, I’d like to paint an exceedingly simple picture.

Let’s say you’ve been managing a Facebook page that has 1,000 likes. About 100 people see each organic post. Each post generates 1–2 clicks.

Those are pretty decent numbers. A 10% organic reach is a feat, and 1–2 clicks on a reach of 100 works out to a 1–2% CTR. It could always be better, but you’d be right to be satisfied.

Let’s say, for the sake of example, that you aren’t, and you draw up a post-promotion ad. You target 100,000 people near your business who are interested in one of your larger competitors.

Your promoted post reaches 5,000 additional people and generates 50 clicks.

How do you feel?

If you’re a small business owner, you’re probably angry you didn’t get the number of clicks you were expecting.

If you’re an advertiser, you’re probably excited that your ad scaled perfectly: the promoted post held roughly the organic CTR at fifty times the reach.
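The advertiser’s read is just arithmetic. Here’s the example as a quick back-of-the-envelope check, a minimal sketch in plain Python (using the midpoint of the 1–2 click range as the baseline, which is my assumption):

```python
# Organic baseline from the scenario: ~100 people see each post,
# and each post draws 1-2 clicks. Use the midpoint as the baseline.
organic_reach = 100
organic_clicks = 1.5
organic_ctr = organic_clicks / organic_reach      # 0.015 -> 1.5%

# Promoted post: 5,000 additional people reached, 50 clicks.
promoted_reach = 5_000
promoted_clicks = 50
promoted_ctr = promoted_clicks / promoted_reach   # 0.010 -> 1.0%

print(f"organic CTR:      {organic_ctr:.1%}")                      # 1.5%
print(f"promoted CTR:     {promoted_ctr:.1%}")                     # 1.0%
print(f"reach multiplier: {promoted_reach / organic_reach:.0f}x")  # 50x
```

By advertising standards, holding a roughly comparable CTR while multiplying reach fifty-fold is exactly what a clean scale looks like.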

The trouble with a click-based pricing model is that in this situation, neither the advertiser nor the client ends up having a positive experience: the same numbers read as a success on one side of the table and a failure on the other.

The perspective of the advertiser is easiest to speak to. When you’re promoting a post, you have a bit of a leg up: you already know how the post has performed historically, so the only trick is figuring out the extent to which that performance is a function of the page’s existing audience. If you can correctly identify the larger subset of the population the post should be shown to, you just need to make sure the campaign metrics stay in keeping with that reference value. Do that, and you can grow the reach of an ad without compromising its performance.
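A minimal sketch of that discipline, again in plain Python (the tolerance and the audience tiers are invented for illustration, not anything from a real ad platform): treat the organic CTR as the reference value, and check each wider audience against it before scaling further.

```python
def within_reference(clicks, reach, reference_ctr, tolerance=0.5):
    """True if the campaign CTR stays within `tolerance` (a fraction
    of the reference value) of the organic baseline."""
    return clicks / reach >= reference_ctr * (1 - tolerance)

# Organic baseline from the example: 1.5 clicks per 100 people reached.
reference_ctr = 1.5 / 100

# Hypothetical audience tiers as (reach, clicks). Invented numbers.
tiers = [(500, 7), (5_000, 50), (50_000, 300)]
for reach, clicks in tiers:
    verdict = "holds" if within_reference(clicks, reach, reference_ctr) else "degrades"
    print(f"reach {reach:>6,}: CTR {clicks / reach:.2%} {verdict}")
```

The useful output isn’t the pass/fail itself; it’s finding the audience size where performance starts to degrade, and stopping the scale there.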

The perspective of the business owner is slightly more challenging to unpack. Still, I think that it’s important to remember that very few people elect to work with a specialist who creates more problems than they solve.

When you’re working with someone to promote your business, it’s hard to hear that the reason your ad didn’t get the number of clicks you were hoping for is that your website isn’t loading fast enough, or that you need to try a different graphic. Still, the truth is these aren’t “opinion” statements anymore; any agency willing to sign a performance- or metric-based agreement likely has data pointing clearly at whatever problem you’ve got.

Offhand, I can think of six or seven factors that might influence the performance of a clicks campaign: the ask in the ad, the message, the creative, the targeting, the load time on the landing page, and even the popularity of the page promoting the post.

The truth is, a complete list would quickly push this post into unreadable territory. But I think this one is enough to highlight the problem.

You can’t evaluate someone’s impact on a metric unless they’re empowered to influence the factors that contribute to that metric.
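To make the principle concrete, here’s a toy breakdown of the factors listed above (the ownership assignments are my assumptions for illustration; in practice they’d be whatever the engagement actually covers):

```python
# Factors behind a click metric, mapped to who can actually move them.
# Ownership assignments are illustrative, not universal.
factors = {
    "ask in the ad":          "agency",
    "message":                "agency",
    "creative":               "shared",  # usually needs client sign-off
    "targeting":              "agency",
    "landing page load time": "client",
    "page popularity":        "client",
}

outside = [f for f, owner in factors.items() if owner != "agency"]

print(f"agency fully controls {sum(o == 'agency' for o in factors.values())} "
      f"of {len(factors)} factors")
print("outside the agency's control:", ", ".join(outside))
```

Price the deal on clicks, and half the grade rides on factors the agency can’t touch unless the scope grows to cover them.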

In-house, we can address each of those elements, but accommodating scope creep in performance-based agreements is a new kind of problem.

And for that matter, one I’m not sure I’ve seen a good solution for.
