Bryan walked into my office the other day to point out an interesting item found while surfing: a left-sided add-to-cart button on a product detail page.

We chatted back and forth about the conversion issues involved with placing it there — and in fact, one of our Conversion Analysts, Peter, commented on this very topic in his latest post — but soon our conversation turned to something much more interesting than left-sided calls to action: the testing of left-sided calls to action.

“Do you think they tested it?” Bryan asked.

“Hmm, the Joker in me wants to say Yes, but I’m guessing the money bet is No,” I replied.

Now, that’s not because Crutchfield doesn’t test. In fact, I’ve no idea at all what sort of testing culture Crutchfield nurtures; I’m just saying that in our experience, only rarely does this sort of innovation come about from testing. Instead, it’s sadly de rigueur for it to arise from a designer wanting to try something “different”, or an IT staff that doesn’t perceive one shopping cart as different from another, or maybe Matilda the Intern just forgot an HTML tag. Anyway, the point is to go with the simplest explanation — which, in 2008, is that most companies still don’t test.

“I think you’re right,” Bryan continued, “cuz if they did test it, it probably wouldn’t do well.”

“Maybe some Clown in IT or Marketing just wanted to be ‘kewl’.”

Here’s what we’re talking about, as shown on Crutchfield’s product detail page:

Intuitively, I hope you’ll agree with us that right-sided feels like a better than even-money bet (though that in itself is a reason to do a test) — but what’s the point of leveraging your intuition to be “directionally correct” unless you eventually try to back it up with some evidence that you’re actually correct?

That started me down the road thinking about how to actually test this hypothesis.

(I can be wordy, so if you’ve lost the trail of thought, the question is, “Which converts better: Right- or Left-sided Add-To-Carts?” and the hypothesis would be, “Right-sided Add-To-Carts convert better than Left-sided Add-To-Carts.”)

Here’s where it gets interesting: The supposition is that most Web surfers are so used to right-sided Add-To-Carts (and right-sided Calls-to-action, generally) that a left-sided one is bound to produce some cognitive dissonance. It might not be consciously noticed — less so on “narrower” sites and more so on wider ones — but the placement on the left will “feel” odd.

With that in mind, just how do you go about running a test you already know has a skew to it? How would you really determine whether the Clowns or the Jokers win The Great Add-To-Cart Positioning Debate of Aught-Eight?

Here’s what I would do: First off, start with the most obvious test, because we have to get a quick benchmark of just how far Clown is from Joker. Throw some percentage of traffic at the left-sided Add-To-Cart — enough for some statistical significance — and see just how well Right does vis-à-vis Left. (The fascinating thing about intuition is that a fair percentage of the time it’s fabulously, gloriously, achingly wrong — and if this is one of those times, better to find out early and move on to the next good idea.)
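For readers who want to see what “enough for some statistical significance” looks like in practice, here’s a minimal sketch of a two-proportion z-test comparing the two variants. The visitor and conversion counts are entirely made up for illustration; any real test would use your own traffic numbers.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates.

    conv_a / n_a: conversions and visitors for variant A (say, Right).
    conv_b / n_b: conversions and visitors for variant B (say, Left).
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled rate under the null hypothesis that the sides convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: 2.4% vs. 2.0% conversion on 10,000 visitors per side.
z = two_proportion_z(240, 10000, 200, 10000)
print(round(z, 2))  # |z| > 1.96 would be significant at roughly the 95% level
```

With these invented numbers the difference sits right at the edge of significance, which is exactly the situation where you’d keep the test running rather than declare a winner.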

Assuming we’ve shown some evidence of the skew in favor of right-sided shopping carts — otherwise, why continue reading this post? — how do we go about removing the skew that comes about from people being “trained” that right-sided is “normal” to answer the real question: If folks weren’t biased by convention, which side converts better?

To do that, what you’d really want is to look among your customers who’ve already successfully converted using one particular side and to present them with similarly-sided add-to-carts in the future (hmm, might have to set a cookie!), so you can gauge what the conversion rate is for people who’ve shown at least some indication that they can successfully convert.** The idea here is that, all else being equal — something the pre-existing bias hurts — the true question should be, “Do people actually have a preference for sidedness at all?”
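The “might have to set a cookie” idea can be sketched very simply: remember which side a visitor converted on, and keep showing that side afterward. The cookie name (`atc_side`) and the dict-as-cookie-jar interface are hypothetical stand-ins for whatever your platform actually provides.

```python
import random

def side_for_visitor(cookies):
    """Decide which side of the page gets the Add-To-Cart.

    Visitors who have already converted on one side keep seeing that
    side; everyone else is split at random between the two variants.
    """
    preferred = cookies.get("atc_side")  # set at first successful conversion
    if preferred in ("left", "right"):
        return preferred
    return random.choice(["left", "right"])

def record_conversion(cookies, side_shown):
    """On a successful checkout, remember the side that converted."""
    cookies.setdefault("atc_side", side_shown)
    return cookies

# A visitor converts once on the left; later visits stay left-sided.
jar = record_conversion({}, "left")
print(side_for_visitor(jar))  # left
```

The point of the sketch is the cohorting, not the plumbing: once conversions are tagged by “sticky” side, you can compare the Left-cohort conversion rate against the Right-cohort rate among people who have all demonstrated they can convert.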

By picking only from those who’ve successfully converted previously, you’re making a first attempt to say, “Hey, at least these folks don’t seem to be impeded by a systemic bias”; therefore, those who buy consistently using left-sided calls to action might then be expected to convert at approximately the same rate as those who buy consistently using right-sided calls to action.

“And surely,” you might argue, “those who show a preference for left-sided add-to-carts should convert better when consistently presented with left-sided add-to-carts than Right-Siders who are suddenly presented with a left-sided add-to-cart.”

See, you’ve turned the tables.

Get it? In short, you try to come up with a series of tests — a Testing Campaign, if you will — that attempt to disprove the way your original hypothesis was leaning (we figured Right would do better, so let’s design tests that indicate when Right does poorer), and that challenge any underlying bias (i.e., that Add-To-Carts typically appear on the Right) that gives it an unfair advantage.

Well, those are my thoughts on the subject. What I hope you got out of that is that a “culture of testing” means thinking as deeply about the design of experiments as it does about their performance.

I’d love to hear from you. Are you a “Clown” or a “Joker”? Or are you just “Stuck in the Middle”? Would your brand loyalty or the customer’s familiarity with your site’s User Interface simply override any preference you have for being a Clown or a Joker?

– – – – – – –

**A few readers will feel reassured to know that, in actuality, you’d still send at least a few visitors who preferred one Side to see an opposite-Side call-to-action once in a while just to keep things honest; enough to get insight from the data, but not enough to cost the company too much from the expected conversion differential. I figured I’d say that as a footnote before some Sharp Tack out there writes in to scold me. 😉
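For the Sharp Tacks, that holdback can be sketched in a few lines: honor the visitor’s preferred side most of the time, but divert a small slice to the opposite side so the comparison stays honest. The 5% holdback rate here is an illustrative assumption, not a recommendation; you’d tune it against the cost of the expected conversion differential.

```python
import random

def side_with_holdback(preferred, holdback=0.05):
    """Show the visitor's preferred side, except for a small random
    holdback slice that sees the opposite side.

    preferred: 'left' or 'right', the side this visitor converted on.
    holdback:  fraction of impressions deliberately crossed over.
    """
    other = "right" if preferred == "left" else "left"
    return other if random.random() < holdback else preferred

# Most Left-preferring visitors see left; roughly 1 in 20 sees right.
print(side_with_holdback("left", holdback=0.0))  # left (holdback disabled)
```

Because the crossover visitors are chosen at random, their conversion rate gives an unbiased read on how much switching sides actually costs, which is exactly the data the footnote says you’d want.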