A/B testing (also called split testing) is the practice of comparing two versions of copy to determine which performs better. Version A is shown to one group of readers; version B to another; the version that produces a higher conversion rate wins.
A/B testing is what makes copywriting an empirical discipline. Claude Hopkins practiced the principle in the 1920s using keyed coupons — different ads ran in different cities, and the coupon returns revealed which ad produced more responses [@hopkins1923]. Digital platforms have made this nearly instantaneous: a headline can be tested against an alternative in real time, with statistical significance reached in hours or days.
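Checking whether a difference in conversion rates is statistically significant typically comes down to a two-proportion z-test. A minimal sketch, using only the standard library and illustrative numbers (not real campaign data):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for conversions out of visitors."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: headline A converts 120 of 2,400 visitors (5.0%),
# headline B converts 156 of 2,400 (6.5%).
z, p = two_proportion_z_test(120, 2400, 156, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these illustrative numbers the p-value falls below 0.05, so the lift would count as significant at the conventional threshold; with smaller samples the same percentage gap often would not.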
What can be A/B tested:
- Headlines — the highest-impact test. A headline change can double or halve conversion.
- Calls to action — button text, placement, color, size.
- Copy length — short vs. long. The answer depends on the product, the audience, and the level of commitment the reader is making.
- Value proposition framing — “Save money” vs. “Stop wasting money.” Positive vs. negative framing can produce strikingly different responses.
- Social proof — testimonials vs. numbers, different customer quotes, placement on the page.
The discipline of A/B testing is to change one variable at a time. If you change the headline and the CTA simultaneously, you can’t know which change produced the result. This mirrors the experimental method: isolate the variable.
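The other half of the discipline is assignment: each reader must be randomly but consistently bucketed into one variant, with everything else held identical. A common technique is deterministic bucketing by hashing a stable user id, sketched below with hypothetical headline copy (reusing the framing pair from the list above):

```python
import hashlib

HEADLINES = {  # hypothetical variants — only the headline differs
    "A": "Save money",
    "B": "Stop wasting money",
}

def assign_variant(user_id: str, experiment: str = "headline-test") -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing (experiment, user_id) keeps the assignment stable across
    visits, and a different experiment name reshuffles the buckets.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

variant = assign_variant("user-42")
print(variant, HEADLINES[variant])
```

Because assignment depends only on the hash, a returning visitor always sees the same headline, and across many visitors the split is close to 50/50 — so any difference in conversion can be attributed to the one variable that changed.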
The limitation of A/B testing is that it optimizes locally. It can tell you which of two headlines performs better, but it can’t tell you whether both headlines are mediocre and a fundamentally different approach would outperform either. Testing refines; it doesn’t invent. The copywriter’s creative work — finding the right message, the right frame, the right angle — precedes the test.
Related terms
- conversion — the metric A/B tests measure
- headline — the most commonly A/B tested element
- landing page — the format where A/B testing is most practical