
Science for marketers: A guide to email campaign optimization

by Kate Lindemann  |  November 13, 2020


Email marketing generates data points with every campaign. Over time, they add up to valuable insights about your audiences.

How you collect that data matters. To get the most out of it, you need to think like a scientist, turn your workspace into an email marketing laboratory and run a few experiments.

If it’s been a while since you last took a science class, don’t worry. We’ll refresh your memory with this step-by-step guide to email campaign optimization.

Get to know your lab equipment

Dedicated email marketing tools make running experiments and collecting data easy. Before you roll up your sleeves to start experimenting, familiarize yourself with your provider’s analytics dashboard and A/B testing tools.

Key metrics

For most email campaign optimization, the best metric to focus on is click-through rate.

Click-through rate measures the percentage of recipients who clicked a link inside an email. While other metrics are great for zeroing in on problem areas, click-through rate is the best single indicator of how well an email or campaign did overall.
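
If you ever want to sanity-check what your dashboard reports, the underlying calculation is simple. Here’s a minimal Python sketch; the function name and numbers are illustrative rather than tied to any particular provider’s export:

```python
def click_through_rate(delivered: int, unique_clicks: int) -> float:
    """Unique clicks as a percentage of delivered emails."""
    if delivered == 0:
        return 0.0
    return 100 * unique_clicks / delivered

# Example: 4,200 delivered emails, 180 recipients clicked a link.
print(f"CTR: {click_through_rate(4200, 180):.1f}%")  # CTR: 4.3%
```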

For a full overview of email metrics and how to use them, check out our introduction to email analytics.


A/B testing

For accurate results, it’s best to test variables head-to-head. A/B tests send different versions of the same email to two groups to see which performs better. Most email marketing systems automate the process, meaning all you have to do is set up each version and hit ‘send.’

You can A/B test almost any aspect of an email, including subject lines, copy and design elements. A typical A/B test for subject lines might look something like this (sketched in code after the steps):

  1. Create your variants – Write two subject lines. They should be different enough that you can draw a meaningful conclusion from the results. Maybe one is straightforward, and the other is funny.
  2. Run the test – One group of subscribers will get the email with the serious subject line. The other will get the more playful one.
  3. Use the results – Whichever subject line gets more clicks will automatically go out to the rest of the list.
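
If you’re curious what the automation is doing under the hood, the split-and-pick-a-winner logic looks roughly like this. Everything here – the contact list, variants and click counts – is made up for illustration:

```python
import random

# Hypothetical contact list and subject-line variants.
contacts = [f"subscriber{i}@example.com" for i in range(10_000)]
variants = {
    "A": "Our Q3 email benchmarks are ready",
    "B": "We crunched the numbers so you don't have to",
}

# Send each variant to a random 10% slice of the list.
random.shuffle(contacts)
test_size = len(contacts) // 10
groups = {"A": contacts[:test_size], "B": contacts[test_size : 2 * test_size]}
remainder = contacts[2 * test_size :]

for name, group in groups.items():
    print(f"Sending variant {name} ({variants[name]!r}) to {len(group)} contacts")

# Placeholder click counts, as if the test had already run.
clicks = {"A": 312, "B": 389}

# The variant with the higher click-through rate goes to everyone else.
winner = max(clicks, key=lambda name: clicks[name] / test_size)
print(f"Variant {winner} wins; it goes out to the remaining {len(remainder)} contacts")
```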


Long-term trends

You don’t have to use A/B testing for everything. Before you design an elaborate experiment, look back at past campaigns and try to glean insights from the data you already have.

If certain campaigns always perform better than others, there’s no need to run an A/B test to confirm what the data is already showing you.
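
Even a simple aggregation over an exported campaign history can surface those trends. A quick sketch, assuming a hypothetical export of campaign types and click-through rates:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical export of past campaigns: (campaign type, CTR in %).
past_campaigns = [
    ("tutorial", 4.8), ("newsletter", 1.2), ("tutorial", 5.1),
    ("product_launch", 3.9), ("newsletter", 1.5), ("tutorial", 4.4),
]

# Average click-through rate per campaign type.
by_type = defaultdict(list)
for campaign_type, ctr in past_campaigns:
    by_type[campaign_type].append(ctr)

for campaign_type, ctrs in sorted(by_type.items()):
    print(f"{campaign_type:>15}: {mean(ctrs):.1f}% average CTR over {len(ctrs)} campaigns")
```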

For everything else, the next step is to decide what to test first.

Form a hypothesis

Don’t jump in head first and start testing email components at random. Instead, take a critical look at every part of your email campaigns and make an educated guess about what changes will make the biggest difference. After all, you know your audience and campaigns better than anyone.

In some cases, analytics metrics can help diagnose problem areas. If people are opening emails but not clicking, focus on the contents of the email itself. If the open rate is low, you might need to revisit the types of emails you’re sending.


Tackle your biggest problems first, and then look for opportunities for smaller tweaks and fine-tuning.

Here are some examples of solid email marketing hypotheses:

  • Customers will engage more with content specific to their industry.
  • Emails with more images and less text will perform better.
  • Personalized subject lines will get more opens.

How you test your hypothesis will depend on which part of the email you’re testing. The next section will go over what you need to keep in mind.

Design your experiments

Science best practices

Before launching into individual types of experiments, let’s go over some science basics that also apply to email marketing.

Test one thing at a time

As your high school science teacher probably told you, a good experiment changes only one independent variable at a time. If you test multiple changes at once and see a difference in performance, you won’t know which change caused it.

Some email marketing tools do offer multivariate testing, which lets you test multiple variables at once. That can make sense when you’re optimizing a single campaign and only need the best-performing combination. However, because they don’t isolate individual variables, multivariate tests make it hard to pinpoint useful takeaways for the future.


Have a big enough sample

For your findings to be statistically significant, you need a big enough sample size.

Mailchimp recommends sending each test version to at least 5,000 contacts for the best results. However, that’s not realistic for many mailing lists. Don’t worry – you can still get meaningful insights from test groups with at least a few hundred contacts.
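
If you want a quick check on whether the gap between two variants is a real effect or just noise, the standard tool is a two-proportion z-test. Here’s a plain-Python sketch with made-up numbers; an online A/B significance calculator will do the same job:

```python
from math import sqrt, erfc

def two_proportion_z_test(clicks_a: int, n_a: int, clicks_b: int, n_b: int) -> float:
    """Return the two-sided p-value for the difference between two CTRs."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    return erfc(abs(z) / sqrt(2))

# Example: variant A got 120 clicks from 2,500 sends; variant B got 158.
p = two_proportion_z_test(120, 2500, 158, 2500)
print(f"p-value: {p:.3f}")  # below 0.05 is conventionally 'significant'
```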

Make it measurable

Every experiment needs a measurable result. Before you get started, figure out what your metric for success will be. Usually, click-through rate is the best choice.

How to test each email component

Type of content

Are you sending emails that customers want to receive? After all, fine-tuning subject lines won’t get you very far if customers are fundamentally disinterested in the content of the email.

This is one that’s hard to A/B test. Instead, look for trends in your past campaigns. Maybe your product tutorials get all the clicks, while newsletters languish unread. That’s a valuable insight.

Of course, not every customer likes the same type of content. That’s where email segmentation comes in. By grouping your mailing list into segments, you can send different types of campaigns to different people. Check out our email segmentation guide to learn more about how to split your list.
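
Mechanically, segmentation is just grouping contacts by an attribute you’ve collected. A minimal sketch, assuming a hypothetical industry field captured at signup:

```python
from collections import defaultdict

# Hypothetical contacts with an 'industry' attribute from a signup form.
contacts = [
    {"email": "ana@example.com", "industry": "retail"},
    {"email": "ben@example.com", "industry": "healthcare"},
    {"email": "cai@example.com", "industry": "retail"},
]

segments = defaultdict(list)
for contact in contacts:
    segments[contact["industry"]].append(contact["email"])

# Each segment can now receive its own industry-specific campaign.
for industry, emails in segments.items():
    print(f"{industry}: {len(emails)} contact(s)")
```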


Messaging

Let’s say you’ve figured out that customers engage a lot with emails that highlight new products. You’ll definitely want to keep those in rotation. The next step is to optimize your new product messaging. Do customers care more that your product saves them money, or that it lets them unleash their creativity? An A/B test that emphasizes cost savings in one version of the email and creative inspiration in the other should answer that question.

Cadence

Cadence is also hard to A/B test, but it’s still worth trying to optimize. One approach is to simply increase or decrease the frequency of your marketing emails, and watch what happens to your clicks and unsubscribes.

Or you could manually split your list, and send each group a version of the same campaign with a different cadence. This approach is a bit more work, but also more accurate.

Whichever approach you choose, don’t forget to compare total clicks across the whole campaign, not just per-email click-through rates. Emailing customers more frequently might mean fewer clicks per email, but more clicks overall.
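
The arithmetic is worth spelling out. A toy example with made-up numbers:

```python
# Made-up numbers: per-email CTR often dips as frequency rises,
# but total clicks per month can still go up.
list_size = 8_000

plans = {
    "weekly":   {"emails_per_month": 4, "ctr": 0.030},  # 3.0% per email
    "biweekly": {"emails_per_month": 2, "ctr": 0.042},  # 4.2% per email
}

for name, plan in plans.items():
    total_clicks = list_size * plan["emails_per_month"] * plan["ctr"]
    print(f"{name}: {total_clicks:,.0f} clicks per month")

# weekly: 960 vs biweekly: 672 – the more frequent cadence wins here,
# provided unsubscribes stay flat.
```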


Subject lines

Subject lines are one of the most popular parts of an email to A/B test, but be careful not to overestimate their importance. Content and messaging will have a bigger impact on performance.

That said, email subject lines are very easy to A/B test. So once you’ve optimized in other areas, you might as well try out a few different ideas. You might spot some surprising patterns.

Copy

Email can be a good testing ground to see what tone and voice work best for your brand. Can you charm your customers with a little humor, or do they respond better to a straightforward approach?

As you optimize copy, pay particular attention to the call to action. It’s the last chance to persuade customers to click, so it’s worth really dialing it in. If ‘Start Free Trial’ outperforms ‘Try Now’ in an A/B test, you know which to use going forward.


Design

Email design is more than aesthetics. Good design choices guide the reader through an email and encourage them to click. A/B testing for design can range from comparing two totally different visual formats to nitpicky fine-tuning.

Once you’ve gotten the big-picture questions like email type and messaging settled, the possibilities for micro-optimizations to email design are endless. Does using a different product photo result in more clicks? What color call-to-action button performs best? Should the text go below the image or above it? For more ideas for strategic design improvements, check out our email design guide.

Analyze the data, and grow your expertise

When scientists conduct experiments, they’re zooming in on a tiny question in a niche field. But all the knowledge they gain over their careers adds up to make them an expert in their area of study.

As a marketer, you want to become an expert in your customers.

Email campaign optimization is great for improving the performance of individual campaigns, but it also helps you understand your customers better. It generates field-tested insights about how they interact with emails, how they see your brand and what messaging they respond to.

The more insights you gain, the better your campaigns will get (and the less you’ll have to test and optimize). In the meantime, put on your lab coat and start experimenting.