It’s the age-old debate of every email marketing conversation: “when is the best time to send an email newsletter?” The answer is — there isn’t one best time. Yes, you read that right. If you want to increase email engagement rates, it’s not as simple as picking a certain day or time.
Similar to Farmers Insurance, “we know a thing or two because we’ve seen a thing or two” when it comes to email marketing. Every year, we study over 100 billion emails to curate an annual report about email marketing trends and engagement. And do you know what we’ve found? The best time to send an email newsletter varies by industry, audience, and engagement goals. There is no one-size-fits-all time to send an email newsletter.
The core of email marketing engagement is a newsletter tailored to your product, brand, and target audience. To accomplish this, it’s essential to continually test, analyze, and optimize your email campaigns. What does this look like in practice? Let’s dig in.
Test your emails
The foundation of strong email engagement is testing what does and doesn’t work for your audience. This includes testing the time of day you send, subject lines, copy, graphics, and other key elements of the email.
Note that this may be different for each audience segment, product, and type of email (e.g., a feature announcement vs. a welcome email) you send. It may sound overwhelming to test so many things across multiple segments, but thankfully there’s a systematic way to approach email tests that will simplify uncovering trends: A/B testing.
1. Segment your email subscriber list
To segment your subscriber list, divide your email list into smaller lists according to key characteristics, such as demographic, business type, purchase behavior, or location. Segments will allow you to see what has the most impact on each brand audience as well as provide more targeted email marketing in the future.
Ideally, your email marketing platform should have a segmentation tool that will make it easy to do. Here’s how it works on Campaign Monitor’s platform.
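If you want to see what segmentation looks like under the hood, here’s a minimal sketch in Python. The subscriber records and field names are hypothetical stand-ins for whatever your ESP exports; the idea is simply grouping one big list into smaller lists by a shared attribute.

```python
from collections import defaultdict

# Hypothetical subscriber records; in practice these would come
# from your ESP's export or API.
subscribers = [
    {"email": "a@example.com", "location": "US", "plan": "free"},
    {"email": "b@example.com", "location": "UK", "plan": "paid"},
    {"email": "c@example.com", "location": "US", "plan": "paid"},
]

def segment_by(subscribers, key):
    """Group subscribers into smaller lists by a shared attribute."""
    segments = defaultdict(list)
    for sub in subscribers:
        segments[sub[key]].append(sub)
    return dict(segments)

by_location = segment_by(subscribers, "location")
# by_location["US"] holds both US subscribers; by_location["UK"] holds one
```

The same helper works for any characteristic you track, such as purchase behavior or business type, by changing the `key` argument.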
2. Form a hypothesis
Once you have segmented lists, it’s time to form a hypothesis, or “educated guess,” just like you would in a scientific test. To develop your hypothesis, first pick a segment of your list to focus on, then pick a single element to test that’s key for that group.
For example, you may make an educated guess about the outcome of changing the time you send welcome emails. Like any good goal, your hypothesis should be S.M.A.R.T. (Specific, Measurable, Achievable, Relevant, and Time-bound). In this case, your hypothesis could be “sending welcome emails within 10 minutes of a user joining will increase email open rates by 6% over the next three months with the new user segment.”
3. Split each segment into an “A” and “B” test group
Now that you’ve formed your hypothesis, split the subscriber segment in two: an “A” group for your control group and a “B” group for your test group.
Split the segment equally at random to ensure the results aren’t skewed one way or the other. The easiest way to achieve random group selection is to use an email service provider (ESP) that has built-in A/B testing.
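A built-in A/B tool does this for you, but the random split itself is simple. Here’s a sketch of what “split the segment equally at random” means in code (the fixed seed is only there to make the example reproducible):

```python
import random

def split_ab(segment, seed=None):
    """Randomly split a segment into equal-sized A (control) and B (test) groups."""
    rng = random.Random(seed)  # seed only for reproducibility in examples
    shuffled = segment[:]      # copy so the original list stays untouched
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# Split a hypothetical 1,000-subscriber segment into two groups of 500.
group_a, group_b = split_ab(list(range(1000)), seed=42)
```

Shuffling before splitting is what keeps the groups unbiased; taking the first half of an alphabetized or signup-date-ordered list would skew the results.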
Assess whether each group is large enough to produce statistically significant results. If the groups are too small or not varied enough, the test will mostly reflect random noise, whereas a larger group increases the accuracy of your results by reducing the influence of chance.
A statistically significant group is determined by a few factors and a lot of math. If you’re not a statistician or just don’t like doing math (because who does?), you can easily find the right size by using an A/B test calculator. A good starting size is usually at least 1,000 subscribers, but again, that can be lower or higher depending on the test and the subscriber list.
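For the curious, the “lot of math” behind those calculators usually boils down to a standard two-proportion sample-size formula. Here’s a sketch using only Python’s standard library; the 20% baseline open rate is an assumption chosen to pair with the +6-point hypothesis from earlier.

```python
import math
from statistics import NormalDist

def sample_size(p_baseline, p_expected, alpha=0.05, power=0.80):
    """Minimum subscribers per group for a standard two-proportion
    z-test at the given significance level (alpha) and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    effect = (p_expected - p_baseline) ** 2
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect)

# Assumed numbers: 20% baseline open rate, hoping for 26%
# (the +6 points from the hypothesis above).
n = sample_size(0.20, 0.26)  # roughly 770 subscribers per group
```

Note how close that lands to the 1,000-subscriber rule of thumb; smaller expected lifts push the required group size up fast.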
4. Create “A” and “B” test assets
To test a specific aspect of your email, create two variations of the same email with just that single element changed to reflect your hypothesis.
For example, create two identical welcome emails, but send one at the time you typically send your welcome emails and one at the time reflected in your hypothesis. Following the hypothesis example above: if you typically send your welcome emails two days after the user joins, send your control email at this time. Your test group email could be sent 10 minutes after the new user joins to test the effectiveness against your baseline results from your control group.
The only thing different between the two emails should be the time you send them. Testing more than one element at once is called multivariate testing. For example, a multivariate test might vary both the send time and the subject line. Reserve multivariate testing for when you’re specifically testing combinations of elements, and it’s best to run it only after you’ve tested each element individually.
For example, after you test and find the most effective time to send your email, you can then combine it with winning subject lines to measure the combined impact. If you attempt to test all aspects of an email at the same time, it can be difficult to determine which is contributing positively or negatively to the overall outcome.
5. Run your test on a platform that can measure results
Now it’s finally time to hit play on your test. Make sure you send your email from an ESP that has a strong analytics dashboard so you can easily measure and assess the results. Remember to isolate all variables except the one you’re testing. So if you’re testing send times, don’t also write different subject lines or vary the day of the week. Use the same subject line in both emails, and change only the time sent.
Analyze the data
Once you’ve run your test, it’s time to assess the outcomes and determine if your hypothesis was correct or not. When testing the hypothesis above, for example, look at open rates for each email segment to measure the impact of send time. Whichever group had the highest open rate would be the “winner.”
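Declaring a “winner” ideally also checks that the gap is bigger than chance alone would produce. One common way to do that (an illustrative sketch, not any particular ESP’s method) is a two-proportion z-test on the open rates; the counts below are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def open_rate_z_test(opens_a, sent_a, opens_b, sent_b):
    """One-sided two-proportion z-test: did group B's open rate beat group A's?"""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)  # combined open rate
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    p_value = 1 - NormalDist().cdf(z)  # probability the gap is pure chance
    return z, p_value

# Hypothetical results: control opened at 20%, test group at 26%.
z, p = open_rate_z_test(opens_a=200, sent_a=1000, opens_b=260, sent_b=1000)
significant = p < 0.05  # True here, so B is a defensible winner
```

If `p` came back above 0.05, the honest conclusion would be “no clear winner yet,” and you’d keep the test running or try a larger segment.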
If you’re using an ESP that has built-in A/B testing, the platform should do most of the hard work for you. For example, in Campaign Monitor’s A/B test analytics dashboard, you can view graphs of your results and conversion values all at the same time.