How-to: Testing your email marketing campaigns
Published on 4 October 2009 | Author Stefan von Lieven
Email-supported marketing offers a variety of benefits. Among the most important are exact measurement and the ability to optimise campaigns in a targeted way. Instead of relying on a crystal ball or someone’s personal opinion, precise analysis of different variants of a mailing makes it possible to draw real conclusions for future marketing.
Generally, in campaigns, we distinguish between A/B tests and multi-variant tests, as well as between pre-tests and split runs. In a split run (usually an A/B test with exactly two variants), the entire mailing list is divided: one half receives variant A, the other half variant B. The results can then be applied to future mailings.
This test variant is often used for newsletters, e.g. to test a new design with a reference group.
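The random split described above can be sketched in a few lines. This is a minimal illustration, not any particular vendor’s implementation; the example addresses are made up:

```python
import random

def split_run(recipients, seed=42):
    """Randomly divide a mailing list into two equal halves for an A/B split run.

    Shuffling before splitting ensures random assignment, so effects such as
    address age do not distort the comparison between the two variants.
    """
    shuffled = list(recipients)
    random.Random(seed).shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]  # (variant A group, variant B group)

group_a, group_b = split_run(f"user{i}@example.com" for i in range(1000))
```

A fixed seed is used here only to make the split reproducible for auditing; in production you would typically store the assignment per recipient instead.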
A pre-test is the testing of different variants with small cross sections of the mailing list, prior to the actual dispatch. Here, for example, five alternatives are tested, each with a randomly chosen five percent of the mailing list, in order to subsequently send the winning variant to the remaining 75 percent of the list.
These pre-tests allow a direct and potentially significant improvement of campaign results. However, since a large number of variants (multi-variants) is frequently tested, they are more costly and require more forethought due to the longer measurement period.
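The sampling for such a pre-test (five variants, five percent of the list each, the rest held back for the winner) could be sketched like this; the numbers mirror the example above and are otherwise arbitrary:

```python
import random

def pretest_sample(recipients, n_variants=5, fraction=0.05, seed=1):
    """Draw one random test group per variant plus the holdout that later
    receives the winning variant.

    Each variant gets `fraction` of the list; the remainder is the holdout.
    """
    pool = list(recipients)
    random.Random(seed).shuffle(pool)  # random selection avoids bias, e.g. address age
    size = int(len(pool) * fraction)
    groups = [pool[i * size:(i + 1) * size] for i in range(n_variants)]
    holdout = pool[n_variants * size:]
    return groups, holdout

groups, holdout = pretest_sample(range(10_000))
# five test groups of 500 recipients each; 7,500 remain for the winning variant
```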
Vital in both methods is the random selection of recipients, so that effects such as address age do not distort the result.
If you are working with individualisation and segmented customer groups, you should also take the following into account: first build the segments that are relevant for your business model, and then test the different variants separately within each segment. Otherwise, you will only improve the average across all segments.
Prior to testing, define the core parameters you wish to optimise, e.g. the conversion rate. Then make sure all relevant benchmarking data are collected correctly.
In the following, you will find a few test options, with the parameter to analyse given in brackets behind each:
Sender (opens)
The sender is the most important criterion for trust and a central cue for judging relevance. With the choice of sender name and address, you can test your company name, a specific person, or something completely different. In general, however, consistency is recommended, as the sender is memorised and recognised.
Subject line (opens)
The subject line offers considerable potential for improving opens. Simple changes in word order or specific keywords, such as brand names, can often achieve a lot.
Here you can find a few tips on how to vary your subject lines.
Date and Time (opens / clicks)
Emails are more likely to be read if the recipient is sitting at the computer at the time of dispatch, because the email then does not have to compete with a long list of other unread messages. Testing the day of the week and the time of day when this is typically the case can therefore improve open and click rates.
With suitable software solutions, such as ELAINE, the optimal moment can be detected automatically and customer-specific dispatch times can be used.
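A very simplified version of such customer-specific dispatch timing is to look at when each recipient has historically opened mailings and send at that hour. This is only an illustrative sketch of the idea; ELAINE’s actual algorithm is not described in the article:

```python
from collections import Counter
from datetime import datetime

def best_send_hour(open_timestamps, default_hour=10):
    """Pick the hour of day at which a recipient has historically opened
    most often; fall back to a default hour when there is no history."""
    if not open_timestamps:
        return default_hour
    hours = Counter(ts.hour for ts in open_timestamps)
    return hours.most_common(1)[0][0]

# Hypothetical open history: two morning opens, one afternoon open.
opens = [datetime(2009, 10, 1, 9, 15),
         datetime(2009, 10, 2, 9, 40),
         datetime(2009, 10, 3, 14, 5)]
hour = best_send_hour(opens)  # 9
```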
Content (clicks, conversion rate)
Good content is the real key to relevance and good performance figures. However, in addition to the general questions of exciting content and the right volume, many small things can achieve a lot here.
In retail, it is important to test the type and number of offers. A discount of 10 percent or $10 can make a significant difference in conversion. The creative format and the differentiation of customer segments can also have an influence. In B2B, the following applies: less is more. Bullet points and crisp information help make the most of customers’ limited time.
Style and layout (clicks, conversion rate)
At the latest when it comes to design issues, findings from analytical evaluation are often laid aside. Here in particular, however, we have to break away from corporate-design dogmas and pure matters of taste. Test, for example, the type and size of images. Large header graphics often increase the click rate, but they should not distract from the call to action. The call to action and all substantial arguments for it should always be above the fold.
Personalisation (opens, clicks, conversion rate)
Personalisation is a simple way to generate individual relevance. Test your personalisation in the subject line, but above all in the content. Depending on your data, the possibilities go far beyond addressing recipients by name. Test, for example, personalisation based on the recipient’s location or interests, as well as previously purchased products or services.
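Whatever you test, personalisation should degrade gracefully when a data field is missing, since an empty slot (“Dear ,”) hurts more than no personalisation at all. A minimal sketch, with placeholder names and fallbacks chosen purely for illustration:

```python
def personalise(template, profile, fallbacks):
    """Fill placeholders like {first_name} from the recipient profile,
    substituting a neutral fallback when a field is missing or empty."""
    values = {**fallbacks, **{k: v for k, v in profile.items() if v}}
    return template.format(**values)

text = personalise(
    "Dear {first_name}, new offers in {city}!",
    {"first_name": "Anna", "city": ""},          # city missing for this recipient
    {"first_name": "customer", "city": "your area"},
)
# "Dear Anna, new offers in your area!"
```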
Individualisation (clicks, conversions)
The basis for individualisation is valid data and sufficient room for content variation. Test, for example, gender-specific variants or differentiation between existing customers and prospective customers. Even if you do not practise sophisticated segmenting, small differences in the content can improve the overall results. Just test it!
Frequency (opens and unsubscribes)
If you dispatch a large number of mailings, frequency can be another important criterion. With the reactivation of users, for example, you may wish to deliberately create a “break” after a phase of inactivity, so that the reactivation mailing receives the necessary attention. Test which frequency is best for your target group and whether different segments make sense for email frequency. The preferred frequency is also a characteristic that may be a useful topic for a user survey. In general, the following applies: less is often more. In some cases, even user-individualised frequency management makes sense.
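User-individualised frequency management can start as simply as a per-recipient minimum gap between mailings. The function name and the seven-day default below are illustrative assumptions, not a recommendation from the article:

```python
from datetime import datetime, timedelta

def may_send(previous_sends, now, min_gap_days=7):
    """Per-recipient frequency cap: allow a mailing only if the most recent
    send to this recipient is at least `min_gap_days` old."""
    if not previous_sends:
        return True
    return now - max(previous_sends) >= timedelta(days=min_gap_days)

now = datetime(2009, 10, 4)
may_send([datetime(2009, 10, 1)], now)   # False: last send only 3 days ago
may_send([datetime(2009, 9, 20)], now)   # True: break long enough
```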
Landing pages / websites (conversions)
Often forgotten, but among the key criteria for success, are the landing pages, websites and shops behind the click in the mailing. Here, too, the following applies: test and optimise! Try different ways to “pick up” the customer and provide a simple process. Do the cross-selling references on the order page really lead to more turnover, or do they distract from the actual order? Is the subscription form for an event pre-filled with the customer’s data, or does an already-known subscriber have to enter everything again?