The Automation Center has a testing function that lets you make sure that the customer journey you have designed works as expected. You can start testing a program as soon as all its paths are validated, and stop the test whenever you want. When you stop a test, the program reverts to the In design state.

The testing function applies only to programs that have not yet been launched. Live programs can be tested with the A/B Splitter node (see below).

Note: Testing is not available for programs which start with an on-import node.

Introduction

Here are some answers to common questions on testing programs.

What am I actually testing?

The testing feature is designed to test whether the program behaves as expected; it is not designed to test whether it has the desired effect on the contacts that pass through it. In other words, your testing KPIs are functional rather than strategic.

What you should be looking at is whether you get the messages you think you should get from the program, at the right time and after the right trigger.

Who should I use to test the program?

We recommend creating a test segment from contacts within your own organization (e.g. a segment where the email address ends in @yourcompany.com).
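
For illustration only, here is a minimal sketch of the kind of rule such a test segment applies. In practice you build this criterion in the segment editor; the function and contact list below are hypothetical:

    # Hypothetical sketch of a test-segment rule: keep only contacts whose
    # email address ends in your own company's domain.
    TEST_DOMAIN = "@yourcompany.com"

    def is_test_contact(email: str) -> bool:
        """True if the contact belongs to the internal test segment."""
        return email.lower().endswith(TEST_DOMAIN)

    contacts = ["anna@yourcompany.com", "customer@example.com"]
    print([c for c in contacts if is_test_contact(c)])
    # ['anna@yourcompany.com']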

Why should I test a program?

Once you launch a program your options for changing it become somewhat limited. Since you can stop the test, edit the program and test again as many times as you like, this is a good way to identify weak points and improve them before you go live.

Which programs can I test?

Only programs which start with a transactional entry point (e.g. data change or form submit) can currently be tested.

How does testing affect reporting?

Contacts passing through the program during the test do not show up in the program summary. On the other hand, all messages sent, opened and clicked will show up in the respective email reporting pages. This is because your program may rely on this data, so we need to make sure that it is recorded even during the test.

As long as you test with a small segment, your test responses should not have any significant impact on program reporting once the program has been live for a few days.

Testing a program before launch

When you test a program you must select a test segment. Only email addresses in this test segment will be processed by the program during the test.

You can now use either the entire segment or individual contacts in it to test your program by triggering the entry criteria.

Please note that during testing, a maximum of 50 contacts can pass the entry node of your program; contacts above this threshold will not be processed. We set this limit to prevent you from unintentionally sending test emails to a large contact list.

However, this limit applies only to the given testing session, so if you stop the test and then start testing again, the counter is reset for the new session. You can always check whether your test program has reached the limit by looking at the contact counter displayed above the second node of your program in testing mode.
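
As a rough illustration of how this limit behaves (the class and names below are our own sketch, not the product's actual implementation), it works like a simple per-session counter:

    # Illustrative sketch of the per-session entry limit: at most 50
    # contacts pass the entry node per testing session, and stopping and
    # restarting the test resets the counter.
    ENTRY_LIMIT = 50

    class TestSession:
        def __init__(self):
            self.entered = 0  # the contact counter shown above the second node

        def admit(self, contact) -> bool:
            """Let a contact pass the entry node unless the limit is reached."""
            if self.entered >= ENTRY_LIMIT:
                return False  # contacts above the threshold are not processed
            self.entered += 1
            return True

    session = TestSession()
    print(sum(session.admit(c) for c in range(60)))  # 50: the last 10 are dropped
    session = TestSession()  # a new testing session starts with a fresh counter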

Testing individual paths in a program

You can test individual paths in a program by using the A/B Splitter node (see below). With this feature you can split any path and assign each of the test paths a % value. Contacts proceeding along the main path will be randomly distributed along the test paths according to these % values. You can add new paths and change the % value of existing paths at any time. However, you cannot delete a test path once contacts have passed along it.

Once enough contacts have participated in your test and you are confident that you have an optimal choice, simply reduce the % values of the other paths to 0 and all future contacts will proceed along the remaining path.

Note: There is no automated selection of success criteria for test paths. You should use filters or the trends analysis to determine which path is providing the best results for your program. To test different emails you must create a new email for each path. You cannot test different versions of the same email.

Testing live programs

Once a program has been launched, you can only A/B test individual paths (see below). Any other changes you make can fundamentally alter the nature of the program and make before-and-after comparisons meaningless. They can also adversely affect the experience of contacts already inside the program.

One option is to copy a program, modify and test the copy, and switch over to it once you are happy with the result. Contacts who have already entered the original program will have to proceed through it, but new contacts can enjoy the new version.

If you don’t want to ignore contacts already in a program (e.g. waiting in a timer) then your best option is to pause the program, make the changes you want, then copy the program and test the copy. Once you are happy with the result you can resume the original program with the new workflow.

A/B testing

The A/B Splitter node is a great way to test minor improvements in your program, or to test multiple emails against each other. In this way you can continually experiment with new ideas and keep optimizing your strategies and improving your customer journey.

You decide how big your test groups are, and how big your control group is, by assigning percentages to each path of the splitter node. At the end of your test, you increase the preferred path to 100% and reduce the others to 0%.

When you feel that you have tested enough and want to choose one path over the others, you should consider one final time whether the results are statistically meaningful before you make a decision. The key questions to bear in mind are:

  • Was the sample group large enough?
  • Are the differences between the various paths really significant (i.e. would you get the same result 19 times in 20 similar tests)? A quick way to check this is sketched below.
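
The "19 times in 20" rule of thumb corresponds to a 95% confidence level. As a hedged sketch with made-up numbers (the Automation Center does not compute this for you), a two-proportion z-test is one standard way to check it:

    # Sketch: two-proportion z-test to check whether the difference between
    # two paths' conversion rates is significant at the 95% level
    # ("the same result 19 times in 20"). All numbers are invented.
    from math import sqrt, erf

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled conversion rate
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
        return z, p_value

    z, p = two_proportion_z(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
    print(f"z = {z:.2f}, p = {p:.3f}")
    # Significant at the 95% level when p < 0.05 (borderline in this example).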

About how we assign contacts to paths

You might notice at first that the numbers of contacts passing along each splitter path do not exactly correspond to their respective percentages. You’ll be happy to know that there is a very good reason for this…

For statistical methods to work well, we need to make sure that we eliminate any effects that could skew the results. For batch emails this is easy – we simply divide the launch list randomly between the paths. With an Automation Center program it is a bit more complicated, since contacts are passing through one by one, and we do not know beforehand how many contacts will pass through the nodes before the test ends.

Because of this, the only way we can make sure that we don’t skew the results is by randomly assigning each individual contact to one of the paths according to their relative probability. And probability being what it is, it takes a while before the distribution begins to settle down into a stable pattern. It may take several thousand contacts to pass through before the differences become too small to notice. So be patient, wait until your test is stable, and rest happy in the knowledge that your A/B tests are scientifically valid.
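
A minimal sketch of this kind of per-contact weighted assignment (our assumption about the mechanism, matching the description above) shows why small samples deviate from the configured split while large samples converge to it:

    # Sketch of per-contact weighted random assignment: each contact is
    # routed independently according to the paths' percentages, so small
    # samples can be well off the configured split and large samples
    # settle close to it.
    import random
    from collections import Counter

    PATHS = {"A": 0.5, "B": 0.3, "C": 0.2}  # splitter percentages

    def assign(paths):
        """Pick one path for a single contact, weighted by its % value."""
        return random.choices(list(paths), weights=paths.values(), k=1)[0]

    for n in (50, 500, 5000):
        counts = Counter(assign(PATHS) for _ in range(n))
        print(n, {p: f"{counts[p] / n:.0%}" for p in PATHS})
    # With 50 contacts the shares can be far from 50/30/20; by a few
    # thousand contacts they settle close to the configured values.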