The Value Control reports for email campaigns are available in the Analysis menu.
Note: This feature is available to Smart Insight customers only.
The Value Control page gives you a summary of all the campaign evaluations over the past 30 days and lists all the campaigns that have been launched using this feature.
In the last 30 days
This section presents a summary of the effectiveness of your recent campaigns, as calculated by Value Control.
The figures represent the following metrics:
- added revenue – You would have missed out on this much revenue had the campaigns not been sent.
- additional buyers – These contacts would not have made purchases had the campaigns not been sent.
- additional engagements – These website visits would not have been made had the campaigns not been sent.
- campaigns evaluated – The number of campaigns that were evaluated in this period.
- proven value – This many of the evaluated campaigns showed proven additional revenue results with a confidence level of 95% or higher.
This table lists all the campaigns evaluated in the last 30 days.
Each campaign is shown with the size of the test and control groups and the comparisons for the four main metrics:
- Purchase rate – The % of contacts who made a purchase.
- Revenue per purchase – The average order value of a purchase from that group.
- Revenue per person – The average revenue spread over all contacts in the group.
- Engagement rate – The % of contacts who visited the website.
In addition, you can see the revenue uplift per person (how much more each contact in the test group spent) and the Total added revenue that the test group brought in (the revenue uplift per person multiplied by the size of the test group).
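The arithmetic behind these two figures can be sketched as follows. The group sizes and revenue figures below are made up for illustration, and the field names are not the product's API:

```python
# Hypothetical test and control groups for one campaign launch.
test = {"size": 90000, "buyers": 2700, "revenue": 162000.0, "visitors": 13500}
control = {"size": 10000, "buyers": 250, "revenue": 14000.0, "visitors": 1300}

def metrics(group):
    """The four main metrics shown for each group in the campaigns table."""
    return {
        "purchase_rate": group["buyers"] / group["size"],            # % of contacts who bought
        "revenue_per_purchase": group["revenue"] / group["buyers"],  # average order value
        "revenue_per_person": group["revenue"] / group["size"],      # revenue over all contacts
        "engagement_rate": group["visitors"] / group["size"],        # % of contacts who visited
    }

# Revenue uplift per person: how much more each test-group contact spent.
uplift_per_person = (
    metrics(test)["revenue_per_person"] - metrics(control)["revenue_per_person"]
)

# Total added revenue: the uplift scaled up to the whole test group.
total_added_revenue = uplift_per_person * test["size"]
```

With these made-up figures the test group averages 1.80 per person against 1.40 for the control group, giving an uplift of 0.40 per person and 36,000 in total added revenue.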
In the Result column you have a simple visual indicator of the campaign's effectiveness:
- A green circle means that all metrics for the test group were better than for the control group.
- A red circle means that all metrics for the test group were worse than for the control group.
- A red/green circle means that some metrics were better for the test group than for the control group, and some were worse.
- A gray circle means that the results are not statistically relevant.
- A gray/green circle means that some metrics were better for the test group than for the control group, while others were not statistically relevant.
- A gray/red circle means that some metrics were worse for the test group than for the control group, while others were not statistically relevant.
The campaign reporting screen for Value Control is split into three sections:
Here you have the details of when the campaign was launched and how many contacts were in the test and control groups, as well as an overview of the results.
Click the arrow to the left of the campaign title to return to the Email Analysis overview page.
You also have three more controls available in the upper right corner:
- View campaign report – Open the standard email analysis Results Summary page for that campaign.
- Calculate results for a segment – You can select a segment from the list and Value Control will add the data for that segment to the Detailed Results table (naturally, this applies only to contacts who are both in that segment and on the launch list). Please note that this will not refresh the main campaign summary.
- Save as contact list – This creates two contact lists, one for the test group and one for the control group. These will appear on the Contact Lists page with the naming convention ‘campaign_name – Test group’ and ‘campaign_name – Control group’.
Here you can see the metrics for the test and control groups side by side, as well as total figures for the impact that can be attributed to this campaign alone. For each chart the statistical confidence level is indicated, as well as the overall effect (positive or negative).
The difference between the two columns shows how much the campaign has affected your customers’ normal engagement and purchase patterns, and whether or not it was worth sending.
Above each chart you can see the difference between the two groups in absolute values (the % figures are percentage points).
Value Control also calculates whether or not each comparison is statistically relevant, and shows if the figures are above or below the 95% confidence level threshold.
What does “Statistically relevant” mean?
A key concept to understand when working with Value Control is how we determine whether the results are statistically relevant. Based on the distribution of the underlying data per contact and the variance of the behavior between contacts, we establish the confidence level of the results.
A confidence level of 95%, for example, means that if you were to measure the effectiveness of the same campaign with 100 similar launch lists, you would expect to get the same result in at least 95 of the evaluations.
If the behavior is very “noisy”, i.e. with high variance in relation to the measured difference, the confidence level will usually be lower, and vice versa. These indicators are there to help you decide whether or not to take action based on these results.
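Value Control's exact statistical method is not documented here, but a two-proportion z-test is one standard way to check whether a difference in a rate metric (such as purchase rate) between the test and control groups clears a 95% confidence threshold. The group figures below are made up for illustration:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z-statistic for the difference between two proportions
    (e.g. purchase rate in the test group vs. the control group)."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical groups: 3.0% purchase rate in the test group (2,700 of 90,000)
# vs. 2.5% in the control group (250 of 10,000).
z = two_proportion_z(2700, 90000, 250, 10000)

# For a two-sided test, |z| > 1.96 corresponds to a confidence level above 95%,
# i.e. the difference would be considered statistically relevant.
relevant = abs(z) > 1.96
```

Note how sample size drives the result: the same 0.5-percentage-point difference measured on much smaller groups would produce a larger standard error, a smaller z-statistic, and a lower confidence level, which is the "noisy behavior" effect described above.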
Each time you calculate the results for a specific segment within the launch list, a breakdown of the results for that segment is shown here. In this way you can easily compare the behavior of your different customer lifecycle segments, for example, against the overall total.