Hi
I've run various A/B tests, but I'd like to know how and where I can determine what defines a winner.
e.g. with an email, the split test may be on which version gets the better open or click-through rate. But what % difference between A and B decides whether there is in fact a 'winner' or whether the results are inconclusive? 1%, 2%, 5%, 10%?
If I'm sending out to 20k users and one email has a 2% higher open rate, I'd take that as a winner. But from my experience to date, the system seems to require an open-rate gap of around +5% before the result counts as 'conclusive'. In my experience such a dramatic margin is rare, which renders the test somewhat pointless.
Is this a setting we can manually update in the backend of the system, so we can prescribe what margin constitutes a winner? It really should be.
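For context, the kind of check I'd expect to be running behind the scenes is a standard two-proportion significance test rather than a fixed % margin. A minimal sketch of what I mean (my own assumption, not the product's documented algorithm; the numbers are illustrative):

```python
from math import sqrt, erf

def two_proportion_z_test(opens_a, sent_a, opens_b, sent_b):
    """Two-sided z-test for a difference in open rates.

    A small p-value (e.g. < 0.05) suggests the gap is unlikely
    to be random noise, i.e. a 'conclusive' winner.
    """
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)           # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))      # two-sided, via normal CDF
    return z, p_value

# 20k recipients split 50/50; version B opens 2 points higher (22% vs 20%)
z, p = two_proportion_z_test(2000, 10_000, 2200, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")   # z ~ -3.47, p ~ 0.0005
```

On those numbers, a 2-point gap across 20k recipients is highly significant (p well under 0.05), which is why a hard ~5% requirement feels excessive to me.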
Hi Dan,
Any chance you heard back from Microsoft? I'm having the same issue and am hoping you were able to resolve it.
Hi,
Opening a ticket is a wise choice, because I couldn't find a clear description of the required rate gap, and I don't have a large enough contact list to reproduce your scenario.
Assuming the required gap is 5%, that gap represents less than one open whenever a version is sent to fewer than 20 contacts, so small test audiences cannot even register it.
So, without it being stated in the documentation, it is difficult to know exactly how the winner is calculated for a large audience.
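As a rough illustration of why audience size matters here (a standard power calculation from statistics, my own assumption rather than Dynamics' documented method), this sketch estimates how many contacts each version would need before a given lift becomes reliably detectable:

```python
from math import sqrt, ceil

def contacts_needed_per_version(base_rate, lift, z_alpha=1.96, z_power=0.84):
    """Approximate contacts needed per version to detect an absolute
    open-rate lift at 95% confidence and 80% power (standard
    two-proportion sample-size formula; not Dynamics' documented method)."""
    p1, p2 = base_rate, base_rate + lift
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / lift ** 2
    return ceil(n)

print(contacts_needed_per_version(0.20, 0.02))  # ~6,500 per version for a 2-point lift
print(contacts_needed_per_version(0.20, 0.05))  # ~1,100 per version for a 5-point lift
```

So detecting a 2-point lift needs roughly six times the audience that a 5-point lift does, which may be why smaller gaps come back as inconclusive.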
Best Regards,
Nya
Hi Nya
"It should be that as long as a version has a higher click-through rate or open rate, it should be a winner." This isn't accurate. I've had campaigns where the difference between A and B was 1.5%, 2.5% 3% yet the result was still given as 'inconclusive'.
I guess this might be a bug so I'll have to log a ticket with Microsoft, it just seems a lot of hassle for what should be something clearly documented somewhere.
Hi,
Do you mean that you ran an A/B test and, when the test duration you set was reached, the result was still determined to be inconclusive even though one version had a 2% higher open rate than the other?
As the official documentation says, “In each case, the winner is the version that produced the most clicks or opens as a proportion of the total number of times that version was sent.”
It doesn't mention a definite threshold for determining a winner, so in principle, as long as a version has a higher click-through or open rate, it should be the winner.
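Taken literally, the documented rule is just a proportion comparison; a minimal sketch (illustrative numbers only, not taken from the product):

```python
# A literal reading of the documented rule: compare opens as a
# proportion of sends; the higher proportion wins.
sent_a, opens_a = 10_000, 2_000
sent_b, opens_b = 10_000, 2_200
rate_a, rate_b = opens_a / sent_a, opens_b / sent_b
winner = "B" if rate_b > rate_a else "A"
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  ->  winner: {winner}")
```

If the app still reports 'inconclusive' despite a clear gap like this, that suggests an undocumented significance threshold on top of the documented rule.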
If this data discrepancy does exist and affects you significantly, I recommend opening a support ticket with Microsoft to resolve the issue.
Of course, you can also choose which version to keep manually based on the test result:
Email A/B testing (Dynamics 365 Marketing) | Microsoft Docs
If this helped you, I'd appreciate it if you'd mark this as a Verified Answer, which may in turn help others as well.
Best Regards,
Nya