It would be useful to have some built-in statistical analysis of A/B test results. Depending on the number of views and the magnitude of the differences between options, the result of an A/B test may or may not be statistically significant. It would not take much to help people decide whether to continue the test, or which option is better, based on the p-value of a statistical test.
I am putting this request in analytics, but it applies to emails, CMS, CTAs, and anywhere else A/B testing is offered. A little built-in statistical analysis would make A/B testing more rigorous.
It would be great if the winner indicator (green dot) on a metric got a star or an S inside it to indicate when the difference was statistically significant. A 90% confidence level would likely work for most users, but it would be great if there were an account setting to choose 90/95/99.
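For illustration, here is a minimal sketch of the kind of check being requested, using a standard two-proportion z-test on conversion counts. The function name, parameters, and numbers are hypothetical, not anything from the product:

```python
from statistics import NormalDist

def ab_test_significant(conversions_a, views_a, conversions_b, views_b,
                        confidence=0.90):
    """Return (p_value, significant) for a two-sided two-proportion z-test."""
    p_a = conversions_a / views_a
    p_b = conversions_b / views_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conversions_a + conversions_b) / (views_a + views_b)
    # Standard error of the difference between the two proportions.
    se = (p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b)) ** 0.5
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_value, p_value < (1 - confidence)

# Hypothetical example: 120/2000 vs 150/2000 conversions, 90% confidence.
p, sig = ab_test_significant(120, 2000, 150, 2000, confidence=0.90)
print(f"p-value = {p:.3f}, significant = {sig}")
```

With a configurable `confidence` parameter like this, the same test could back a 90/95/99 account setting: the winner indicator would only get its star when the p-value falls below the chosen threshold.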