Written by Amit Levi

Do You Trust Your A/B Test Results? Run and Monitor Them for Continued Success

Product managers often A/B test their innovations before releasing them to the standard user experience. Whether it is a complete redesign of a UI layout or a specific new log-in mechanism, many product changes undergo A/B testing before roll-out. This way you can prove that the new experience outperforms the old one.
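"Proving" a winner typically means a statistical significance test on the two variations' conversion rates. As a minimal sketch (the function name and sample numbers are illustrative, not from the original), a two-sided two-proportion z-test looks like this:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: is variant B's conversion
    rate significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # erfc(|z|/sqrt(2)) equals the two-sided tail probability.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Example: control converts 100/1000, variant converts 150/1000.
z, p = two_proportion_ztest(100, 1000, 150, 1000)
```

A small p-value (conventionally below 0.05) indicates the difference is unlikely to be noise, which is the usual bar before declaring a winning variation.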

But, have you ever wondered what happens to the A/B test after the test is over?

Monitoring an A/B Test’s Winning Results

Today’s A/B test management process has a missing element. Once a test finishes and the winning variation is rolled out as the default experience, its results are no longer monitored, on the assumption that they will remain stable.
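Continued monitoring does not have to be elaborate. As a minimal sketch (not Anodot's actual algorithm; window size and threshold are illustrative assumptions), you can keep a rolling baseline of the winning variation's daily metric and flag values that drift far from it:

```python
from collections import deque
import statistics

def make_monitor(window=14, threshold=3.0):
    """Return a checker that flags a daily metric value deviating
    more than `threshold` standard deviations from the rolling
    baseline of the last `window` observations."""
    history = deque(maxlen=window)

    def check(value):
        anomaly = False
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and abs(value - mean) > threshold * stdev:
                anomaly = True
        history.append(value)
        return anomaly

    return check
```

Feeding the checker a stable conversion rate builds the baseline; a sudden drop after roll-out is then flagged instead of going unnoticed. A production system would use seasonality-aware baselines rather than a flat rolling window, but the principle is the same.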

At Anodot, we are working with some of the top product managers across various industries. From this experience, we’ve learned that many of them are using Anodot not only for monitoring their active A/B tests, but also after the test has completed. They are using Anodot to monitor how the winning variation continues to perform in production.

Mysterious Drop in Performance

For example, when one of Anodot’s customers ran an A/B test, they validated that the new feature could improve performance by 22%, and then merged the winning variation into their main product branch. In production, the feature performed as expected for more than six months. Then, for no apparent reason, performance drastically deteriorated.

Anodot identified the drop in performance and helped pinpoint the root cause: another A/B test had been released to the main product branch using the same API, triggering an API quota violation. The incident was identified within 12 minutes of its start and resolved about 60 minutes later.

Whether you are running A/B tests or multivariate tests, Anodot can monitor variations during, and even after, these tests.
