A/B testing is a strategy for measuring how a change in a single variable affects audience behaviour or user engagement.
Usually, in ML modelling, candidate models are compared on their scores against held-out offline data, and the best performer is selected. However, a model that scores well offline may not work well enough on live data. In production, models can face issues like concept drift (the relationship between inputs and the target changes) or covariate shift (the distribution of the inputs changes).
A/B testing allows ML models to be evaluated on live data rather than offline data: part of the traffic is served by the incumbent model and part by the challenger, and their real-world metrics are compared. If drift is detected, models can be retrained on fresh data. Hence, A/B testing can serve as an optimization strategy for improving machine learning models in production.