Tutorial (full day)
Online Evaluation for Effective Web Service Development
Nowadays, the development of most leading web services, and of software products more generally, is guided by data-driven decisions based on online evaluation, which qualifies and quantifies the steady stream of web service updates. Online evaluation is used continuously and at large scale in modern Internet companies such as search engines, social networks, media providers, and online retailers (reportedly, Google ran up to 1,000 experiments per day in 2015).
The number of smaller companies that use A/B testing in the development cycle of their products is growing as well. The development of such services strongly depends on the quality of their experimentation platforms. In this tutorial, we give an overview of the state-of-the-art methods underlying everyday evaluation pipelines. We begin with an introduction to online evaluation and the necessary background in mathematical statistics. This is followed by the foundations of the main evaluation methods: A/B testing, interleaving, and observational studies. We then share rich industrial experience in constructing experimentation pipelines and evaluation metrics, emphasizing best practices and common pitfalls.
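To make the statistical foundation of A/B testing concrete, the following sketch (not part of the tutorial materials; the metric, group sizes, and effect size are hypothetical) compares a per-user metric between a control and a treatment group with Welch's two-sample t-test, the kind of basic significance test an experimentation pipeline applies to decide whether an update moved a metric:

```python
# Illustrative A/B test sketch: compare a hypothetical per-user metric
# (e.g. clicks per session) between control (A) and treatment (B)
# using Welch's two-sample t-test.
import math
import random
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    # Standard error of the difference of means (unequal variances).
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_b - mean_a) / se

random.seed(0)
# Simulated experiment: treatment shifts the mean metric by +3%.
control = [random.gauss(1.00, 0.30) for _ in range(5000)]
treatment = [random.gauss(1.03, 0.30) for _ in range(5000)]

t = welch_t(control, treatment)
# For large samples, |t| > 1.96 roughly corresponds to significance
# at the 5% level under the normal approximation.
print(f"t = {t:.2f}, significant at 5%: {abs(t) > 1.96}")
```

In a real pipeline the same decision rule runs over many metrics at once, which is why the tutorial's discussion of pitfalls (e.g. multiple testing, metric sensitivity) matters in practice.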
A large part of our tutorial is devoted to modern, state-of-the-art techniques (including ones based on machine learning) that make it possible to conduct online experimentation efficiently. Finally, we point out open research questions and current challenges that should be of interest to research scientists. We invite software engineers, designers, analysts, and managers of web services and software products, from beginners to advanced specialists and researchers, to learn how to make web service development effectively data-driven.