Minimizing Live Experiments in Recommender Systems: User Simulation to Evaluate Preference Elicitation Policies

Abstract
Evaluation of policies in recommender systems typically involves A/B testing using live experiments on real users to assess a new policy's impact on relevant metrics. This "gold standard" comes at a high cost, however, in terms of cycle time, user cost, and potential user retention. In developing policies for "onboarding" new users, these costs can be especially problematic, since onboarding occurs only once. In this work, we describe a simulation methodology used to augment (and reduce) the use of live experiments. We illustrate its deployment for the evaluation of "preference elicitation" algorithms used to onboard new users of the YouTube Music platform. By developing counterfactually robust user behavior models, and a simulation service that couples such models with production infrastructure, we are able to test new algorithms in a way that reliably predicts their performance on key metrics when deployed live. We describe our domain, our simulation models and platform, results of experiments and deployment, and suggest future steps needed to further realistic simulation as a powerful complement to live experiments.
Reference Key boutilier2024minimizing
Authors Chih-Wei Hsu; Martin Mladenov; Ofer Meshi; James Pine; Hubert Pham; Shane Li; Xujian Liang; Anton Polishko; Li Yang; Ben Scheetz; Craig Boutilier
Journal arXiv
Year 2024
