Mixed methods for evaluating user satisfaction

J. Garcia-Gathright, C. Hosey, B. St. Thomas, B. Carterette, F. Diaz
RecSys 2018

Evaluation is a fundamental part of a recommender system. Evaluation typically takes one of three forms: (1) smaller lab studies with real users; (2) batch tests with offline collections, judgments, and measures; (3) large-scale controlled experiments (e.g. A/B tests) looking at implicit feedback. But it is rare for the first to inform and influence the latter two; in particular, implicit feedback metrics often have to be continuously revised and updated as their underlying assumptions are found to be poorly supported. Mixed methods research enables practitioners to develop robust evaluation metrics by combining the strengths of both qualitative and quantitative approaches. In this tutorial, we will show how qualitative research on user behavior provides insight into the relationship between implicit signals and satisfaction. These insights can inform and augment quantitative modeling and analysis for online and offline metrics and evaluation.

bibtex

@inproceedings{monitor:recsys-tutorial,
  title     = {Mixed Methods for Evaluating User Satisfaction},
  author    = {Garcia-Gathright, Jean and Hosey, Christine and Thomas, Brian St. and Carterette, Ben and Diaz, Fernando},
  booktitle = {Proceedings of the 12th ACM Conference on Recommender Systems},
  series    = {RecSys '18},
  year      = {2018},
  pages     = {541--542},
  numpages  = {2},
  publisher = {ACM},
  address   = {New York, NY, USA},
  location  = {Vancouver, British Columbia, Canada},
  isbn      = {978-1-4503-5901-6},
  doi       = {10.1145/3240323.3241622},
  url       = {http://doi.acm.org/10.1145/3240323.3241622},
  acmid     = {3241622}
}