[cs-talks] UPDATE- CS Upcoming Seminars: IVC (Thurs)
fgreen1 at bu.edu
Tue Feb 16 11:21:34 EST 2016
Human Evaluation and Refinement of Search Clusters
Amy Zhang, MIT
Thursday, February 18, 2016 at 2pm in MCS 148
Abstract: Research has demonstrated that clustering search results into coherent, topical clusters can aid in exploration and discovery. Yet clusters generated by an algorithm for this purpose are often of poor quality and do not satisfy end users. As a result, experts need to manually evaluate and refine the clustered results for each search query, a process that does not scale to large numbers of search queries.
In this work, we investigate using crowd-based human evaluation to inspect, evaluate, and improve clusters to ensure high-quality clustered results at scale. We introduce a workflow that takes as input several different clustered results produced by a collection of algorithms and uses the crowd to assess quality as well as spot and fix problems.
We implement this workflow for a set of top search queries and apps taken from one of the world's largest app distribution platforms. Evaluations of the system demonstrate that our workflow is effective at reproducing the evaluation of expert judges and also improves clusters in a way that agrees with experts and crowds alike.
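The talk doesn't specify the aggregation mechanics, but the core idea — collecting crowd judgments per cluster and comparing clusterings from several algorithms — can be sketched roughly as follows. Everything here (function names, the "coherent"/"incoherent" labels, majority voting) is an illustrative assumption, not the speaker's actual system:

```python
from collections import Counter

def aggregate_judgments(judgments):
    """Majority-vote a list of worker labels for one cluster,
    e.g. ['coherent', 'incoherent', 'coherent'] -> 'coherent'."""
    label, _ = Counter(judgments).most_common(1)[0]
    return label

def pick_best_clustering(clusterings, crowd_votes):
    """Choose among candidate clusterings produced by different algorithms.

    clusterings: dict mapping algorithm name -> its clustering (opaque here)
    crowd_votes: dict mapping algorithm name -> list of per-cluster
                 judgment lists gathered from crowd workers
    Returns the name whose clusters have the highest fraction judged
    'coherent' by majority vote.
    """
    def score(name):
        verdicts = [aggregate_judgments(j) for j in crowd_votes[name]]
        return sum(v == "coherent" for v in verdicts) / len(verdicts)
    return max(clusterings, key=score)
```

A hypothetical run: if algorithm B's clusters are all majority-judged coherent while only half of A's are, `pick_best_clustering` selects B; the "spot and fix" step of the workflow would then operate on the winning clustering.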
Bio: I am a Ph.D. student in Computer Science at MIT CSAIL, advised by Prof. David Karger, and a member of the Haystack Group and UID Group at MIT. My research areas are social computing, HCI, and computational social science. Specifically, I am interested in how to incorporate AI and crowdsourcing into online discussion interfaces. Prior to MIT, I worked as a software engineer, completed a Master's in CS as a Gates Scholar at the University of Cambridge, and earned a Bachelor's in CS from Rutgers University.