Conducting sound measurement studies of the global Internet is inherently difficult. The data collected depend significantly on vantage point(s), sampling strategies, security policies, and measurement populations -- and conclusions drawn from the data can be sensitive to these biases. Crowdsourcing is a promising approach to addressing these challenges, although its epistemological implications have not yet received substantial attention from the research community. We share our findings from leveraging Amazon's \mturk (\mt) system for three distinct network measurement tasks. We describe our failure to outsource the execution of a security measurement tool to \mt, our subsequent successful integration of a simple yet meaningful measurement \emph{within} a HIT, and the successful use of \mt to quickly obtain focused small sample sets that could not easily be gathered by alternate means. Finally, we discuss the implications of our experiences for other crowdsourced measurement research.