PA33A-03:
Collecting, analyzing and assessing big land use data: Results from the cropland capture game

Wednesday, 17 December 2014: 2:10 PM
Carl Salk1, Tobias Sturn1, Steffen Fritz1, Linda M See2, Ian McCallum2, Sabine Fuss1, Christoph Perger2, Martina Duerauer1 and Michael Obersteiner2, (1)IIASA International Institute for Applied Systems Analysis, Laxenburg, Austria, (2)IIASA, Laxenburg, Austria
Abstract:
The International Institute for Applied Systems Analysis (IIASA) has developed a number of tools for assessing the socioeconomic benefit of Earth Observation, such as quantifying the monetary benefit of improved land cover information for mitigation policies. Recently, IIASA has been assessing the benefit of an improved global carbon observation system in the GEOCARBON Project.

Because traditional ground-based land-cover validation is expensive, IIASA has developed crowdsourcing projects such as Geo-Wiki that contribute to land-cover validation. A recent activity is the ‘Cropland Capture’ game, which can be played in a browser or on a mobile device. It can be downloaded or played online at http://www.geo-wiki.org/games/croplandcapture/. In the game, players see an image (from a satellite or ground-based camera) and are asked whether they see any cropland in it. They can answer “yes”, “no” or “maybe” if they are unsure. The game attracted over 3,000 players who made about 4,500,000 classifications of 190,000 unique images.

The benefits delivered by crowdsourcing relative to conventional data acquisition depend critically on the quality of the data received. Players’ rating quality was assessed by measuring their agreement with the crowd, their consistency on images rated more than once, and their agreement with expert validators. These metrics were compared with one another and with potential predictors of user quality: the total number of images rated by a player and their professional background in land-cover science.
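For illustration, the sketch below shows one way such per-player metrics could be derived from raw classification records. The record format, the majority-vote definition of the crowd answer, and all values are assumptions for this example, not the project’s actual pipeline.

```python
from collections import Counter, defaultdict

# Hypothetical classification records: (player_id, image_id, answer), where the
# answer is "yes", "no" or "maybe", as in the game. Values are illustrative only.
records = [
    ("p1", "img1", "yes"), ("p2", "img1", "yes"), ("p3", "img1", "no"),
    ("p1", "img2", "no"),  ("p1", "img2", "no"),  ("p2", "img2", "maybe"),
]

# Crowd answer per image: simple majority of the definite ("yes"/"no") votes.
votes = defaultdict(Counter)
for player, image, answer in records:
    if answer != "maybe":
        votes[image][answer] += 1
crowd = {img: counts.most_common(1)[0][0] for img, counts in votes.items()}

# Group answers by player and by (player, image) pair.
by_player = defaultdict(list)
by_pair = defaultdict(list)
for player, image, answer in records:
    by_player[player].append((image, answer))
    by_pair[(player, image)].append(answer)

# Crowd agreement: share of a player's definite answers matching the majority.
crowd_agreement = {}
for player, answers in by_player.items():
    definite = [(img, a) for img, a in answers if a != "maybe" and img in crowd]
    if definite:
        crowd_agreement[player] = sum(a == crowd[img] for img, a in definite) / len(definite)

# Self-agreement: share of repeated (player, image) pairs answered consistently.
repeats = defaultdict(list)
for (player, image), answers in by_pair.items():
    if len(answers) > 1:
        repeats[player].append(len(set(answers)) == 1)
self_agreement = {p: sum(v) / len(v) for p, v in repeats.items()}

print(crowd_agreement)
print(self_agreement)
```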

Individual users’ agreement with the crowd and their self-agreement were highly positively correlated. The frequency with which a player admitted uncertainty about an image was a good measure of caution, showing a negative relationship with the rate of self-contradiction. Many users were more reliable at identifying either cropland or non-cropland, and these two skills were uncorrelated. Overall, user reliability increased with the number of images rated, although among the top decile of users this trend was reversed. Surprisingly, professional background had little influence on rating quality. We explore the implications of these results for assessing the potential benefits of user-contributed data in the context of differential user quality, and compare this with conventional data collection methods.
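As an illustration of the kind of comparison involved, the following sketch computes rank correlations between per-user summary metrics; the numbers and the use of Spearman’s rank correlation are assumptions for this example only, not the study’s data or exact method.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-user summary metrics (one value per player); illustrative only.
crowd_agreement    = np.array([0.92, 0.85, 0.78, 0.95, 0.66])
self_agreement     = np.array([0.94, 0.88, 0.75, 0.97, 0.70])
maybe_rate         = np.array([0.10, 0.05, 0.02, 0.12, 0.01])  # share of "maybe" answers
contradiction_rate = 1.0 - self_agreement                       # disagreement with oneself

# Rank correlations of the kind used to relate the quality metrics.
print(spearmanr(crowd_agreement, self_agreement))  # expected strongly positive
print(spearmanr(maybe_rate, contradiction_rate))   # expected negative
```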