The era of “crowd-sourced” geospatial data, or volunteered geographic information (VGI) as Michael Goodchild calls it, raises the question of how to rank the quality and relevance of this data on the Internet. The issue came up in the Edge:Mapping session at this year’s O’Reilly Web 2.0 Conference.
In an effort to catch up on some reading and viewing, I just watched this session, which brought together executives from Microsoft, Google and TeleAtlas and was moderated by Brady Forrest. Ranking geodata is much more complex than page ranking, which relies chiefly on the web’s link structure — how many pages link to a given page, and how authoritative those linking pages are.
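To see why link-based ranking is comparatively simple, here’s a minimal sketch of a PageRank-style power iteration. This is an illustration of the general idea only, not Google’s actual algorithm, and the example link graph is made up:

```python
# Minimal sketch of link-based page ranking (PageRank-style power
# iteration). Illustrative only -- not Google's production algorithm.
def page_rank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # every page gets a baseline share, plus link contributions
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical four-page web: "c" collects the most inbound links.
web = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = page_rank(web)
print(max(ranks, key=ranks.get))  # "c" -- most inbound link weight
```

The point is that the whole signal lives in the link graph itself. Geodata has no such self-describing structure: an algorithm would also have to judge whether the data matches the real world.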
In the geo realm, is it possible for algorithms to determine the quality and relevance of data that represents reality? Some sites are attempting a wiki approach — see OpenStreetMap. There are also a number of sites that allow massive geocoding of content without arbitration — see Flickr. The long-standing commercial approach is to have an organization (TeleAtlas, NAVTEQ) both collect data and combine it from various sources. Google Maps recently added user profiles, allowing a type of popularity ranking of an individual’s customized maps.
It’s clear that 2008 will be a year of map data explosion, particularly with the growing ubiquity of personal navigation devices (PNDs). I’m sure we’ll see more approaches to the geodata ranking problem from the major players as they continue to build out their digital realities.