There’s much investment today in large global modeling environments such as Google Earth, Microsoft Virtual Earth, ESRI’s ArcGIS Explorer and NASA’s World Wind. The prevailing wisdom in the marketplace seems to favor multiple competing globes for different purposes. Certainly the amount of data that could be spatially referenced and modeled is daunting, but what if a digital globe were instead promoted as a universal point of access and interface for all data types?
The idea is that a large global digital model would ingest any data or model format, retain all the intelligence of that format, and become a hub for any type of building, design or geospatial data. The hub would also act as a model-creation medium, where visualization details would be extracted and analysis tools would help derive intelligence from the combined data. A digital globe acting as an interoperability hub would resolve quite a few issues of redundant data collection, and would also improve our knowledge of our planet by allowing cross-analysis of multidisciplinary data and models.
Getting consensus on such a utopian concept is tough to imagine, given the competing business interests of the many players involved, but it’s a worthy concept for discussion. The Digital Earth as originally envisioned certainly didn’t encompass multiple competing Earths.
No Standard Format
Multiple geospatial interoperability approaches exist today that provide somewhat effective solutions to the disparate-systems problem. Intermediate formats and tools allow information to be transferred from one system to another, but key intelligence about the data often gets left behind. Export a parametric building model to a neutral geometry format, for example, and the shapes survive while the rules and relationships that made the model intelligent are lost.
If we got to the point of a universal repository of all data types, there would be no need to worry about data format. The centralized repository approach would require a certain amount of translation and manipulation to provide a base visualization capability. Allowing the ingestion of any data type would enable more rapid build-out of the model, giving data creators the ability to share what they have without time-consuming efforts to adhere to a specific import format. Such a model would also help preserve the tools and workflows that created the data in the first place.
The hub approach would require different tools to extract and manipulate meaning from the central model, but it would place no barriers on visualizing the model. This way, there’s no standard format and no vendor lock on the model for any specific type of data. The result would be a model open for viewing by all, but with data extraction, creation and manipulation priced at a premium.
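To make the ingestion idea concrete, here’s a minimal sketch in Python (every name in it is hypothetical, not a reference to any existing system): the hub stores each contributor’s native file byte-for-byte, so no format intelligence is lost, while an optional per-format translator derives only the lightweight geometry needed for base visualization.

from dataclasses import dataclass, field

@dataclass
class HubRecord:
    # One contribution to the hub; the native payload is kept untouched.
    source_format: str        # e.g., "ifc" or "citygml" (illustrative values)
    native_payload: bytes     # original data, byte-for-byte; nothing is lost
    base_geometry: list = field(default_factory=list)  # derived, viewing-grade

TRANSLATORS = {}  # format name -> function deriving viewing-grade geometry

def register_translator(fmt, fn):
    # Communities or vendors contribute a viewer-grade translator per format.
    TRANSLATORS[fmt] = fn

def ingest(fmt, payload):
    # Accept any data type: translation is optional, storage never is.
    record = HubRecord(source_format=fmt, native_payload=payload)
    fn = TRANSLATORS.get(fmt)
    if fn is not None:
        record.base_geometry = fn(payload)  # lossy copy, for display only
    return record

The point of the split is that the model can always be viewed through the derived geometry, while the full intelligence stays in the native payload for format-aware tools to extract later.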
Much of the geospatial and design software business is built around proprietary data formats. Rather complex data models, specific to individual vendors’ toolsets, have advanced over time. These data models are tied directly to how the data can be manipulated and analyzed within the specific tools, and they provide standard workflows that add intelligence to raw observations. A centralized model needn’t throw out individual tools and markets; the centralized collection of specific data types would become separate marketplaces for both data creators and vendors of tools for those data types.
The central model would provide a showcase for individual data capture and manipulation tools, while still maintaining these separate markets. On the data creation side, surveyors could retain a certain amount of ownership of the detailed information that they’ve collected for endeavors that require defensible legal descriptions of property. Designers of buildings and infrastructure would retain a measure of intellectual property that could reward them for any reuse of their designs. Spatial analysts and scientists would have access to rich data stores for the development of algorithms and intelligent models that they could sell as services to decision makers. Software toolmakers would retain hooks into the manipulation of their proprietary data types, and would be incentivized to develop better tools by the creation of a shared market.
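Continuing in the same hypothetical vein, a short sketch shows how toolmakers could retain those hooks: anyone can view the derived geometry, but extracting full intelligence from a native payload goes through whatever tool the format’s vendor registers, and could therefore be priced at a premium.

EXTRACTORS = {}  # format name -> vendor-supplied, format-aware extractor

def register_extractor(fmt, tool):
    # The vendor keeps its hook: only its tool understands the native data.
    EXTRACTORS[fmt] = tool

def view(base_geometry):
    # Open path: the derived geometry requires no format knowledge at all.
    return base_geometry

def extract(fmt, native_payload):
    # Premium path: full intelligence requires the registered vendor tool.
    tool = EXTRACTORS.get(fmt)
    if tool is None:
        raise LookupError(f"no registered tool for format {fmt!r}")
    return tool(native_payload)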
The issue of model ownership would certainly be at play, with the need for an outside entity to administer the data-amalgamation task and ensure that multiple interests are served. Ideally, digital city models would be owned and operated by individual cities as an extension of current planning and infrastructure-management departments. A public/private arrangement for the computing infrastructure, similar to the Internet Service Provider (ISP) model, could work well for creating and maintaining the computing power that drives the models.
At the regional scale, the models would have inputs and maintenance from states and cross-border entities such as watershed authorities. The combined model would foster greater regional collaboration on issues of common interest, such as transportation and natural-disaster mitigation and management. Ownership of regional data would require a cooperative approach, but there are already many good examples of such cooperation in the geospatial world.
Inputs into a combined model at the global scale would work best for large scientific datasets. Environmental, weather, biodiversity and energy data is already being collected and analyzed at the global scale, and such a repository would allow analysts to draw inferences across these and other metrics. A scientific purpose would be much easier to justify than the inclusion of politically sensitive information such as economic data and anything related to security.
The promise of integrated models for a holistic approach to land-use management is currently hampered by separate modeling environments and separate worlds. Current global visualization offerings are well suited to communicating multidisciplinary geospatial data, but they offer no tools for collaboration and data sharing across toolsets. A model acting as an interoperability hub would resolve these issues and provide a forum for collectively increasing our knowledge of our complex world.
City and regional models are now being built with increasing fidelity, and they would benefit from more inputs and a broader scope. The concept of the integrated model could work at this scale and then be built out over time. Discipline-specific digital worlds are likely a precursor to any large-scale multidisciplinary effort.
The timeframe to build such a complex and integrated model would certainly be a decade or more, and the political complexity of such a global effort may put the concept out of reach. But why not just one global modeling environment? After all, there’s just one Internet.
Read what Jeff Thurston has to say on this topic here.