David Thau, senior developer advocate for Google Earth Engine, gave the keynote speech at this morning’s ASPRS Annual Meeting. The talk began with a discussion of Stewart Brand’s campaign to see a picture of the whole Earth from space, which has been credited with kicking off Earth Day, the ecology movement, global politics, and a greater awareness of remote sensing. Google believes in the power of this greater Earth awareness, and aims to bring geographic literacy to everyone.
Thau discussed the many advances that have taken place across the full range of remote sensing technologies.
In image acquisition we’re seeing gigapixel images, such as those from the GigaPan camera, and we’re starting to see terapixel images that you can zoom into in great detail. This advance comes from combining high-resolution cameras, many images at different zoom levels, and computer stitching that merges all the scenes and allows full exploration. Capturing remote sensing imagery has been expensive, but today, with drones, kites, and balloons, more people are able to capture and analyze imagery. While this low-end technology is democratizing remote sensing, there is also a lot of work taking place at the high end of professional capture. The Carnegie Airborne Observatory is at the forefront of high-end imagery, with a combination of sensors that captures and reveals new levels of forestry information.
Image processing has come a long way as well, but it faces challenges because more data arrive every day. Processing in the cloud is the focus of Google Earth Engine, a remote sensing data analysis platform with data from multiple satellites and sensors. At this point it holds 1.3 million Landsat 5 and 7 scenes, along with MODIS, SRTM, and processed data shared by researchers via an open platform. The platform provides on-demand processing capacity and the ability to run analyses in batch mode.
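The batch-mode idea, applying the same analysis across many scenes in parallel, can be sketched generically. This is not the Earth Engine API; the scene IDs and the per-scene `cloud_fraction` function below are invented stand-ins for a real per-scene computation.

```python
from concurrent.futures import ThreadPoolExecutor

def cloud_fraction(scene_id):
    """Hypothetical per-scene analysis; a real job would read and
    process the imagery for this scene. Here we just derive a
    deterministic number from the ID so the sketch is runnable."""
    return (sum(ord(c) for c in scene_id) % 100) / 100.0

# Hypothetical scene identifiers standing in for Landsat scenes.
scene_ids = ["LT05_scene_%04d" % i for i in range(8)]

# Fan the same analysis out over a pool of workers, then gather
# the per-scene results into one table.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(zip(scene_ids, pool.map(cloud_fraction, scene_ids)))

print(len(results))  # 8
```

The design point is simply that each scene is processed independently, which is what lets a cloud platform scale the same analysis from one scene to more than a million.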
Earth Engine Classify is a cloud-based resource that Thau demonstrated, in which you can add datasets from the various sensors and overlay them on the scene with classification algorithms that let you classify for such things as forest and non-forest. The classifier runs the analysis on the fly, processing just the imagery that appears on the screen. With access to many processors via the cloud, more people can do sophisticated imagery analysis at large scale, even in countries where computing and bandwidth resources are a constraint.
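The forest/non-forest classification in the demo can be illustrated with a minimal sketch. This is a toy nearest-centroid classifier, not Earth Engine’s actual algorithm; the class names, band values, and training samples are invented for illustration.

```python
def centroid(samples):
    """Mean feature vector of a list of (red, nir) reflectance pairs."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(2))

def classify(pixel, centroids):
    """Assign a pixel to the class whose centroid is nearest
    (by squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(pixel, centroids[label]))

# Hypothetical training samples: (red, near-infrared) reflectance pairs.
# Vegetation reflects strongly in the near infrared, so forest pixels
# cluster at low red / high NIR.
training = {
    "forest":     [(0.05, 0.45), (0.06, 0.50), (0.04, 0.40)],
    "non-forest": [(0.20, 0.25), (0.25, 0.30), (0.18, 0.22)],
}
centroids = {label: centroid(samples) for label, samples in training.items()}

# Classify only the pixels currently "on screen", as the demo does.
scene = [(0.05, 0.48), (0.22, 0.26), (0.06, 0.42)]
labels = [classify(p, centroids) for p in scene]
print(labels)  # ['forest', 'non-forest', 'forest']
```

Classifying only the visible pixels is what keeps the demo interactive: the expensive full-scene computation is deferred until the user actually asks for it.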
With the open cloud computing platform, people can see the algorithms and data that produced the results. Both the data and the algorithms are versioned, which lets you go back to understand how the products were produced, and swap in newer algorithms to test them and enhance the results.
Such large datasets have been difficult to visualize, but with today’s computers and optimized browsers we can now view rendered imagery on the fly, without a round trip to a server. Google is now rendering new vectors on the fly as well, such as 3D models with shadows based on the time of day.
Google Earth Engine is working to democratize environmental monitoring to help inform policy, and to expand the communication capabilities of researchers worldwide.