Experiences developing an operational workflow for large-scale instance and semantic segmentation of remote sensing imagery using CNNs

Jan Schindler1, Brent Martin2, Alexander Amies2, Ben Jolly3 and David Pairman2

1 Manaaki Whenua - Landcare Research, Wellington
2 Manaaki Whenua - Landcare Research, Lincoln
3 Manaaki Whenua - Landcare Research, Palmerston North

Convolutional Neural Network (CNN) architectures offer unprecedented opportunities for improved
mapping of land cover, land cover change, and individual features in satellite and aerial imagery
compared with traditional machine learning algorithms (e.g., Random Forests). We have used
state-of-the-art CNN architectures, Mask R-CNN for instance segmentation of objects and ResUNet
for semantic segmentation of land cover, and have developed an operational workflow for mapping
exercises at the local, regional, and national scales.

While a plethora of open-source deep learning frameworks exist, they usually target non-spatial
imagery, small case studies, or reference datasets, and cannot be applied to specific data archives
and HPC infrastructures without major rework.

We have developed a fit-for-purpose, reusable software pipeline that runs both on the NeSI HPC and
on local PCs with GPU support. It allows flexible training from geospatial layers and large volumes
of remote sensing imagery stored in the efficient KEA file format, keeps inode usage to a minimum,
and includes cross-validation visualization and tile-based prediction routines.
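To illustrate the tile-based prediction idea, the sketch below computes overlapping read windows that cover a large raster, so that per-tile model outputs can have their edge artefacts cropped before stitching. This is a minimal, hypothetical helper written for this abstract, not the pipeline's actual API; in practice the windows would drive windowed reads of KEA imagery (e.g., via GDAL) and per-tile CNN inference.

```python
def tile_windows(width, height, tile=512, overlap=64):
    """Return (x, y, w, h) windows covering a width x height raster.

    Hypothetical sketch of tile-based prediction bookkeeping:
    adjacent windows overlap by `overlap` pixels so that border
    artefacts from the CNN can be discarded when stitching tiles
    back into a seamless prediction map.
    """
    step = tile - overlap  # stride between tile origins
    windows = []
    for y in range(0, max(height - overlap, 1), step):
        for x in range(0, max(width - overlap, 1), step):
            # Clip the last row/column of tiles to the raster extent.
            w = min(tile, width - x)
            h = min(tile, height - y)
            windows.append((x, y, w, h))
    return windows
```

Each window would be read, pushed through the trained model, and written back with the overlap region cropped; the same decomposition also parallelises naturally across HPC jobs.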

The pipeline evolved while researching these techniques for a range of environmental applications,
including mapping urban trees and forests, forest destocking, landslides, and historic urban green
spaces. We discuss our experience with taking this agile approach.
