NeMO-Net – The Neural Multi-Modal Observation & Training Network for Global Coral Reef Assessment
NeMO-Net uses fully convolutional networks based on ResNet and RefineNet to perform semantic segmentation of remote sensing imagery of shallow marine systems captured by drones, aircraft, and satellites, including WorldView and Sentinel. Deep Laplacian Pyramid Super-Resolution Networks (LapSRN) are used alongside Domain Adversarial Neural Networks (DANNs) to reconstruct high-resolution information from low-resolution imagery and to learn domain-invariant features across datasets from multiple platforms, achieving high classification accuracies that overcome inter-sensor spatial, spectral, and temporal variations.
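The core of the DANN approach is a gradient reversal layer: the forward pass passes features through unchanged to a domain classifier, while the backward pass flips the sign of the domain-loss gradient so the shared feature extractor is pushed toward domain-invariant representations. A minimal sketch of that mechanism, in NumPy rather than NeMO-Net's actual codebase (the function names and the scaling parameter `lam` are illustrative assumptions):

```python
import numpy as np

# Hypothetical sketch of a DANN-style gradient reversal layer, not
# NeMO-Net's actual implementation.

def grad_reverse_forward(features):
    # Forward pass is the identity: the domain classifier sees the
    # shared features unchanged.
    return features

def grad_reverse_backward(grad_from_domain_head, lam=1.0):
    # Backward pass negates (and scales by lam) the domain-loss
    # gradient before it reaches the shared feature extractor, so the
    # extractor learns features the domain classifier CANNOT use to
    # tell sensors/platforms apart.
    return -lam * grad_from_domain_head

# Toy demonstration with a logistic domain classifier p = sigmoid(w . f):
rng = np.random.default_rng(0)
f = rng.normal(size=4)    # shared features for one image tile
w = rng.normal(size=4)    # domain-classifier weights
y = 1.0                   # true domain label (e.g. 1 = satellite)

p = 1.0 / (1.0 + np.exp(-w @ grad_reverse_forward(f)))
grad_f = (p - y) * w                          # d(BCE)/d(features)
grad_to_extractor = grad_reverse_backward(grad_f, lam=0.5)
```

In training, `grad_to_extractor` (rather than `grad_f`) would be backpropagated into the feature extractor, making the segmentation and domain objectives adversarial.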
Finally, we share our online active learning and citizen science platform, which lets users interactively provide training data for NeMO-Net in 2D and 3D within an integrated deep learning framework. We present results from the Pacific Islands, including Fiji, Guam, and Peros Banhos, where 24-class classification accuracy exceeds 91%.
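A common way to drive such an active learning loop is uncertainty sampling: route the image tiles whose class predictions are most uncertain to citizen scientists for labeling. The sketch below illustrates one standard selection rule (predictive entropy); the function names and the sampling criterion are assumptions for illustration, not NeMO-Net's documented pipeline:

```python
import numpy as np

# Hypothetical active-learning selection step: rank tiles by the
# entropy of their softmax class distribution and pick the top k
# for human labeling.

def predictive_entropy(probs):
    # probs: (n_tiles, n_classes) softmax outputs.
    # Returns the per-tile Shannon entropy in nats; higher = less certain.
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_for_labeling(probs, k):
    # Indices of the k most uncertain tiles, most uncertain first.
    return np.argsort(-predictive_entropy(probs))[:k]
```

Each round, the freshly labeled tiles are folded back into the training set and the segmentation network is fine-tuned, concentrating human effort where the model is weakest.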