Image-Based Mapping and Semantic Segmentation for Depiction of Coral Reef Community Structure
Abstract:
The natural, social, and economic value of coral reefs positions them as one of the most intriguing ecosystems on the planet. Biodiversity is a major feature of these systems, however, the composition of species in the reef community varies over many spatial and temporal scales. Hence, the systematic research of coral reef community structure is limited by the available technologies. To address this shortcoming, we developed a method that is swift, cost-effective, and non-intrusive, based on underwater photogrammetry and semantic segmentation for automatic benthic classification of image-based maps.
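To illustrate the pixel-wise classification step, the following is a minimal sketch only, not the authors' implementation: it runs an off-the-shelf segmentation network (torchvision's FCN-ResNet50, used here as a stand-in architecture) on one tile cropped from a photomosaic. The class list and tile size are illustrative assumptions.

```python
# Sketch: per-pixel benthic classification of one photomosaic tile.
# The class names, tile size, and network choice are assumptions, not
# the method described in the paper.
import torch
from torchvision.models.segmentation import fcn_resnet50

BENTHIC_CLASSES = ["background", "live_coral", "dead_coral", "algae", "sand"]

model = fcn_resnet50(weights=None, num_classes=len(BENTHIC_CLASSES))
model.eval()

# One RGB tile from an orthorectified mosaic (batch, channels, height, width).
tile = torch.rand(1, 3, 512, 512)

with torch.no_grad():
    logits = model(tile)["out"]       # shape: (1, num_classes, 512, 512)
    label_map = logits.argmax(dim=1)  # per-pixel class indices

print(label_map.shape)                # torch.Size([1, 512, 512])
```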
The natural, social, and economic value of coral reefs positions them as one of the most intriguing ecosystems on the planet. Biodiversity is a major feature of these systems; however, the composition of species in the reef community varies over many spatial and temporal scales, and the systematic study of coral reef community structure is therefore limited by the available technologies. To address this shortcoming, we developed a swift, cost-effective, and non-intrusive method based on underwater photogrammetry and semantic segmentation for the automatic benthic classification of image-based maps.
In contrast to previous works, we present a technique for the global orthorectification of photomosaics without the aid of navigation sensors or permanent markers, allowing deeper surveys without compromising accuracy. Furthermore, rather than point sampling, the common approach to benthic image analysis, we employ label augmentation and deep learning to predict every pixel in an image with minimal labeling effort. Features such as genus richness, neighbor relations, size-frequency distribution, and percent live cover are then automatically extracted from the segmented maps.
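Once a per-pixel label map exists, community metrics follow from simple array operations. The sketch below, which assumes hypothetical genus indices and a hypothetical pixel-to-area scale, shows how percent live cover, genus richness, and a colony size-frequency distribution could be derived from such a map; it is an illustration, not the paper's pipeline.

```python
# Sketch: extracting community metrics from a segmented map.
# GENUS_IDS and CM2_PER_PIXEL are assumed placeholder values.
import numpy as np
from scipy import ndimage

label_map = np.random.randint(0, 5, size=(512, 512))  # stand-in segmentation
GENUS_IDS = [1, 2, 3]        # label indices treated as coral genera (assumed)
CM2_PER_PIXEL = 0.25         # ground resolution of the orthomosaic (assumed)

# Percent live cover: fraction of pixels assigned to any coral genus.
live_mask = np.isin(label_map, GENUS_IDS)
percent_live_cover = 100.0 * live_mask.mean()

# Genus richness: number of genera present in the map.
genus_richness = sum(int((label_map == g).any()) for g in GENUS_IDS)

# Size-frequency distribution: connected components per genus -> colony areas.
colony_areas_cm2 = []
for g in GENUS_IDS:
    components, n_colonies = ndimage.label(label_map == g)
    sizes = np.bincount(components.ravel())[1:]        # pixels per colony
    colony_areas_cm2.extend(sizes * CM2_PER_PIXEL)

print(percent_live_cover, genus_richness, len(colony_areas_cm2))
```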
This is implemented repeatedly across a depth gradient to study the distribution patterns and dynamics of scleractinian corals in shallow and mesophotic reefs.
Given the ease of generating underwater image datasets, and with the emergence of photogrammetry as a popular method for benthic habitat depiction, such an extensive and automatic image-analysis framework is of high interest to research and monitoring groups, non-governmental organizations, and policymakers. Furthermore, by providing a comprehensive view of the benthos, this methodology can be applied in conservation, management, and restoration, as well as in fundamental research, where it makes it possible to test new hypotheses and revisit paradigms regarding the spatial organization and distribution of reef organisms.