Investigation of 3D, 4D, and hybrid automated methods to expedite high-precision coral segmentation

Hugh Runyan1, Vid Petrovic2, Nicole Pedersen1, Clinton Brook Edwards1, Stuart A Sandin1 and Falko Kuester2, (1)Scripps Institution of Oceanography, UC San Diego, La Jolla, CA, United States, (2)California Institute for Telecommunications and Information Technology, UC San Diego, La Jolla, CA, United States
Abstract:
It is now possible to create photorealistic digital 3D and 4D models of benthic environments, literally to “bring the reef back to the lab.” The creation of these digital surrogates promises to transform the study of underwater ecosystems: to democratize data acquisition and sharing beyond what satellite and aerial imagery have contributed to terrestrial ecology. In a grass-roots effort, our team has surveyed nearly 2,000 coral reef plots across the world using underwater photogrammetry techniques that capture geospatial and temporal information, resulting in a massive data repository along with the human and computational resources needed to store, transfer, and interpret these data assets. Manual semantic segmentation (defined as per-point, per-genus labeling) of a single square meter requires an hour; at this rate, an expert working full time would need at least a century to map every organism in the existing database, which will continue to expand rapidly. A decade of field expeditions to a single Pacific atoll comprises ~20% of our stored data; orders of magnitude more would be required to provide coral researchers with the globally comprehensive database we desire. While emerging machine learning techniques hold great promise for expedited processing of raw field data, a broad range of challenges remains. We will compare 2D segmentation neural networks and their 3D extensions, and present a novel hybrid approach that operates on both high-resolution images and the derivative 3D/4D models.
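
As an illustrative sketch only (not the authors' method), one way a hybrid pipeline can combine 2D and 3D information is to run a 2D segmentation network on the high-resolution survey images, project its per-pixel class probabilities onto the photogrammetric point cloud using the known camera poses, and aggregate them into a per-point genus label. The example below is a minimal Python/NumPy sketch under assumed inputs (pinhole intrinsics K, per-view rotation R and translation t, and precomputed probability maps); occlusion handling is omitted for brevity, and all names are hypothetical.

    # Illustrative sketch: fuse per-pixel class probabilities from a 2D
    # segmentation network into per-point labels on a 3D point cloud,
    # assuming camera intrinsics/extrinsics recovered by photogrammetry.
    import numpy as np

    def project_points(points, K, R, t):
        """Project Nx3 world points into pixel coordinates for one camera."""
        cam = points @ R.T + t            # world frame -> camera frame
        z = cam[:, 2]                     # depth along the optical axis
        uv = cam @ K.T                    # homogeneous pixel coordinates
        uv = uv[:, :2] / uv[:, 2:3]
        return uv, z

    def fuse_labels(points, views, num_classes, img_h, img_w):
        """Average 2D class probabilities over all views that see each point."""
        probs = np.zeros((len(points), num_classes))
        counts = np.zeros(len(points))
        for K, R, t, prob_map in views:   # prob_map: (img_h, img_w, num_classes)
            uv, z = project_points(points, K, R, t)
            u = np.round(uv[:, 0]).astype(int)
            v = np.round(uv[:, 1]).astype(int)
            # Keep points in front of the camera and inside the image;
            # note this naive visibility test ignores occlusion.
            visible = (z > 0) & (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
            probs[visible] += prob_map[v[visible], u[visible]]
            counts[visible] += 1
        counts = np.maximum(counts, 1)    # avoid division by zero for unseen points
        return np.argmax(probs / counts[:, None], axis=1)   # per-point class index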