Leveraging Unsupervised Methods to Train Image Classification Networks with Fewer Labelled Inputs: Application to Species Classification of Phytoplankton Imagery from an Imaging Flow Cytometer

Emmett Culhane, Yale University, New Haven, CT, United States, Nils Haentjens, University of Maine, Orono, ME, United States, Alison P Chase, University of Maine, School of Marine Science, Orono, ME, United States, Peter Gaube, Applied Physics Laboratory at the University of Washington, Air-Sea Interaction and Remote Sensing, Seattle, WA, United States and Jason Morrill, University of Maine, Orono, ME, United States
In-situ imaging technologies like imaging flow cytometers are becoming essential tools for the study of marine populations. This has created a corresponding need to automate image identification methods. Supervised models like convolutional networks have shown great promise in this context, though they often require many labeled inputs to achieve state-of-the-art performance. The relatively slow process of manual image annotation can therefore create a bottleneck in the analysis of these data. Here we evaluate the capability of several recent unsupervised and semi-supervised algorithms to reduce the number of labeled inputs required to train high-performance image classification models. This question is explored through the use of deep convolutional neural networks to classify images of phytoplankton taken by an imaging flow cytometer. Towards this end, we demonstrate how non-linear dimensionality reduction techniques based on heat-diffusion processes can be used to denoise labeled inputs and improve data quality. Further, we consider two semi-supervised network architectures that take advantage of unlabeled inputs through consistency training. These model architectures allow information to propagate smoothly from labeled to unlabeled examples during model training through the addition of an unsupervised consistency loss penalty. This work highlights the potential of semi-supervised and unsupervised algorithms to reduce the number of labeled examples needed to train image classification models. In particular, we address the application of unsupervised methods both to improve the quality of labeled data and to leverage latent information in unlabeled data through consistency training.
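The consistency-training objective described above can be illustrated with a minimal sketch. This is not the authors' exact formulation: it assumes a simple linear classifier in place of a convolutional network, a Gaussian input perturbation, and an illustrative weight `lam` on the consistency term. The idea it demonstrates is the same, however: a supervised cross-entropy loss on the labeled batch plus an unsupervised penalty that pushes predictions on unlabeled inputs to agree with predictions on perturbed copies of those inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def semi_supervised_loss(W, x_lab, y_lab, x_unlab, noise_scale=0.1, lam=1.0):
    """Supervised cross-entropy plus an unsupervised consistency penalty.

    W         : (n_features, n_classes) weights of a toy linear classifier
    x_lab     : (n_lab, n_features) labeled inputs
    y_lab     : (n_lab,) integer class labels
    x_unlab   : (n_unlab, n_features) unlabeled inputs
    lam       : weight on the consistency term (illustrative assumption)
    """
    # Supervised term: cross-entropy on the labeled batch.
    p_lab = softmax(x_lab @ W)
    ce = -np.log(p_lab[np.arange(len(y_lab)), y_lab] + 1e-12).mean()

    # Consistency term: predictions on unlabeled inputs should match
    # predictions on randomly perturbed copies of the same inputs, which
    # lets information flow smoothly from labeled to unlabeled examples.
    x_pert = x_unlab + noise_scale * rng.standard_normal(x_unlab.shape)
    p_clean = softmax(x_unlab @ W)
    p_pert = softmax(x_pert @ W)
    consistency = ((p_clean - p_pert) ** 2).sum(axis=1).mean()

    return ce + lam * consistency
```

In practice the two terms are computed on each mini-batch and the combined loss is backpropagated through the shared network, so unlabeled images contribute a gradient signal even though they carry no class label.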