Modern robotics and imaging technology enable rapid collection of large coral reef image surveys. However, the subsequent image annotation required to extract data for ecological analysis is time-consuming and expensive, creating a ‘manual annotation bottleneck’ between the collected imagery and the data required. We present an algorithmic framework and a collaborative platform to address this bottleneck.
The CoralNet (coralnet.ucsd.edu) image annotation platform has been freely available for two years. During this time, more than 170,000 coral reef survey images have been uploaded as part of 288 reef surveys and annotated with more than four million point annotations by 347 coral reef experts. Through the CoralNet interactive annotation tool, users already leverage automated image analysis to reduce the manual annotation workload by 50%.
Using this wealth of data, we have developed next-generation automated annotation methods based on deep neural networks with 138 million parameters organized in 16 convolutional layers. With these networks, we can further reduce the annotation burden so that, on average, only 20% of the annotation work remains for the human annotator. Remarkably, for surveys conducted with sufficient image quality, fully automated annotation is possible with no reduction in annotation quality compared with manual annotation.
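A common way to realize this kind of human–machine split (a sketch under assumed details; the abstract does not specify CoralNet's exact mechanism) is confidence thresholding: point annotations where the classifier is confident are accepted automatically, and only the uncertain remainder is routed to the human annotator. The threshold and data below are illustrative, not from CoralNet.

```python
def split_annotations(predictions, threshold=0.8):
    """Auto-accept machine labels whose confidence meets the threshold;
    route the rest to a human annotator.

    `predictions` is a list of (label, confidence) pairs, one per
    annotation point. Returns (auto_accepted, needs_human) label lists.
    """
    auto, manual = [], []
    for label, conf in predictions:
        (auto if conf >= threshold else manual).append(label)
    return auto, manual

# Synthetic example: ten annotation points with made-up confidences.
preds = [("coral", 0.95), ("sand", 0.60), ("algae", 0.91), ("coral", 0.99),
         ("rock", 0.72), ("coral", 0.88), ("sand", 0.93), ("algae", 0.55),
         ("coral", 0.97), ("rock", 0.85)]
auto, manual = split_annotations(preds, threshold=0.8)
print(f"{len(manual)}/{len(preds)} points left for the human annotator")
# → 3/10 points left for the human annotator
```

Raising the threshold trades annotation savings for quality: a stricter cutoff sends more points to the human but accepts fewer machine errors.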
As more data is uploaded and verified, the CoralNet vision back-end will continue to evolve, creating a positive feedback loop that encourages further collaboration and enables rapid and accurate image-based reef surveys.
About the Speaker
Dr Oscar Beijbom is undertaking a joint post-doc at the Global Change Institute and the Berkeley Vision and Learning Center, where he works on automated quantification of scientific image-data using deep learning. Oscar’s work is jointly supervised by Trevor Darrell at UC Berkeley and Ove Hoegh-Guldberg at The University of Queensland. As part of his work with the Global Change Institute, Oscar is developing CoralNet, deploying deep convolutional neural networks for automated annotation of the XL Catlin Seaview Survey images.
Before this, Oscar studied computer vision and machine learning at UCSD under David Kriegman and Serge Belongie, and engineering physics at Lund University under Kalle Åström.
Outside academia, he was lead developer at Hövding, where he created the algorithmic framework and hardware design for their invisible bicycle helmet. He has also worked on automated dietary logging systems for consumer applications and on focusing algorithms for image-based cell analysis.