Abstract

We introduce a simple mechanism by which a CNN trained to perform semantic segmentation of individual images can be re-trained, with no additional annotations, to improve its performance on video segmentation. We place the segmentation CNN in a Siamese setup with shared weights and train it both for segmentation accuracy on annotated images and for segmentation similarity on unlabelled consecutive video frames. Our main application is live microscopy imaging of membrane-less organelles, where the fluorescent ground truth for virtual staining can only be acquired for individual frames. The method is directly applicable to other microscopy modalities, as we demonstrate by experiments on the Cell Segmentation Benchmark. Our code is available at https://github.com/kreshuklab/learning-temporal-consistency.
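The combined objective described above (supervised segmentation loss on annotated images plus a Siamese consistency loss on unlabelled consecutive frames, both branches sharing the same weights) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the per-pixel linear "network", the mean-squared consistency term, and the weighting factor `lam` are all assumptions made for the sake of the example.

```python
import numpy as np

def softmax(logits, axis=-1):
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def segment(image, weights):
    # Stand-in for the segmentation CNN: a per-pixel linear classifier.
    # (H, W, C_in) @ (C_in, C_out) -> per-pixel class probabilities.
    return softmax(image @ weights)

def supervised_loss(probs, labels):
    # Per-pixel cross-entropy on an annotated image.
    h, w = labels.shape
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -np.mean(np.log(picked + 1e-12))

def consistency_loss(probs_t, probs_t1):
    # Siamese branch: penalise disagreement between predictions on
    # consecutive, unlabelled frames (both made with the SAME weights).
    return np.mean((probs_t - probs_t1) ** 2)

rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 2))            # shared weights of both branches
labeled_img = rng.normal(size=(8, 8, 3))     # annotated training image
labels = rng.integers(0, 2, size=(8, 8))     # its per-pixel annotation
frame_t = rng.normal(size=(8, 8, 3))         # unlabelled video frame t
frame_t1 = frame_t + 0.05 * rng.normal(size=(8, 8, 3))  # frame t+1

l_sup = supervised_loss(segment(labeled_img, weights), labels)
l_cons = consistency_loss(segment(frame_t, weights),
                          segment(frame_t1, weights))
lam = 1.0                                    # assumed consistency weight
total = l_sup + lam * l_cons
```

In an actual training loop the gradient of `total` would be backpropagated through both Siamese branches; because the weights are shared, the consistency term needs no annotations on the video frames.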

Competing Interest Statement

The authors have declared no competing interest.

Details

Title
Unsupervised temporal consistency improvement for microscopy video segmentation with Siamese networks
Author
Shabanov, Akhmedkhan; Schichler, Daja; Pape, Constantin; Cuylen-Haering, Sara; Kreshuk, Anna
University/institution
Cold Spring Harbor Laboratory Press
Section
New Results
Publication year
2021
Publication date
Mar 25, 2021
Publisher
Cold Spring Harbor Laboratory Press
Source type
Working Paper
Language of publication
English
ProQuest document ID
2505038126
Copyright
© 2021. This article is published under http://creativecommons.org/licenses/by/4.0/ (“the License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.