Abstract
Fluorescence microscopy, a key driver of progress in the life sciences, faces limitations imposed by the microscope's optics, fluorophore chemistry, and photon exposure limits, necessitating trade-offs between imaging speed, resolution, and depth. Here, we introduce MicroSplit, a computational multiplexing technique based on deep learning that images multiple cellular structures in a single fluorescent channel and then computationally unmixes them, enabling faster imaging and reduced photon exposure. We show that MicroSplit efficiently separates up to four superimposed noisy structures into distinct denoised fluorescent image channels. Furthermore, using Variational Splitting Encoder-Decoder (VSE) networks, our approach can sample diverse predictions from a trained posterior of solutions. The diversity of these samples scales with the uncertainty in a given input, allowing us to estimate the true prediction errors by computing the variability between posterior samples. We demonstrate the robustness of MicroSplit across various datasets and noise levels and show its utility for imaging more structures, imaging faster, and improving downstream analysis. We provide MicroSplit along with all associated training and evaluation datasets as open resources, enabling life scientists to immediately benefit from computational multiplexing and thus accelerate their scientific discovery.
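The abstract's uncertainty estimate rests on a simple idea: draw several samples from the learned posterior and measure how much they disagree. The sketch below is a hypothetical illustration of that statistic, not the authors' implementation; the sample array stands in for the outputs a trained VSE network would produce for one input image.

```python
import numpy as np

# Hypothetical illustration of posterior-sample variability as an
# uncertainty proxy. In practice, `samples` would hold K unmixed-channel
# predictions drawn from a trained VSE network for a single noisy input;
# here random data stands in for them.
rng = np.random.default_rng(0)
K, H, W = 8, 64, 64                # number of posterior samples, image size
samples = rng.normal(loc=1.0, scale=0.1, size=(K, H, W))

prediction = samples.mean(axis=0)  # consensus unmixed channel (H, W)
uncertainty = samples.std(axis=0)  # pixel-wise disagreement between samples

print(prediction.shape, uncertainty.shape)
```

Regions where the posterior samples diverge most (high `uncertainty`) are exactly those where the abstract suggests the true prediction error is likely to be largest.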
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
* https://github.com/CAREamics/MicroSplit-reproducibility