virtual microscopy
Deep learning has great potential for decision support and automation in clinical and diagnostic pathology. However, obtaining large amounts of training data is hampered by the need for manual annotation by experienced pathologists, which is labour-intensive, tedious and expensive. At the same time, computational simulation of tissues can generate fully controllable and self-annotated data, and is increasingly fast and cheap. Combining the two, we train deep convolutional generative adversarial networks (GANs) to perform image-to-image translation between images of simulated tissues and microscopy images of real tissues.
Specifically, we train conditional GANs (cGANs) or, when paired images are unavailable, CycleGANs, in which a generative network (a U-Net) learns to translate binary masks into artificial microscopy images, while a binary CNN classifier (the discriminator) learns to distinguish translated from real microscopy images. Once trained, this allows unlimited numbers of controlled and accurately annotated microscopy images to be created automatically using our open-source simulation environment Morpheus.
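The adversarial objective behind this mask-to-image translation can be sketched in a few lines. The following is a minimal, framework-free illustration of a pix2pix-style cGAN loss (adversarial term plus an L1 reconstruction term); all array shapes, the `l1_weight` value, and the function names are illustrative assumptions, not details of our implementation.

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy on discriminator output probabilities."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def cgan_losses(d_real, d_fake, fake_img, real_img, l1_weight=100.0):
    """Discriminator and generator losses for mask -> image translation.

    d_real / d_fake: discriminator probabilities for real and generated
    microscopy images; fake_img / real_img: the paired images, used for
    the L1 reconstruction term of the pix2pix objective.
    (Illustrative sketch only; not the project's actual training code.)
    """
    # Discriminator: label real images 1, translated (fake) images 0.
    d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))
    # Generator: fool the discriminator and stay close to the paired real image.
    g_loss = bce(d_fake, np.ones_like(d_fake)) \
        + l1_weight * np.mean(np.abs(fake_img - real_img))
    return d_loss, g_loss

# Toy batch of 4 "images" of size 8x8.
rng = np.random.default_rng(0)
d_real = rng.uniform(0.6, 0.9, size=4)   # discriminator fairly sure these are real
d_fake = rng.uniform(0.1, 0.4, size=4)   # ...and that these are translated
fake = rng.uniform(size=(4, 8, 8))
real = rng.uniform(size=(4, 8, 8))
d_loss, g_loss = cgan_losses(d_real, d_fake, fake, real)
```

In the CycleGAN case, the paired L1 term is unavailable and is replaced by a cycle-consistency loss between an image and its round-trip translation.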
These artificial microscopy data serve as training data to bootstrap machine-learning algorithms that automate and support the work of clinical pathologists.
collaborators
- Patrick Brosämle, MSc student, Department of Computer Science, TU Dresden.
- Center for Systems Biology Dresden