In this work, we explore image-to-image translation using Conditional Generative Adversarial Networks (cGAN) to convert digital tissue images from the brightfield to the immunofluorescence (IF) domain. A dataset of 149 tissue microarray (TMA) cores was stained using a multiplexed IF system for DAPI, Ribosomal S6, and NaKATPase. These TMA cores were subsequently stained with hematoxylin and eosin (H&E) and digitally scanned. Using registered pairs of H&E and IF images, a cGAN was trained to translate from the H&E to the IF domain for the DAPI, Ribosomal S6, and NaKATPase markers. The trained model was then evaluated by translating a set of holdout H&E samples, both from the original TMA dataset and from an independent prostate cancer H&E dataset (for which we do not have IF probes). The cGAN was evaluated quantitatively for our multiplexed TMA samples and qualitatively for the independent H&E dataset. We found that for the DAPI channel, the cGAN produces accurate samples but is unable to replicate the subtle pixel-intensity differences that characterize boundaries between nuclei. For the NaKATPase and Ribosomal S6 channels, the cGAN over-segmented extracellular matrix regions. On the holdout open-source H&E-stained prostate tissue dataset, the cGAN produced qualitatively acceptable results.
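For readers unfamiliar with the approach, the following is a minimal sketch of a pix2pix-style conditional GAN training step for H&E-to-IF translation in PyTorch. The abstract does not specify the architecture, losses, or hyperparameters used in the study; every module and constant below (TinyGenerator, TinyDiscriminator, lambda_l1) is a hypothetical stand-in.

# Minimal sketch of a pix2pix-style conditional GAN step for H&E -> IF
# translation. All names are hypothetical; the study's actual architecture
# and loss weights are not given in the abstract.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for a U-Net generator: 3-channel H&E in, 3-channel IF out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """PatchGAN-style discriminator conditioned on the H&E input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # per-patch logits
        )
    def forward(self, he, if_img):
        return self.net(torch.cat([he, if_img], dim=1))

G, D = TinyGenerator(), TinyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # pix2pix default; the study's weighting is an assumption

he, real_if = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)  # dummy batch

# Discriminator step: distinguish real (H&E, IF) pairs from generated ones.
fake_if = G(he).detach()
d_real, d_fake = D(he, real_if), D(he, fake_if)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool D while staying close to the real IF in L1.
fake_if = G(he)
d_fake = D(he, fake_if)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake_if, real_if)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()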
Hematoxylin and Eosin (H&E) is a widely used stain for diagnosis and prognosis in clinical pathology; it is a non-specific stain, binding to all cell types. Immunofluorescent (IF) staining is highly specific, binding only to targeted proteins in a sample to identify specific cellular and sub-cellular structures. IF imaging is costlier and more technically demanding than H&E, so it is rarely used in routine clinical workup, but it can be used to identify diagnostically significant cell types. In previous work, we used registered IF and H&E images to generate class labels for training a deep learning H&E segmentation algorithm. In this work, we leverage this dataset to train a Conditional Generative Adversarial Network (cGAN) to generate realistic-looking H&E images from IF images stained for DAPI and ribosomal S6. Using these generated images, we trained a semantic segmentation algorithm to identify nuclei, cytoplasm, and membrane classes, with class labels obtained by thresholding the original IF stains and transferring them to the generated H&E. The trained classifier was then used to segment a holdout dataset of real H&E images. We found that the semantic segmentation models trained on the generated H&E images (Dice score: 0.539) performed similarly to models trained on real H&E (Dice score: 0.503), suggesting that cGAN-generated samples can serve as a viable training set for deep learning models that are intended to be applied to real H&E data.
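The abstract reports Dice scores (0.539 vs. 0.503) without specifying the variant used; the sketch below shows a conventional macro-averaged Dice over integer label maps, which is an assumption about how such scores are typically computed.

# A sketch of a macro-averaged Dice score over class label maps. The exact
# averaging scheme used in the study is not stated; this version is an
# assumption.
import numpy as np

def dice_score(pred, target, num_classes, eps=1e-7):
    """Macro-average Dice over classes for integer label maps."""
    scores = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        inter = np.logical_and(p, t).sum()
        denom = p.sum() + t.sum()
        scores.append((2.0 * inter + eps) / (denom + eps))
    return float(np.mean(scores))

# Example with 3 classes (nuclei, cytoplasm, membrane) on random maps:
pred = np.random.randint(0, 3, size=(256, 256))
target = np.random.randint(0, 3, size=(256, 256))
print(dice_score(pred, target, num_classes=3))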
Manual annotation of Hematoxylin and Eosin (H&E) stained tissue images for deep learning classification is difficult, time consuming, and error-prone, particularly for multi-class and rare-class problems. Chemical probes in immunohistochemistry (IHC) or immunofluorescence (IF) can automatically tag cellular structures; however, chemical labeling is difficult to use in training a deep classifier for H&E images (e.g., through serial sectioning and registration). In this work, we leverage the novel Multiplexed Immunofluorescence (MxIF) microscopy method developed by General Electric Global Research Center (GE GRC), which allows sequential, stain-image-bleach (SSB) application of protein markers on formalin-fixed, paraffin-embedded (FFPE) samples followed by traditional H&E staining to build chemically annotated tissue maps of nuclei, cytoplasm, and cell membranes. This allows us to automate the creation of ground truth class-label maps for training an H&E-based tissue classifier. In this study, a tissue microarray consisting of 149 breast cancer and normal tissue cores was stained using MxIF for our three analytes, followed by traditional H&E staining. The MxIF stains for each TMA core were combined to create a "Virtual H&E" image, which was registered with the corresponding real H&E image. Each MxIF-stained spot was segmented to obtain a class-label map for each analyte, which was then applied to the real H&E image to build a dataset covering the three analytes. A convolutional neural network (CNN) was then trained to classify this dataset. This system achieved an overall accuracy of 70%, suggesting that the MxIF system can provide useful labels for identifying hard-to-distinguish structures. A U-Net was also trained to generate pseudo-IF stains from H&E, with similar results.
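As an illustration of the chemically derived labeling step, the sketch below builds a nuclei/cytoplasm/membrane class-label map by thresholding three MxIF channels. Otsu thresholding, the channel names (taken from the companion abstracts above), and the class precedence order are all assumptions; the study's actual segmentation pipeline is not described in the abstract.

# Sketch of building a class-label map by thresholding MxIF channels.
# Otsu thresholds and the precedence order (nuclei over membrane over
# cytoplasm) are assumptions for illustration only.
import numpy as np
from skimage.filters import threshold_otsu

BACKGROUND, NUCLEI, CYTOPLASM, MEMBRANE = 0, 1, 2, 3

def label_map(dapi, s6, nakatpase):
    """Each argument is a float array holding one MxIF channel."""
    labels = np.full(dapi.shape, BACKGROUND, dtype=np.uint8)
    labels[s6 > threshold_otsu(s6)] = CYTOPLASM
    labels[nakatpase > threshold_otsu(nakatpase)] = MEMBRANE
    labels[dapi > threshold_otsu(dapi)] = NUCLEI  # nuclei take precedence
    return labels

# Usage on random stand-in channels:
dapi, s6, nak = (np.random.rand(128, 128) for _ in range(3))
labels = label_map(dapi, s6, nak)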