Convolutional neural networks, the state of the art for image segmentation, have been successfully applied to histology images by many computational researchers. However, the translatability of this technology to clinicians and biological researchers is limited by the complex, undeveloped user interfaces of the code and the extensive computer setup required. We have developed a plugin with an easy-to-use graphical user interface for segmentation of whole slide images (WSIs). This plugin runs a state-of-the-art convolutional neural network for WSI segmentation in the cloud. It is built on the open-source tool HistomicsTK by Kitware Inc. (Clifton Park, NY), which provides remote data management and viewing abilities for WSI datasets. The ability to access this tool over the internet will facilitate widespread use by computational non-experts. Users can easily upload slides to a server where our plugin is installed and perform the segmentation analysis remotely. The plugin is open source and can, once trained, be applied to the segmentation of any pathological structure. As a proof of concept, we trained it to segment glomeruli from renal tissue images and demonstrated it on holdout tissue slides.
Histologic examination of interstitial fibrosis and tubular atrophy (IFTA) is critical to determine the extent of irreversible kidney injury in renal disease. The current clinical standard involves a pathologist's visual assessment of IFTA, which is prone to inter-observer variability. To address this diagnostic variability, we designed two case studies (CSs), involving seven pathologists, using HistomicsTK, a distributed system developed by Kitware Inc. (Clifton Park, NY). Twenty-five whole slide images (WSIs) were split into a training set of 21 and a validation set of four. The training set was composed of seven unique subsets, each provided to an individual pathologist along with the four common WSIs from the validation set. In CS 1, all pathologists individually annotated IFTA in their respective slides. These annotations were then used to train a deep learning algorithm to computationally segment IFTA. In CS 2, the manual and computational annotations from CS 1 were first reviewed by the annotators to improve concordance of IFTA annotation; both the manual and computational annotation processes were then repeated as in CS 1. Inter-observer concordance in the validation set was measured by Krippendorff's alpha (KA). The KA for the seven pathologists in CS 1 was 0.62 with CI [0.57, 0.67], and after reviewing each other's annotations in CS 2, 0.66 with CI [0.60, 0.72]. With the deep learner included as an eighth annotator, the respective CS 1 and CS 2 KA values were 0.58 with CI [0.52, 0.64] and 0.63 with CI [0.56, 0.69]. These results suggest that our annotation framework refines agreement of spatial annotation of IFTA and demonstrates a human-AI approach that significantly improves the development of computational models.
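As a point of reference for the agreement statistic used above, the following is a minimal sketch of Krippendorff's alpha for nominal labels, computed from a coincidence matrix. This is an illustrative, generic implementation (it assumes each unit is simply a list of labels assigned by the raters who annotated it); the study's actual computation over spatial IFTA annotations and its confidence intervals are more involved.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal ratings.

    units: list of lists; each inner list holds the labels assigned to
    one unit by the raters who rated it (missing ratings are omitted).
    Returns None when there is not enough paired data.
    """
    o = Counter()  # coincidence matrix over ordered label pairs
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue  # a unit rated by <2 raters contributes no pairs
        for a, b in permutations(ratings, 2):
            o[(a, b)] += 1.0 / (m - 1)

    n_c = Counter()  # marginal frequency of each label
    for (a, _b), w in o.items():
        n_c[a] += w
    n = sum(n_c.values())
    if n <= 1:
        return None

    # Observed and expected disagreement (nominal distance: 0/1).
    d_o = sum(w for (a, b), w in o.items() if a != b) / n
    d_e = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n * (n - 1))
    if d_e == 0:
        return 1.0  # no expected disagreement, e.g. all labels identical
    return 1.0 - d_o / d_e

# Example: four units, two raters each, one disagreement.
print(krippendorff_alpha_nominal([[0, 0], [0, 0], [1, 1], [0, 1]]))  # ≈ 0.533
```

Alpha of 1 indicates perfect agreement and 0 indicates agreement at chance level, so the CS 1 to CS 2 increases reported above (0.62 to 0.66, and 0.58 to 0.63 with the deep learner included) correspond to a modest but consistent reduction in observed disagreement relative to chance.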