Certain pathology workflows, such as the classification and grading of prostate adenocarcinoma under the Gleason grading scheme, stand to gain speed and objectivity by incorporating contemporary digital image analysis methods. We compiled a dataset of 513 high-resolution image tiles from primary prostate adenocarcinoma in which individual glands and stroma were demarcated and graded by hand. With this unique dataset, we tested four convolutional neural network architectures (FCN-8s, two SegNet variants, and a multi-scale U-Net) for performance in semantic segmentation of high- and low-grade tumors. In a 5-fold cross-validation experiment, the FCN-8s architecture achieved a mean intersection-over-union (mIoU) of 0.759 and an accuracy of 0.87, while the less complex U-Net architecture achieved a mIoU of 0.738 and an accuracy of 0.885. Applied to whole-slide images not used for training, the FCN-8s architecture achieved a mIoU of 0.857 in annotated tumor foci, with a multiresolution processing time averaging 11 minutes per slide. The three architectures tested on whole slides all achieved areas under the Receiver Operating Characteristic curve near 1, demonstrating the suitability of semantic segmentation convolutional neural networks for detecting and grading prostate cancer foci in radical prostatectomies.
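As a point of reference for the headline metric, mIoU is the per-class intersection-over-union between predicted and ground-truth label maps, averaged over classes. A minimal NumPy sketch (not the authors' evaluation code, which is not provided here) might look like:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union between two integer label maps.

    Classes absent from both maps are skipped so they do not
    distort the average.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x2 example with two classes:
# class 0: intersection 1, union 2 -> 0.5
# class 1: intersection 2, union 3 -> 0.667
target = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
print(round(mean_iou(pred, target, 2), 3))  # 0.583
```

Averaging over classes rather than pixels is what distinguishes mIoU from plain accuracy, and is why the two figures reported above (e.g., 0.759 mIoU vs. 0.87 accuracy for FCN-8s) can differ substantially when class frequencies are imbalanced.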