Proceedings Article | 15 March 2018
KEYWORDS: Convolutional neural networks, Image analysis, Image classification, Data processing, Visualization, Cancer, High speed imaging, Imaging systems, Computing systems, Image processing
Single-cell classification based on cells’ visual images, i.e., their phenotypes, can greatly complement genomic techniques for anomaly detection, which in turn has the potential to assist early cancer diagnosis. A high-speed imaging system is typically needed to capture the individual cell images, and the process also involves big-data computation, since a large number of cells must be analyzed and classified. Here we focus on the latter: we devise a deep convolutional neural network (CNN) and show its efficacy for the task.
Specifically, using asymmetric-detection time-stretch optical microscopy (ATOM) for fast image capture, we obtain datasets of four cell types (MB231, MCF7, THP1, and PBMC) exceeding 900,000 cell images. After preprocessing the data, such as discarding empty images and adjusting for different experimental conditions, we build an eight-layer network in which the first six layers alternate between convolutional and pooling layers, and the last two are fully connected layers. We use the rectified linear unit (ReLU) as the nonlinear activation function and max-pooling for downsampling. We train the network with 65% of the data, using 15% for validation and the remaining 20% for testing. Comparing against other experimental settings and classification schemes, we find that a large data volume, even with lower-resolution images, can outperform the opposite setting, and that the CNN is a more reliable scheme than other machine-learning algorithms such as k-nearest neighbor and support vector machine, especially when fewer input data are available.
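To make the described architecture concrete, the following is a minimal sketch of an eight-layer CNN of the kind outlined above (three convolutional layers alternating with three max-pooling layers, followed by two fully connected layers, with ReLU activations), together with a 65/15/20 train/validation/test split. The filter counts, kernel sizes, input resolution, and optimizer are illustrative assumptions, not values taken from the paper.

```python
# Sketch of an eight-layer CNN with alternating conv/pool layers and two
# fully connected layers, assuming Keras. Filter counts, kernel sizes,
# input size, and optimizer are placeholders, not the authors' settings.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 4            # MB231, MCF7, THP1, PBMC
INPUT_SHAPE = (96, 96, 1)  # assumed single-channel ATOM image resolution

model = keras.Sequential([
    keras.Input(shape=INPUT_SHAPE),
    layers.Conv2D(16, 5, activation="relu", padding="same"),  # conv 1
    layers.MaxPooling2D(2),                                   # pool 1
    layers.Conv2D(32, 5, activation="relu", padding="same"),  # conv 2
    layers.MaxPooling2D(2),                                   # pool 2
    layers.Conv2D(64, 3, activation="relu", padding="same"),  # conv 3
    layers.MaxPooling2D(2),                                   # pool 3
    layers.Flatten(),
    layers.Dense(256, activation="relu"),                     # fully connected 1
    layers.Dense(NUM_CLASSES, activation="softmax"),          # fully connected 2
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

def split_indices(n_samples, seed=0):
    """Return shuffled index sets for a 65/15/20 train/val/test split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train, n_val = int(0.65 * n_samples), int(0.15 * n_samples)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])
```

Max-pooling after each convolution halves the spatial resolution, so the three pooling stages reduce the assumed 96x96 input to 12x12 feature maps before the fully connected layers, which keeps the dense layers small even for a dataset of several hundred thousand cell images.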