Entity resolution is an important area of research with a wide range of applications. In this paper we present a framework
for developing a dynamic entity profile that is constructed as matching entity records are discovered. The proposed
framework utilizes a fuzzy rule base that can match entities with a given error rate. A genetic algorithm is used to
optimize an initial population of random fuzzy rule bases using a set of labeled training data. This approach
demonstrated an F-score of 84% on a held-out test set. The profile-linking process uses a configurable fitness
measure to emphasize different search properties (precision or recall).
The approach used for entity resolution in this framework can be extended to other applications, such as searching for
similar video files. Spatial and temporal attributes can be extracted from the video and an optimal fuzzy rule base can be
evolved.
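A minimal sketch of the general idea, not the paper's implementation: a fuzzy rule base scored by F-score on labeled record pairs and optimized with a simple genetic algorithm. The attribute names, similarity measure, thresholds, and GA parameters below are illustrative assumptions.

import random
from difflib import SequenceMatcher

ATTRIBUTES = ["name", "address", "phone"]          # hypothetical record fields

def similarity(a, b):
    """Fuzzy string similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def match_score(rules, rec1, rec2):
    """Weighted fuzzy match: a rule fires when its attribute similarity
    exceeds the rule's threshold."""
    total = sum(w for (_, _, w) in rules) or 1.0
    fired = sum(w for (attr, thr, w) in rules
                if similarity(rec1[attr], rec2[attr]) >= thr)
    return fired / total

def f_score(rules, labeled_pairs, decision=0.5):
    """Fitness = F-score on labeled (rec1, rec2, is_match) training pairs."""
    tp = fp = fn = 0
    for r1, r2, is_match in labeled_pairs:
        predicted = match_score(rules, r1, r2) >= decision
        tp += predicted and is_match
        fp += predicted and not is_match
        fn += (not predicted) and is_match
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def random_rule_base():
    return [(a, random.uniform(0.5, 0.95), random.random()) for a in ATTRIBUTES]

def evolve(labeled_pairs, pop_size=30, generations=50):
    population = [random_rule_base() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda rb: f_score(rb, labeled_pairs), reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(p1, p2)]   # uniform crossover
            if random.random() < 0.2:                               # mutate one threshold
                i = random.randrange(len(child))
                a, thr, w = child[i]
                child[i] = (a, min(0.99, max(0.1, thr + random.uniform(-0.1, 0.1))), w)
            children.append(child)
        population = survivors + children
    return max(population, key=lambda rb: f_score(rb, labeled_pairs))

A fitness measure biased toward precision or recall, as described above, could be obtained by replacing f_score with a weighted F-beta score.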
Humans have a general understanding of their environment. We possess a sense of what is consistent and inconsistent about the environment based on our prior experience. Any aspect of a scene that does not fit this definition of normalcy tends to be classified as an inconsistent event, also referred to as a novel event. An example is a casual observer standing on a bridge over a freeway, tracking vehicle traffic: vehicles traveling at or around the speed limit are generally ignored, while a vehicle traveling at a much higher (or lower)
speed is subject to one's immediate attention. In this paper, we present a computational learning-based framework for novelty detection in video sequences. The framework extracts low-level features from scenes based on focus-of-attention theory and combines unsupervised learning with habituation theory to learn these features. The paper presents results from our experiments on natural video streams for identifying novelty in the velocity of moving objects and in static changes in the scene.
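A minimal sketch of how habituation can be combined with unsupervised clustering of a low-level feature (here, object speed). The decay equation, cluster radius, and thresholds are assumptions for illustration, not the paper's parameters.

class HabituatingCluster:
    def __init__(self, center, y0=1.0, tau=0.1, alpha=1.0):
        self.center = center      # prototype feature value (e.g., speed)
        self.y = y0               # habituation level; high = still novel
        self.y0, self.tau, self.alpha = y0, tau, alpha

    def update(self, activated):
        # Discrete habituation dynamics: decay when stimulated, recover otherwise.
        stimulus = 1.0 if activated else 0.0
        self.y += self.tau * (self.alpha * (self.y0 - self.y) - stimulus)
        self.y = min(max(self.y, 0.0), self.y0)

class NoveltyDetector:
    def __init__(self, radius=5.0, novelty_threshold=0.6):
        self.clusters = []
        self.radius = radius
        self.threshold = novelty_threshold

    def observe(self, feature):
        """Return True if this feature value is considered novel."""
        nearest = min(self.clusters, key=lambda c: abs(c.center - feature),
                      default=None)
        if nearest is None or abs(nearest.center - feature) > self.radius:
            nearest = HabituatingCluster(feature)       # unseen region: new cluster
            self.clusters.append(nearest)
        novel = nearest.y > self.threshold
        for c in self.clusters:
            c.update(activated=(c is nearest))
        return novel

# Typical traffic speeds become familiar through habituation; the outlier is flagged.
detector = NoveltyDetector()
for speed in [60, 62, 61, 59, 63, 60, 61, 120]:
    print(speed, detector.observe(speed))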
With the advent of computed radiography (CR) and digital radiography (DR), image understanding and classification in medical image databases have attracted considerable attention. In this paper, we propose a knowledge-based image understanding and classification system for medical image databases. An object-oriented knowledge model is introduced, based on the idea that the content features of a medical image must hierarchically match the related knowledge model; by finding the best-matching model, the input image can be classified. The implementation of the system includes three stages. The first stage focuses on matching the coarse pattern of the model class and has three steps: image preprocessing, feature extraction, and neural network classification. Once the coarse shape classification is done, a small set of plausible model candidates is employed for a detailed match in the second stage, whose output indicates which models are likely contained in the processed image. Finally, an evaluation strategy is used to further confirm the results. The performance of the system has been tested on different types of digital radiographs, including pelvis, ankle, and elbow. The experimental results suggest that the system prototype is applicable and robust, with an accuracy of approximately 70% on our image databases.
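An illustrative skeleton of the three-stage control flow described above (not the authors' system). The feature extraction and matching functions are trivial placeholders so that only the coarse-to-fine structure is shown.

from dataclasses import dataclass
from typing import Callable, List, Sequence

def preprocess(image):
    return image                        # placeholder for real preprocessing

def extract_coarse_features(image) -> Sequence[float]:
    return image                        # placeholder: treat the input as a feature vector

@dataclass
class KnowledgeModel:
    name: str                                           # e.g. "pelvis", "ankle", "elbow"
    coarse_match: Callable[[Sequence[float]], float]    # coarse (neural-network style) score
    detailed_match: Callable[[object], float]           # component-level match score
    min_score: float = 0.5                              # evaluation threshold

def classify(image, models: List[KnowledgeModel], n_candidates: int = 3):
    # Stage 1: preprocessing, feature extraction, coarse classification.
    features = extract_coarse_features(preprocess(image))
    candidates = sorted(models, key=lambda m: m.coarse_match(features),
                        reverse=True)[:n_candidates]
    # Stage 2: detailed match against the remaining plausible model candidates.
    scored = [(m, m.detailed_match(image)) for m in candidates]
    # Stage 3: evaluation confirms the best candidate or reports no match.
    best_model, best_score = max(scored, key=lambda pair: pair[1])
    return best_model.name if best_score >= best_model.min_score else None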
Object-oriented knowledge representation is considered a natural and effective approach. Nevertheless, the use of object-oriented techniques within complex image analysis has not undergone the rapid growth seen in other fields. We argue that one of the major problems is the difficulty of conceiving a comprehensive framework that copes with the different abstraction levels and vision task operations. With the goal of overcoming this drawback, we present a new knowledge model for medical image content analysis based on the object-oriented paradigm. The new model abstracts common properties from different types of medical images by using three attribute parts: description, component, and semantic graph. It also specifies actions to schedule the detection procedure, deform the shape of model components to match the corresponding anatomies in images, select the best match candidates, and verify combination graphs of detected candidates against the semantic graph defined in the model. The performance of the proposed model has been tested on pelvis digital radiographs. Initial results are encouraging.
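A minimal object-oriented sketch of such a model (three attribute parts plus the four actions). Only the interface mirrors the description above; the method bodies are placeholders, and all names are illustrative.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Component:
    name: str                       # anatomical part, e.g. a femoral head
    shape_params: List[float]       # deformable shape parameters

@dataclass
class AnatomicalModel:
    description: Dict[str, str]                    # modality, body part, view, ...
    components: List[Component]                    # deformable model components
    semantic_graph: List[Tuple[str, str, str]]     # (component, relation, component)

    def schedule_detection(self) -> List[Component]:
        """Order in which components are searched for in the image."""
        return self.components                     # placeholder: declared order

    def deform_to_match(self, component: Component, image) -> List[List[float]]:
        """Deform the component shape toward candidate anatomies; return candidates."""
        return [component.shape_params]             # placeholder

    def select_best(self, candidates: List[List[float]]) -> List[float]:
        return candidates[0]                        # placeholder: first candidate

    def verify(self, detected: Dict[str, List[float]]) -> bool:
        """Check detected components against the semantic-graph relations."""
        return all(a in detected and b in detected for a, _, b in self.semantic_graph)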
This paper presents an algorithm for segmentation of computed radiography (CR) images of extremities into bone and soft tissue regions. The algorithm is region based: regions are constructed using a growing procedure with two different statistical tests. Following the growing process, a tissue classification procedure is employed to label each region as either bone or soft tissue. This binary classification is achieved by a voting procedure that clusters the regions in each neighborhood system into two classes. The voting procedure provides a crucial compromise between local and global analysis of the image, which is necessary due to the strong exposure variations seen on the imaging plate. In addition, some regions are large enough that exposure variations can be observed across them, which makes it necessary to use overlapping blocks during classification. After the classification step, the resulting bone and soft tissue regions are refined by fitting a second-order surface to each tissue and re-evaluating the label of each region according to its distance from the surfaces. The performance of the algorithm was tested on a variety of extremity images using manually segmented images as the gold standard. The experiments showed that our algorithm produced bone boundaries with an average area overlap of 90% relative to the gold standard.
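A simplified sketch of the two core ideas, statistical region growing followed by a per-block two-class vote on region intensity. The single z-score test and median split below are stand-ins for the paper's two statistical tests and clustering, and all thresholds are assumptions.

import numpy as np

def grow_regions(img, z_thresh=2.0):
    """Label pixels by region growing; a pixel joins a region if it lies within
    z_thresh standard deviations of the region's running mean."""
    labels = np.zeros(img.shape, dtype=int)
    next_label = 0
    for seed in zip(*np.where(labels == 0)):        # visit every pixel once
        if labels[seed]:
            continue
        next_label += 1
        stack, values = [seed], [img[seed]]
        labels[seed] = next_label
        while stack:
            y, x = stack.pop()
            mean, std = np.mean(values), np.std(values) + 1e-6
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and labels[ny, nx] == 0
                        and abs(img[ny, nx] - mean) / std < z_thresh):
                    labels[ny, nx] = next_label
                    values.append(img[ny, nx])
                    stack.append((ny, nx))
    return labels

def vote_bone_soft(img, labels, block=64):
    """In each overlapping block, split region means into two clusters and vote:
    the brighter cluster is tentatively bone, the darker one soft tissue."""
    votes = {}                                      # region label -> [bone, soft]
    step = block // 2                               # 50% overlap between blocks
    for y0 in range(0, img.shape[0], step):
        for x0 in range(0, img.shape[1], step):
            window = labels[y0:y0 + block, x0:x0 + block]
            img_window = img[y0:y0 + block, x0:x0 + block]
            regions = [r for r in np.unique(window) if r != 0]
            if len(regions) < 2:
                continue
            means = {r: float(img_window[window == r].mean()) for r in regions}
            cut = np.median(list(means.values()))   # crude local 2-class split
            for r, m in means.items():
                votes.setdefault(r, [0, 0])[0 if m > cut else 1] += 1
    return {r: ("bone" if b >= s else "soft") for r, (b, s) in votes.items()}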
Identification of anatomical structure boundaries in radiographs is a necessary step for detecting abnormalities. The aim of this study is to develop a knowledge-based approach to automatically segment the boundaries of structures of interest in X-ray images. Our method contains four main steps. First, the original gray-level radiograph is segmented into a binary image. Second, the region of interest (ROI) is detected by matching the features extracted from the binary image with a pre-defined anatomical model, and the location of the ROI serves as a landmark for the subsequent search. Third, an anatomical model is hierarchically applied to the original image; correlation values and anatomical constraints are used to select the closest-matching edge candidates, which are then connected, according to the shape of the global model, to generate the boundaries of the structure of interest. Finally, an active contour model is used to refine the boundaries. The results show the effectiveness and efficiency of the proposed method.
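A sketch of the four-step flow, not the authors' implementation. Steps 1 and 4 use standard scikit-image operations; the model-driven steps 2 and 3 are stubbed behind a hypothetical anatomical_model object, since they depend on how the model is defined.

import numpy as np
from skimage.filters import threshold_otsu, gaussian
from skimage.segmentation import active_contour

def segment_structure(image, anatomical_model):
    # Step 1: segment the gray-level radiograph into a binary image.
    binary = image > threshold_otsu(image)

    # Step 2: locate the ROI by matching binary-image features (e.g. area,
    # orientation, moments) to the pre-defined anatomical model.
    roi = anatomical_model.locate_roi(binary)                 # hypothetical model API

    # Step 3: hierarchically apply the model inside the ROI; keep edge candidates
    # satisfying correlation and anatomical constraints, then connect the best
    # candidates into an initial boundary (N x 2 array of points).
    initial_boundary = anatomical_model.best_boundary(image, roi)   # hypothetical

    # Step 4: refine the boundary with an active contour (snake).
    refined = active_contour(gaussian(image, sigma=3),
                             np.asarray(initial_boundary, dtype=float),
                             alpha=0.01, beta=10.0, gamma=0.001)
    return refined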
A multi-resolution unsharp masking (USM) technique is developed for image feature enhancement in digital mammogram images. This technique includes four processing phases: (1) determination of the parameters of the multi-resolution analysis (MRA) based on the properties of the images; (2) multi-resolution decomposition of the original image into sub-band images via a wavelet transform with perfect reconstruction filters; (3) modification of the sub-band images with an adaptive unsharp masking technique; and (4) reconstruction of the image from the modified sub-band images via the inverse wavelet transform. The adaptive unsharp masking is applied to the sub-band images to modify pixel values based on the edge components at various frequency scales. The smoothing and gain factor parameters employed in the unsharp masking are determined according to the resolution, frequency, and energy content of the sub-band images. Experimental results show that this technique is able to enhance the contrast of regions of interest (microcalcification clusters) in mammogram images.
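A minimal sketch of phases 2 to 4 using PyWavelets (assumed available). The fixed per-level gain schedule is a stand-in for the paper's adaptive smoothing and gain selection, which depends on sub-band energy content.

import numpy as np
import pywt

def multiresolution_usm(image, wavelet="db4", levels=3, gains=(2.0, 1.5, 1.2)):
    # Phase 2: decompose the image into sub-bands (perfect-reconstruction filters).
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)

    # Phase 3: boost the edge (detail) sub-bands; gains[0] applies to the finest level.
    enhanced = [coeffs[0]]                      # keep the approximation band unchanged
    detail_levels = coeffs[1:]                  # ordered coarsest -> finest
    for i, (cH, cV, cD) in enumerate(detail_levels):
        g = gains[min(len(detail_levels) - 1 - i, len(gains) - 1)]
        enhanced.append((cH * g, cV * g, cD * g))

    # Phase 4: reconstruct the image from the modified sub-band images.
    out = pywt.waverec2(enhanced, wavelet)
    return np.clip(out[: image.shape[0], : image.shape[1]], 0, None)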
A multi-stage system combining image processing and artificial neural network techniques is developed for the detection of microcalcifications in digital mammogram images. The system consists of (1) a preprocessing stage employing box-rim filtering and global thresholding to enhance object-to-background contrast; (2) a preliminary selection stage involving body-part identification, morphological erosion, connected component analysis, and suspect region segmentation to select potential microcalcification candidates; and (3) a neural network-based pattern classification stage including feature map extraction, a pattern recognition neural network, and a decision-making neural network for accurate determination of true and false positive microcalcification clusters. Microcalcification suspects are captured and stored in 32 by 32 image blocks after the first two processing stages. A set of radially sampled pixel values is used as the feature map to train the neural nets, avoiding lengthy training times as well as insufficient representation. The first (pattern recognition) network is trained to distinguish true microcalcifications from four categories of false positive regions, whereas the second (decision) network is developed to reduce false positive detections and hence increase detection accuracy. Experimental results show that this system is able to identify true clusters at an accuracy of 93% with 2.9 false positive microcalcifications per image.
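An illustrative sketch of two of the steps named above: a box-rim contrast filter for the preprocessing stage and radial sampling of a 32 by 32 suspect block as the feature map. The filter sizes and sampling pattern are assumptions, not the deployed system's settings.

import numpy as np
from scipy.ndimage import uniform_filter

def box_rim_filter(image, box=3, rim=9):
    """Contrast of a small box against its surrounding rim: the box mean minus
    the mean of the ring around it, which highlights small bright spots."""
    img = image.astype(float)
    box_mean = uniform_filter(img, size=box)
    big_mean = uniform_filter(img, size=rim)
    rim_mean = (big_mean * rim**2 - box_mean * box**2) / (rim**2 - box**2)
    return box_mean - rim_mean

def radial_feature_map(block, n_angles=16, n_radii=8):
    """Sample a 32x32 suspect block along rays from its center, giving a compact
    feature vector instead of all 1024 pixel values."""
    assert block.shape == (32, 32)
    cy = cx = 15.5                                       # block center
    feats = []
    for a in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        for r in np.linspace(1, 15, n_radii):
            y = int(round(cy + r * np.sin(a)))
            x = int(round(cx + r * np.cos(a)))
            feats.append(block[y, x])
    return np.asarray(feats, dtype=float)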
Image compression reduces the amount of space necessary to store digital images and allows quick transmission of images to other hospitals, departments, or clinics. However, the degradation of image quality due to compression may not be acceptable to radiologists, or it may affect diagnostic results. A preliminary study with small-scale test procedures was conducted using several chest images with common lung diseases, compressed with JPEG and wavelet techniques at various ratios. Twelve board-certified radiologists were recruited to perform two types of experiments. In the first part of the experiment, the presence of lung disease on six images was rated by the radiologists; the images presented were either uncompressed or compressed at 32:1 or 48:1 ratios. In the second part, the radiologists were asked to make subjective ratings by comparing the image quality of the uncompressed version of an image with the compressed version of the same image, and then judging the acceptability of the compressed image for diagnosis; this part examined a finer range of compression ratios (8:1, 16:1, 24:1, 32:1, 44:1, and 48:1). In all cases, radiologists were able to make an accurate diagnosis on the given images with little difficulty, but the perceptibility of image degradation increased as the compression ratio increased. At higher compression ratios, JPEG images were judged to be less acceptable than wavelet-based images; however, the radiologists believed that all of the images were still acceptable for diagnosis. Results of this study will be used for later comparison with large-scale studies.
Three compression algorithms were compared using contrast-detail (CD) analysis. Two phantoms were designed to simulate computed tomography (CT) scans of the head. The first was based on CT scans of a plastic cylinder containing water. The second was formed by combining a CT scan of a head with a scan of the water phantom; the soft tissue of the brain was replaced by a subimage containing only water. The compression algorithms studied were the full-frame discrete cosine transform (FDCT) algorithm, the Joint Photographic Experts Group (JPEG) algorithm, and a wavelet algorithm. Both the wavelet and JPEG algorithms affected regions of the image near the boundary of the skull. The FDCT algorithm propagated false edges throughout the region interior to the skull. The wavelet algorithm affected the images less than the other compression algorithms. The presence of the skull especially affected observer performance on the FDCT-compressed images. All of the findings demonstrated a flattening of the CD curve for large lesions. The results of a compression study using lossy compression algorithms are dependent on the characteristics of the image and the nature of the diagnostic task. Because of the high-density bone of the skull, head CT images present a much more difficult compression problem than chest x-rays. We found no significant differences among the CD curves for the tested compression algorithms.
Key Words: Image compression, contrast-detail analysis.
In this paper we propose an image enhancement algorithm that converts raw data from a storage phosphor scanner into a diagnostically optimized image. A spatial segmentation algorithm is applied to extract the body part from a clinical image, and histogram analysis is then used to develop the tonescale for the region of interest. This approach reduces sensitivity to non-anatomical regions and allows full use of the display dynamic range for image viewing. The algorithms have been tested on clinical images and the results were reviewed by radiologists. Currently, these algorithms are being used successfully in the clinical environment.
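A minimal sketch of the overall idea, not the clinical algorithm: restrict the histogram analysis to a segmented body-part mask and map that intensity range onto the full display range. The crude threshold segmentation and percentile window below stand in for the paper's segmentation and tonescale design.

import numpy as np

def segment_body_part(raw):
    """Placeholder segmentation: treat near-background exposure as non-anatomical."""
    background = np.percentile(raw, 5)
    return raw > background

def tonescale_lut(raw, mask, bits_in=12, bits_out=8, p_low=1.0, p_high=99.0):
    """Build a look-up table from pixels inside the body-part mask only, so
    collimation borders and direct exposure do not compress the display range."""
    roi = raw[mask]
    lo, hi = np.percentile(roi, [p_low, p_high])
    codes = np.arange(2 ** bits_in, dtype=float)
    lut = (codes - lo) / max(hi - lo, 1.0) * (2 ** bits_out - 1)
    return np.clip(lut, 0, 2 ** bits_out - 1).astype(np.uint8)

def render_for_display(raw, bits_in=12):
    mask = segment_body_part(raw)
    lut = tonescale_lut(raw, mask, bits_in=bits_in)
    return lut[np.clip(raw, 0, 2 ** bits_in - 1).astype(int)]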
Wavelet-based image compression is receiving significant attention, largely because of its potential for good image quality at low bit rates. In medical applications, low bit rate coding may not be the primary concern, and it is not obvious that wavelet techniques are significantly superior to more established techniques at higher quality levels. In this work we present a straightforward comparison between a wavelet decomposition and the well-known discrete cosine transform decomposition (as used in the JPEG compression standard), using comparable quantization and encoding strategies to isolate fundamental differences between the two methods. Our focus is on the compression of single-frame, monochrome images taken from several common modalities (chest and bone x-rays and mammograms).
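A hedged sketch of the comparison idea: apply the same scalar quantizer to an 8x8 block-DCT decomposition and to a wavelet decomposition, so any quality difference comes from the transform rather than the coding strategy. Entropy coding is omitted, the quantization step q and wavelet choice are illustrative, and image dimensions are assumed divisible by the block size.

import numpy as np
import pywt
from scipy.fft import dctn, idctn

def quantize(c, q):
    return np.round(c / q) * q                       # same uniform quantizer for both

def dct_roundtrip(image, q=16.0, block=8):
    out = np.zeros_like(image, dtype=float)
    for y in range(0, image.shape[0] - block + 1, block):
        for x in range(0, image.shape[1] - block + 1, block):
            tile = image[y:y + block, x:x + block].astype(float)
            coeffs = quantize(dctn(tile, norm="ortho"), q)
            out[y:y + block, x:x + block] = idctn(coeffs, norm="ortho")
    return out

def wavelet_roundtrip(image, q=16.0, wavelet="bior4.4", levels=4):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    quantized = [quantize(coeffs[0], q)] + [
        tuple(quantize(band, q) for band in detail) for detail in coeffs[1:]
    ]
    rec = pywt.waverec2(quantized, wavelet)
    return rec[: image.shape[0], : image.shape[1]]

def rmse(a, b):
    """Simple distortion measure for comparing the two reconstructions."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))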