Journal images represent an important part of the knowledge stored in the medical literature. Figure classification has received much attention, as information about the image type can be used in a variety of contexts to focus image search and to filter out unwanted information or "noise", for example non-clinical images. A major problem in figure classification is that many figures in the biomedical literature are compound figures and often contain more than a single figure type. Some journals separate compound figures into several parts, but many do not, so manual separation is currently required. In this work, a compound figure separation technique is proposed and implemented based on the systematic detection and analysis of uniform space gaps. The method is evaluated on a dataset of journal figures from the open access literature that was created for the ImageCLEF 2012 benchmark and contains about 3000 compound figures. Automatic tools can reach a relatively high accuracy in separating compound figures; to increase accuracy further, the detection process needs to be improved and over-separation avoided through more powerful analysis strategies. The tools described here have also been applied to a database of approximately 150’000 compound figures from the biomedical literature, making these images available as separate figures for further image analysis and allowing important information to be filtered from them.
KEYWORDS: Visualization, Radiology, Image classification, Medical imaging, Computed tomography, Information visualization, Web services, Magnetic resonance imaging, Biomedical optics, Image retrieval
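The abstract only names the core idea of the separation technique, namely detecting and analysing uniform space gaps. The following is a minimal, hypothetical sketch of such a gap-based splitter in Python; the function names, the thresholds (`min_gap`, `min_part`, the background level) and the recursive splitting rule are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of gap-based compound figure separation.
# The image is recursively split at the widest band of near-uniform
# background found in its row or column intensity profile.
import numpy as np
from PIL import Image

def find_uniform_gaps(profile, background=250, min_gap=20):
    """Return (start, end) pairs of runs where the 1-D intensity profile
    stays at or above the background level for at least min_gap pixels."""
    gaps, start = [], None
    for i, v in enumerate(profile):
        if v >= background:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_gap:
                gaps.append((start, i))
            start = None
    if start is not None and len(profile) - start >= min_gap:
        gaps.append((start, len(profile)))
    return gaps

def split_compound_figure(img, min_gap=20, min_part=50):
    """Recursively split a figure at the widest uniform gap, trying
    vertical cuts (column profile) before horizontal cuts (row profile)."""
    arr = np.asarray(img.convert("L"), dtype=np.float32)
    for axis in (0, 1):                      # 0: column means, 1: row means
        profile = arr.mean(axis=axis)
        gaps = [g for g in find_uniform_gaps(profile, min_gap=min_gap)
                if min_part < (g[0] + g[1]) // 2 < profile.size - min_part]
        if gaps:
            cut = max(gaps, key=lambda g: g[1] - g[0])
            mid = (cut[0] + cut[1]) // 2
            if axis == 0:                    # vertical cut at column mid
                parts = (img.crop((0, 0, mid, img.height)),
                         img.crop((mid, 0, img.width, img.height)))
            else:                            # horizontal cut at row mid
                parts = (img.crop((0, 0, img.width, mid)),
                         img.crop((0, mid, img.width, img.height)))
            return (split_compound_figure(parts[0], min_gap, min_part) +
                    split_compound_figure(parts[1], min_gap, min_part))
    return [img]                             # no further uniform gap found

# Example usage: parts = split_compound_figure(Image.open("figure1.png"))
```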
The volume of biomedical literature published regularly has grown strongly in recent years, and keeping up to date even in narrow domains is difficult. Images carry essential information about their articles and, in combination with keyword search, can help users browse large volumes of articles more quickly. Content-based image retrieval supports the retrieval of visual content. To facilitate the retrieval of visual information, image categorisation can be an important first step. To represent scientific articles visually, medical images need to be separated from general images such as flowcharts or graphs, which contain little clinical information, to facilitate browsing. Medical modality classification is a second step to focus the search.
The techniques described in this article first classify images into broad categories. In a second step, the images are further classified into exact medical modalities. The system combines the Scale-Invariant Feature Transform (SIFT) with density-based clustering (DENCLUE). Visual words are first created globally to differentiate the broad categories; then, within each category, a new visual vocabulary is created for modality classification. The results show the difficulty of differentiating between some modalities by visual means alone. On the other hand, the accuracy improvement of the two-step approach shows the usefulness of the method. The system is currently being integrated into the Goldminer image search engine of the ARRS (American Roentgen Ray Society) as a web service, allowing image search to be focused automatically on clinically relevant images.
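To make the two-step pipeline concrete, the following is a minimal Python sketch of a bag-of-visual-words approach with a global vocabulary for broad categories and per-category vocabularies for modality classification. It uses OpenCV's SIFT, substitutes k-means for the paper's DENCLUE clustering, and assumes a linear SVM as the classifier; all function names and parameter values are illustrative, not the authors' code.

```python
# Hypothetical two-step bag-of-visual-words classification sketch.
# Step 1: global visual vocabulary + broad-category classifier.
# Step 2: per-category vocabulary + modality classifier.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

sift = cv2.SIFT_create()

def sift_descriptors(gray_images):
    """Extract SIFT descriptors for each grayscale image (may be empty)."""
    per_image = []
    for img in gray_images:
        _, desc = sift.detectAndCompute(img, None)
        per_image.append(desc if desc is not None else np.zeros((0, 128), np.float32))
    return per_image

def build_vocabulary(descriptor_sets, n_words=500):
    """Cluster pooled descriptors into a visual vocabulary
    (k-means here; the paper uses DENCLUE)."""
    pooled = np.vstack(descriptor_sets)
    return KMeans(n_clusters=n_words, n_init=3).fit(pooled)

def bow_histogram(desc, vocab):
    """Normalised histogram of visual-word assignments for one image."""
    hist = np.zeros(vocab.n_clusters, np.float32)
    if len(desc):
        for w in vocab.predict(desc):
            hist[w] += 1
        hist /= hist.sum()
    return hist

def train_two_step(gray_images, broad_labels, modality_labels):
    """Train the broad-category classifier, then one modality
    classifier per broad category with its own vocabulary."""
    descs = sift_descriptors(gray_images)
    global_vocab = build_vocabulary(descs)
    X = np.array([bow_histogram(d, global_vocab) for d in descs])
    broad_clf = LinearSVC().fit(X, broad_labels)

    per_category = {}
    for cat in set(broad_labels):
        idx = [i for i, c in enumerate(broad_labels) if c == cat]
        vocab = build_vocabulary([descs[i] for i in idx])
        Xc = np.array([bow_histogram(descs[i], vocab) for i in idx])
        clf = LinearSVC().fit(Xc, [modality_labels[i] for i in idx])
        per_category[cat] = (vocab, clf)
    return global_vocab, broad_clf, per_category
```

At prediction time the same two stages would be applied in order: the global vocabulary and broad classifier select the category, and that category's vocabulary and classifier then assign the modality.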