Traditional content-based image retrieval methods based on learning from examples analyze and attempt to understand the high-level semantics of an image as a whole. They typically apply case-based reasoning techniques to interpret and retrieve images by measuring the semantic similarity or relatedness between example images and candidate images. The drawback of this traditional content-based image retrieval paradigm is that aggregating all of the imagery content in an image leads to tremendous variation from image to image. Hence, semantically related images may share only a small pocket of common elements, if any. Such variability in visual composition poses great challenges to content-based image retrieval methods that operate at the granularity of entire images. In this study, we explore a new content-based image retrieval algorithm that mines visual patterns of finer granularity inside a whole image to identify visual instances that can more reliably and generically represent a given search concept. We performed preliminary experiments to validate our new idea for content-based image retrieval and obtained very encouraging results.
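The contrast between whole-image matching and finer-granularity matching can be sketched as follows. This is a hypothetical illustration, not the paper's actual algorithm: it uses plain intensity histograms as stand-in local descriptors and scores an image pair by its best patch-to-patch match, so that a small shared region is not drowned out by unrelated surrounding content.

```python
import numpy as np

def patch_features(img, patch=8):
    """Slide a non-overlapping patch grid over the image and return one
    intensity histogram per patch (a stand-in for richer local descriptors)."""
    h, w = img.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = img[y:y + patch, x:x + patch]
            hist, _ = np.histogram(block, bins=16, range=(0, 256), density=True)
            feats.append(hist)
    return np.array(feats)

def cosine(a, b):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def whole_image_similarity(img1, img2):
    """Traditional paradigm: one global descriptor summarizing the whole image."""
    h1, _ = np.histogram(img1, bins=16, range=(0, 256), density=True)
    h2, _ = np.histogram(img2, bins=16, range=(0, 256), density=True)
    return cosine(h1, h2)

def best_patch_similarity(img1, img2, patch=8):
    """Finer granularity: the pair's score is its best patch-to-patch match,
    so one shared visual instance can dominate unrelated background content."""
    f1, f2 = patch_features(img1, patch), patch_features(img2, patch)
    return max(cosine(a, b) for a in f1 for b in f2)

# Demo: two images with very different backgrounds that share one
# identical bright patch (the common "visual instance").
rng = np.random.default_rng(0)
a = rng.integers(0, 60, (32, 32)).astype(float)    # dark background
b = rng.integers(180, 250, (32, 32)).astype(float)  # bright background
a[:8, :8] = 255.0
b[:8, :8] = 255.0

print("whole-image similarity:", whole_image_similarity(a, b))
print("best patch similarity: ", best_patch_similarity(a, b))
```

Because the two backgrounds occupy disjoint histogram bins, the global descriptors disagree, while the shared patch yields a near-perfect local match, which is precisely the failure mode of whole-image matching that the proposed finer-granularity mining is meant to address.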