Counterfeiting a digital image through a copy-move forgery is one of the most common ways of manipulating the semantic content of a picture: a portion of the image is copy-pasted elsewhere into the same image. It may happen, however, that instead of the digital image only its analog version is available. Scanned or recaptured (by a digital camera) printed documents are widely used in a number of different scenarios, for example a photo published in a newspaper or a magazine. This paper focuses on the problem of detecting and localizing copy-move forgeries in a printed picture. The copy-move manipulation is detected by verifying the presence of duplicated patches in the scanned image with a SIFT-based method tailored to the printed-image case. The print-and-scan/recapture scenario is quite challenging because it introduces several kinds of distortion. The goal is to experimentally investigate the conditions under which reliable copy-move forgery detection is possible. We carry out a series of experiments that address the different issues involved in this application scenario by considering diverse print and re-acquisition circumstances. Experimental results show that forgery detection is still successful, though with reduced performance, as expected.
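The abstract does not spell out the exact pipeline, but a generic SIFT-based check for duplicated patches can be sketched as follows with OpenCV: keypoints are matched against the same image, the trivial self-match is skipped, and spatially distant matches that survive a ratio test are flagged as copy-move candidates. The file name and the ratio/distance thresholds are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's exact, print-tailored pipeline): flag
# duplicated patches inside a single scanned image by matching SIFT
# keypoints against themselves.
import cv2
import numpy as np

img = cv2.imread("scanned_page.png", cv2.IMREAD_GRAYSCALE)  # assumed file name

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Match each descriptor against all others; k=3 lets us skip the trivial
# self-match (distance 0) and still apply a ratio test on the next two.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(descriptors, descriptors, k=3)

suspect_pairs = []
for m in matches:
    if len(m) < 3:
        continue
    first, second = m[1], m[2]          # m[0] is the keypoint matched with itself
    if first.distance < 0.6 * second.distance:      # Lowe-style ratio test (assumed 0.6)
        p1 = np.array(keypoints[first.queryIdx].pt)
        p2 = np.array(keypoints[first.trainIdx].pt)
        # Ignore matches that are spatially too close: likely texture
        # self-similarity rather than a copy-moved region.
        if np.linalg.norm(p1 - p2) > 30:            # assumed pixel threshold
            suspect_pairs.append((tuple(p1), tuple(p2)))

print(f"{len(suspect_pairs)} candidate copy-move keypoint pairs")
```

A real detector would additionally cluster the matched pairs and estimate the affine transform between the clusters to localize the duplicated region; print-and-scan distortion mainly affects the thresholds used here.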
In this paper a multi-user motion capture system is presented, in which users work from separate locations and interact in a common virtual environment. The system runs well on low-end personal computers and provides natural human/machine interaction thanks to the complete absence of markers and to weak constraints on users' clothing and environment lighting. It is suitable for everyday use, where the high precision achieved by complex commercial systems is not the principal requirement.
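The abstract does not detail the markerless pipeline; as a rough illustration of one building block commonly used in markerless capture, the sketch below extracts a per-frame user silhouette with OpenCV's MOG2 background subtractor. The camera index and subtractor parameters are assumptions, and this is not the system described in the paper.

```python
# Minimal sketch of a common markerless-capture building block:
# per-frame silhouette extraction via background subtraction.
import cv2

capture = cv2.VideoCapture(0)   # assumed: default webcam on a low-end PC
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)      # foreground mask (shadows marked as 127)
    mask = cv2.medianBlur(mask, 5)      # suppress salt-and-pepper noise
    cv2.imshow("silhouette", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```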
Broadcasters usually envision two basic applications for video databases: Live Logging and Posterity Logging. The former aims at providing effective annotation of video in quasi-real time and supports extraction of meaningful clips from the live stream; it is usually performed by assistant producers working at the same location as the event. The latter provides annotation for later reuse of video material and is the prerequisite for retrieval by content from video digital libraries; it is performed by trained librarians. Both require that annotation be performed, to a great extent, automatically. The video information structure must encompass both low- and intermediate-level video organization and the event relationships that define specific highlights and situations. Analysis of the visual data of the video stream makes it possible to extract hints, identify events, and detect highlights. All of this must be supported by a priori knowledge of the video domain and by effective reasoning engines capable of capturing the inherent semantics of the visual events.
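As a concrete, hedged example of the low-level analysis mentioned above (not the authors' actual method), the following sketch segments a stream into shots by comparing colour histograms of consecutive frames; event and highlight detection would then reason on top of such shot-level hints using domain knowledge. The video file name and the cut threshold are illustrative assumptions.

```python
# Minimal sketch of low-level logging support: shot-boundary detection
# from colour-histogram differences between consecutive frames.
import cv2

video = cv2.VideoCapture("broadcast.mp4")   # assumed file name
prev_hist = None
cuts = []
frame_idx = 0

while True:
    ok, frame = video.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    if prev_hist is not None:
        # Low correlation between consecutive histograms suggests a cut.
        similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
        if similarity < 0.5:                # assumed cut threshold
            cuts.append(frame_idx)
    prev_hist = hist
    frame_idx += 1

video.release()
print(f"detected {len(cuts)} candidate shot boundaries")
```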
To support widespread deployment and usage of content-based video retrieval (CBVR), the definition of simple (i.e., intuitive) yet powerful query interfaces must accompany the ongoing investigation of feature descriptors and of the related extraction and indexing techniques. In this paper we propose a visual query paradigm for CBVR that builds on mosaicing, a well-known technique in computer vision and graphics for creating a comprehensive overview of a scene reproduced in a set of images. The language underlying the paradigm supports querying for video shots by specifying camera motion as well as the motion and visual appearance of objects. This approach supports a consistent reproduction of the spatio-temporal nature of videos in both query specification and visualization of retrieval results, which enables users to specify and refine queries iteratively.
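To make the mosaicing step concrete, the sketch below registers two frames of a shot with a homography estimated from matched SIFT keypoints and composites them on a common plane, producing the kind of scene overview the query paradigm builds on. It is a generic OpenCV illustration, not the paper's implementation; the frame file names, ratio threshold, and canvas size are assumptions.

```python
# Minimal two-frame mosaicing sketch: match SIFT keypoints, estimate a
# homography with RANSAC, and warp one frame onto the other's plane.
import cv2
import numpy as np

frame_a = cv2.imread("frame_000.png")   # assumed frame file names
frame_b = cv2.imread("frame_030.png")
gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(gray_a, None)
kp_b, des_b = sift.detectAndCompute(gray_b, None)

matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_b, des_a, k=2)
good = [m[0] for m in matches
        if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp frame_b into frame_a's coordinate frame and paste frame_a on top;
# a canvas twice the frame width keeps the combined overview visible.
h, w = frame_a.shape[:2]
mosaic = cv2.warpPerspective(frame_b, H, (w * 2, h))
mosaic[0:h, 0:w] = frame_a
cv2.imwrite("mosaic.png", mosaic)
```

Extending the same registration over all frames of a shot yields the comprehensive background overview on which camera and object motion can be drawn and queried.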
Systems for content-based image retrieval typically support access to database images through the query-by-example paradigm, which includes query-by-image and query-by-sketch. Since query-by-sketch can be difficult in some cases (lack of sketching ability, difficulty in identifying distinguishing image features), querying is generally performed through query-by-image. A limiting factor of this paradigm is that a single sample image rarely includes all and only the characterizing elements the user is looking for. Querying with multiple examples is a possible solution to overcome this limitation. In this paper, some issues and solutions for retrieval by content using positive and negative examples are presented and discussed.
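One standard way to combine positive and negative examples, offered here only as a hedged illustration and not necessarily the formulation used in the paper, is a Rocchio-style weighted combination of example feature vectors, with results ranked by similarity to the combined query. The feature dimensionality, weights, and random data below are assumptions.

```python
# Minimal sketch of retrieval with positive and negative examples via a
# Rocchio-style combined query vector over image feature vectors.
import numpy as np

rng = np.random.default_rng(0)
database = rng.random((1000, 64))    # assumed: 1000 images, 64-d features
positives = database[[3, 17, 42]]    # features of examples the user accepts
negatives = database[[5, 99]]        # features of examples the user rejects

alpha, beta = 1.0, 0.5               # assumed weights: pull to positives, push from negatives
query = alpha * positives.mean(axis=0) - beta * negatives.mean(axis=0)

def cosine(a, b):
    # Cosine similarity, with a small epsilon to avoid division by zero.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Rank database images by similarity to the combined query.
scores = np.array([cosine(query, feat) for feat in database])
top10 = np.argsort(scores)[::-1][:10]
print("top-10 image indices:", top10)
```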
Conference Committee Involvement (13)
Mobile Devices and Multimedia: Enabling Technologies, Algorithms, and Applications 2015
10 February 2015 | San Francisco, California, United States
Mobile Devices and Multimedia: Enabling Technologies, Algorithms, and Applications 2014
3 February 2014 | San Francisco, California, United States
Multimedia Content Access: Algorithms and Systems VII
4 February 2013 | Burlingame, California, United States
Multimedia Content Access: Algorithms and Systems VI
23 January 2012 | Burlingame, California, United States
Multimedia Content Access: Algorithms and Systems V
25 January 2011 | San Francisco Airport, California, United States
Multimedia Content Access: Algorithms and Systems IV
21 January 2010 | San Jose, California, United States
Multimedia Content Access: Algorithms and Systems III
21 January 2009 | San Jose, California, United States
Multimedia Content Access: Algorithms and Systems II
30 January 2008 | San Jose, California, United States
Multimedia Content Access: Algorithms and Systems
31 January 2007 | San Jose, California, United States
Internet Imaging VII
18 January 2006 | San Jose, California, United States
Internet Imaging VI
19 January 2005 | San Jose, California, United States
Internet Imaging V
19 January 2004 | San Jose, California, United States