Thousands of sensors are connected to the Internet, and many of these sensors are cameras. The “Internet of Things” will contain many “things” that are image sensors. This vast network of distributed cameras (i.e., webcams) will continue to grow exponentially. In this paper we examine simple methods for classifying an image from a webcam as “indoor/outdoor” and as containing “people/no people” based on simple features. We use four types of image features to classify an image as indoor/outdoor: color, edge, line, and text. To classify an image as containing people/no people we use HOG and texture features. The features are weighted based on their significance and combined, and a support vector machine is used for classification. Our system, with feature weighting and feature combination, yields 95.5% accuracy.
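As a concrete illustration, the sketch below shows one way the weighted feature combination and SVM classification described above could be wired together with scikit-learn. The feature blocks, their dimensions, and the significance weights are hypothetical placeholders, not the paper's actual values.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def combine_features(blocks, weights):
    """Scale each feature block by its significance weight, then concatenate."""
    return np.hstack([w * b for b, w in zip(blocks, weights)])

# Hypothetical pre-extracted feature blocks (color, edge, line, text),
# one row per image; real extractors would replace the random data.
rng = np.random.default_rng(0)
n_images = 200
color, edge, line, text = (rng.normal(size=(n_images, d)) for d in (64, 32, 16, 8))
labels = rng.integers(0, 2, size=n_images)      # 1 = indoor, 0 = outdoor

weights = [0.4, 0.3, 0.2, 0.1]                  # assumed significance weights
X = combine_features([color, edge, line, text], weights)

# SVM classifier on the weighted, combined features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```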
The number of network cameras has grown rapidly in recent years. Thousands of public network cameras provide a tremendous amount of visual information about the environment, and there is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale, analyzing the data from more than 65,000 cameras worldwide. The paper focuses on how to use both the system's website and its Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras (e.g., different brands and resolutions) and allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.
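The system's actual API is not reproduced here; the sketch below only illustrates the general pattern the abstract describes, in which an existing single-frame analysis program is wrapped as a callback and applied to frames from the selected cameras. The driver function and all names are hypothetical.

```python
import cv2
import numpy as np

def analyze_frame(frame: np.ndarray) -> dict:
    """An existing single-frame analysis program: here, simple edge density."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    return {"edge_density": float(edges.mean() / 255.0)}

def run_on_cameras(camera_urls, analyze):
    """Hypothetical driver: fetch one snapshot per selected camera and
    apply the user's analysis function, hiding camera heterogeneity."""
    results = {}
    for url in camera_urls:
        cap = cv2.VideoCapture(url)     # handles many brands and protocols
        ok, frame = cap.read()
        cap.release()
        if ok:
            results[url] = analyze(frame)
    return results

# Usage: results = run_on_cameras(["http://.../snapshot.jpg"], analyze_frame)
```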
Planning a trip requires considering many unpredictable factors along the route, such as traffic, weather, and accidents. People are interested in viewing the places they plan to visit and the routes they plan to take. This paper presents a system with an Android mobile application that allows users to: (i) watch the live feeds (videos or snapshots) from more than 65,000 geotagged public cameras around the world, selecting the cameras on an interactive world map; and (ii) search for and watch the live feeds from the cameras along the route between a starting point and a destination. The system consists of a server, which maintains a database with the cameras' information, and a mobile application, which shows the camera map and communicates with the cameras. To evaluate the system, we compare it with existing systems in terms of the total number of cameras, the cameras' coverage, and the number of cameras on various routes. We also discuss the response time of loading the camera map, finding the cameras on a route, and communicating with the cameras.
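A minimal sketch of the cameras-on-a-route query follows, assuming the route is given as a sequence of (lat, lon) waypoints and each camera record carries a geotag. The flat-earth distance approximation and the search radius are illustrative simplifications, not the system's actual algorithm.

```python
import math

def dist_km(p, q):
    """Approximate distance between (lat, lon) points, in kilometers,
    using an equirectangular projection (adequate at city scale)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    x = (lon2 - lon1) * math.cos(0.5 * (lat1 + lat2))
    return 6371.0 * math.hypot(x, lat2 - lat1)

def cameras_on_route(cameras, waypoints, radius_km=2.0):
    """Return the cameras within radius_km of any route waypoint."""
    return [c for c in cameras
            if any(dist_km(c["loc"], w) <= radius_km for w in waypoints)]

cams = [{"id": 1, "loc": (40.42, -86.91)},   # hypothetical camera records
        {"id": 2, "loc": (41.88, -87.63)}]
route = [(40.43, -86.92), (40.80, -87.20), (41.88, -87.62)]
print(cameras_on_route(cams, route))         # both cameras lie near this route
```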
As images are increasingly used in wireless communication on devices such as mobile phones and PDAs, it is important to reduce the energy consumed in transmitting and receiving them. This energy is approximately proportional to the sizes (numbers of bytes) of the images. Many existing techniques aim to improve compression ratios while preserving image fidelity, with no perceivable differences. In this paper, we propose a new approach that allows visible distortion in the images. Our method eliminates or reduces fine details (textures) so that the new images have smaller file sizes and require less energy to transmit or receive. Even though the new images may be visually different, the essential information is preserved. Our experiment uses 400 images and achieves up to a 40.1% reduction in file sizes, with an average reduction of 32.2%.
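The paper's actual texture-elimination method is not reproduced here; the sketch below uses a bilateral filter merely as a stand-in that removes fine detail while preserving strong edges, then re-encodes the image to measure the resulting size reduction.

```python
import cv2
import numpy as np

def encoded_size(img, quality=90):
    """Size in bytes of the JPEG encoding of an image."""
    ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, quality])
    return len(buf)

# A synthetic textured image; any photograph could be substituted.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(480, 640, 3)).astype(np.uint8)

# Smooth away fine detail while keeping strong edges.
smoothed = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

before, after = encoded_size(img), encoded_size(smoothed)
print(f"file-size reduction: {100.0 * (before - after) / before:.1f}%")
```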
Recent trends have created new challenges in the presentation of multimedia information. First, large, high-resolution video displays are increasingly popular. Meanwhile, many mobile devices, such as PDAs and mobile telephones, can display images and videos on small screens. One obvious issue is that content designed for a large display is inappropriate for a small display. Moreover, wireless bandwidth and battery lifetime are precious resources on mobile devices. In order to provide useful content across systems with different resources, we propose "resource-driven content adaptation": augmenting the content with metadata that can be used to display or render it based on the available resources. We are investigating several problems related to resource-driven content adaptation, including adaptation of the presented content based on the available resources: display resolution, bandwidth, processor speed, quality of service, and energy. Content adaptation may add or remove information based on available resources. Adaptive content can utilize resources more effectively but also presents challenges in resource management, content creation, transmission, and user perception.
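As an illustration of the idea, the sketch below selects a content variant from metadata according to the device's display resolution, bandwidth, and battery state. The metadata schema and the selection policy are assumptions for illustration, not the proposed system's design.

```python
# Hypothetical variant metadata attached to one piece of content.
variants = [
    {"res": (1920, 1080), "kbps": 4000, "label": "large-display"},
    {"res": (640, 360),   "kbps": 800,  "label": "mobile"},
    {"res": (320, 180),   "kbps": 200,  "label": "low-power"},
]

def pick_variant(display_w, bandwidth_kbps, battery_low=False):
    """Choose the richest variant the device's resources can support."""
    feasible = [v for v in variants
                if v["res"][0] <= display_w and v["kbps"] <= bandwidth_kbps]
    if battery_low and feasible:
        return min(feasible, key=lambda v: v["kbps"])   # trade quality for energy
    return max(feasible, key=lambda v: v["kbps"], default=variants[-1])

print(pick_variant(display_w=800, bandwidth_kbps=1000))  # -> the "mobile" variant
```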
As mobile systems (such as laptops and mobile telephones) continue to grow, navigation assistance and location-based services are becoming increasingly important. Existing technologies allow mobile users to access Internet services (e.g., email and web surfing) and simple multimedia services (e.g., music and video clips), and to make telephone calls. However, the potential of advanced multimedia services has not been fully developed, especially multimedia for navigation or location-based services. At Purdue University, we are developing an image database, known as LAID, in which every image is annotated with its location, compass heading, acquisition time, and weather conditions. LAID can be used to study several types of navigation problems. First, a mobile user can take an image and transmit it to the LAID server, which compares the image with the images stored in the database to determine where the user is located; we refer to this as the "forward" navigation problem. The second type of problem is to provide a "virtual tour on demand": a user inputs a starting and an ending address, and LAID retrieves the images along a route that connects the two. This is a generalization of route planning. Our database currently contains over 20,000 images and covers approximately 25% of the city of West Lafayette, Indiana.
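A minimal sketch of the "forward" navigation query follows: the search is first restricted to database images whose annotated locations lie near the user's rough position, and the nearby candidates are then ranked by visual similarity. ORB matching stands in here for whatever matcher LAID actually uses; the record schema is hypothetical.

```python
import math
import cv2

def dist_km(p, q):
    """Equirectangular distance approximation between (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    x = (lon2 - lon1) * math.cos(0.5 * (lat1 + lat2))
    return 6371.0 * math.hypot(x, lat2 - lat1)

def match_score(query, candidate):
    """Number of cross-checked ORB matches between two grayscale images."""
    orb = cv2.ORB_create()
    _, d1 = orb.detectAndCompute(query, None)
    _, d2 = orb.detectAndCompute(candidate, None)
    if d1 is None or d2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return len(matcher.match(d1, d2))

def locate(query_img, rough_loc, db, radius_km=0.5):
    """db: list of {"img": grayscale array, "loc": (lat, lon)} records.
    Prune by annotated location first, then rank by visual similarity."""
    nearby = [r for r in db if dist_km(r["loc"], rough_loc) <= radius_km]
    return max(nearby, key=lambda r: match_score(query_img, r["img"]),
               default=None)
```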
In this paper we describe some of the research issues and challenges in image-based location awareness and navigation. We describe two systems being developed at Purdue University as testbeds for our ideas. The main system architecture combines image processing, mobility, wireless communication, and location awareness. We describe two fundamental scenarios for using images to aid in mobile navigation. The first provides the ability to use a locally acquired image to determine the identity of an object, for example a building, as one roams in an area. The second is the use of images in a database to aid in vehicle navigation. The solutions to both problems use location information, such as GPS signals, to compare and search location-annotated images in a database. We believe location information can improve the accuracy of image database search.