Commercial multispectral satellite datasets, such as WorldView-2 and GeoEye-1 images, are often delivered with a high-spatial-resolution panchromatic image (PAN) as well as a corresponding lower-resolution multispectral image (MSI). Certain fine features are only visible on the PAN but are difficult to discern on the MSI. To fully utilize the high spatial resolution of the PAN and the rich spectral information from the MSI, a pan-sharpening process can be carried out. However, difficulties arise in maintaining radiometric accuracy, particularly for applications other than visual assessment. We propose a fast pan-sharpening process based on nearest-neighbor diffusion with the aim of enhancing the salient spatial features while preserving spectral fidelity. Our approach assumes that each pixel spectrum in the pan-sharpened image is a weighted linear mixture of the spectra of its immediate neighboring superpixels; it treats each spectrum as its smallest element of operation, which differs from most existing algorithms that process each band separately. Our approach is shown to be capable of preserving salient spatial and spectral features. We expect this algorithm to facilitate fine feature extraction from satellite images.
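As a rough illustration of the mixture model described above, the sketch below forms each pan-sharpened pixel spectrum as a weighted linear combination of the spectra of its neighboring pixels, with weights derived from similarity in the PAN image. It is written in Python with NumPy; the Gaussian weighting, the neighborhood size, and the parameter names are illustrative assumptions rather than the authors' actual nearest-neighbor-diffusion formulation.

    import numpy as np

    def nn_diffusion_sharpen(pan, msi_up, sigma=10.0, k=1):
        """pan: (H, W) panchromatic image; msi_up: (H, W, B) MSI upsampled to the PAN grid.
        sigma and the (2k+1) x (2k+1) neighborhood are illustrative parameters."""
        pan = pan.astype(float)
        msi_up = msi_up.astype(float)
        H, W, B = msi_up.shape
        out = np.zeros_like(msi_up)
        for i in range(H):
            for j in range(W):
                i0, i1 = max(i - k, 0), min(i + k + 1, H)
                j0, j1 = max(j - k, 0), min(j + k + 1, W)
                # Weight each neighbor by how similar its PAN value is to the center pixel.
                diff = pan[i0:i1, j0:j1] - pan[i, j]
                w = np.exp(-(diff ** 2) / (2.0 * sigma ** 2))
                w /= w.sum()
                # Pan-sharpened spectrum = weighted linear mixture of neighboring spectra.
                out[i, j] = np.tensordot(w, msi_up[i0:i1, j0:j1], axes=([0, 1], [0, 1]))
        return out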
Feature extraction from satellite imagery is a challenging topic. Commercial multispectral satellite datasets, such as WorldView-2 images, are often delivered with a high-spatial-resolution panchromatic image (PAN) as well as a corresponding lower-resolution multispectral image (MSI). Certain fine features are only visible on the PAN but are difficult to discern on the MSI. To fully utilize the high spatial resolution of the PAN and the rich spectral information from the MSI, a pan-sharpening process can be carried out. In this paper, we propose a novel and fast pan-sharpening process based on anisotropic diffusion, with the aim of aiding feature extraction by enhancing salient spatial features. Our approach assumes that each pixel spectrum in the pan-sharpened image is a weighted linear mixture of the spectra of its immediate neighboring superpixels; it treats each spectrum as its smallest element of operation, which differs from most existing algorithms that process each band separately. Our approach is shown to be capable of preserving salient features. In addition, the process is highly parallel, consisting of intensive neighborhood operations, and has been implemented on a general-purpose GPU with the NVIDIA CUDA architecture, achieving an approximately 25-fold speedup for our setup. We expect this algorithm to facilitate fine feature extraction from satellite images.
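For intuition about the diffusion step and why it parallelizes so well, the following sketch performs one anisotropic-diffusion iteration in which whole pixel spectra are updated at once, with conduction weights driven by PAN differences. The Perona-Malik edge-stopping function, the parameters kappa and lam, and the boundary handling are assumptions for illustration; they are not the paper's formulation or its CUDA implementation.

    import numpy as np

    def diffusion_step(spectra, pan, kappa=20.0, lam=0.2):
        """One diffusion iteration. spectra: (H, W, B) current estimate; pan: (H, W) guide image."""
        def g(d):
            # Perona-Malik edge-stopping function driven by PAN differences.
            return np.exp(-(d / kappa) ** 2)

        pan = pan.astype(float)
        spectra = spectra.astype(float)
        out = spectra.copy()
        # Four nearest neighbors; np.roll wraps at the borders, which a real
        # implementation would handle explicitly.
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            pan_n = np.roll(pan, shift, axis=axis)
            spec_n = np.roll(spectra, shift, axis=axis)
            c = g(pan_n - pan)[..., None]        # conduction weight per pixel
            out += lam * c * (spec_n - spectra)  # diffuse whole spectra at once
        return out

Because every pixel's update depends only on its immediate neighbors, each output pixel can be computed independently, which is what makes a GPU implementation attractive.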
A novel approach for automated road network extraction from multispectral WorldView-2 imagery using a knowledge-based system is presented. This approach uses a multispectral flood-fill technique to extract asphalt pixels from satellite images, followed by identification of prominent curvilinear structures using template matching. The extracted curvilinear structures provide an initial estimate of the road network, which is refined by the knowledge-based system. This system breaks the curvilinear structures into small segments and then groups them using a set of well-defined rules; a saliency check is then performed to prune the road segments. As a final step, these segments, carrying road width and orientation information, can be reconstructed to generate a proper road map. The approach is shown to perform well on a variety of urban and suburban scenes, and it can also be deployed to extract the road network in large-scale scenes.
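To make the first step concrete, here is a hedged sketch of a multispectral flood fill that grows an asphalt mask outward from a single seed pixel by accepting spectrally similar neighbors. The spectral-angle test, its threshold, and the 4-connectivity are illustrative choices, not the paper's exact criteria.

    import numpy as np
    from collections import deque

    def spectral_flood_fill(img, seed, angle_thresh=0.05):
        """img: (H, W, B) multispectral image; seed: (row, col) of a known asphalt pixel."""
        H, W, _ = img.shape
        ref = img[seed].astype(float)            # reference asphalt spectrum
        mask = np.zeros((H, W), dtype=bool)
        mask[seed] = True
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < H and 0 <= cc < W and not mask[rr, cc]:
                    v = img[rr, cc].astype(float)
                    cos = np.dot(v, ref) / (np.linalg.norm(v) * np.linalg.norm(ref) + 1e-12)
                    # Accept the neighbor if its spectral angle to the seed is small.
                    if np.arccos(np.clip(cos, -1.0, 1.0)) < angle_thresh:
                        mask[rr, cc] = True
                        queue.append((rr, cc))
        return mask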
Heavy rain from Tropical Storm Lee resulted in a major flood event for the southern tier of New York State in early September 2011, causing the evacuation of approximately 20,000 people in and around the city of Binghamton. In support of the New York State Office of Emergency Management, a high-resolution multispectral airborne sensor (WASP) developed by RIT was deployed over the flooded area to collect aerial images. One of the key benefits of these images is that they enable flood inundation area mapping. However, these images require a significant amount of storage space, and the inundation mapping process is conventionally carried out using manual digitization. In this paper, we design an automated approach for flood inundation mapping from the WASP airborne images. This method employs the Spectral Angle Mapper (SAM) on color RGB or multispectral aerial images to extract a binary flood map; it then applies a set of morphological operations and a boundary vectorization technique to convert the binary map into a shapefile. This technique is relatively fast and only requires the operator to select one pixel on the image. The generated shapefile is much smaller than the original image and can be imported into most GIS software packages. This enables critical flood information to be shared with and by disaster response managers very rapidly, even over cellular phone networks.
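A minimal sketch of the SAM-plus-morphology portion of this workflow is given below, assuming SciPy is available; the angle threshold and structuring-element size are illustrative, and the boundary vectorization and shapefile export are omitted.

    import numpy as np
    from scipy import ndimage

    def flood_mask_from_seed(img, seed, angle_thresh=0.10):
        """img: (H, W, B) RGB or multispectral aerial image; seed: (row, col) of a water pixel."""
        ref = img[seed].astype(float)
        flat = img.reshape(-1, img.shape[2]).astype(float)
        cos = flat @ ref / (np.linalg.norm(flat, axis=1) * np.linalg.norm(ref) + 1e-12)
        angle = np.arccos(np.clip(cos, -1.0, 1.0)).reshape(img.shape[:2])
        mask = angle < angle_thresh                            # Spectral Angle Mapper test
        # Morphological clean-up: remove speckle, then fill small gaps.
        mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
        mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
        return mask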
We present an approach for automatically building a road network graph from multispectral WorldView-2 images in suburban and urban areas. In this graph, the road parts are represented by edges and their connectivity by vertices. The approach consists of an image processing chain utilizing both high-resolution spatial features and multiband spectral signatures from satellite images. Based on an edge-preserving filtered image, a two-pass spatial-spectral flood-fill technique is adopted to extract a road class map. This technique requires only one pixel as the initial training set and collects pixels that are spatially adjacent and spectrally similar to the initial point as a second-level training set for higher-accuracy asphalt classification. Based on the road class map, a road network graph is built after passing through a curvilinear detector and a knowledge-based system. The graph provides a logical representation of the road network in an urban image. Rules can be defined to filter salient road parts of different widths as well as to rule out parking lots from the asphalt class map. The joint spatial-spectral approach we propose here is capable of building a road network connectivity graph, which lays a foundation for further road-related tasks.
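As a toy illustration of the kind of graph described above (not the paper's curvilinear detector or knowledge-based system), the sketch below skeletonizes a binary road class map and connects adjacent skeleton pixels into a connectivity graph, assuming scikit-image and NetworkX are available.

    import numpy as np
    import networkx as nx
    from skimage.morphology import skeletonize

    def road_graph(road_mask):
        """road_mask: (H, W) boolean road/asphalt class map; returns a pixel-level connectivity graph."""
        skel = skeletonize(road_mask)
        H, W = skel.shape
        g = nx.Graph()
        for r, c in zip(*np.nonzero(skel)):
            g.add_node((r, c))
            # Forward half of the 8-neighborhood avoids adding each edge twice.
            for dr, dc in ((0, 1), (1, -1), (1, 0), (1, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < H and 0 <= cc < W and skel[rr, cc]:
                    g.add_edge((r, c), (rr, cc))
        # Vertices with degree > 2 correspond to candidate road intersections.
        return g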
On March 11, 2011, the magnitude-9 Tohoku earthquake and resulting tsunami struck off the coast of Japan. An estimated 400,000 or more people were displaced from their homes, and the damage to the coastline and nearby urban areas was extensive. Additionally, the combined effects of the earthquake and tsunami caused damage to the Fukushima Dai'ichi Nuclear Power Station. As part of the International Charter "Space and Major Disasters", the US Geological Survey coordinated a volunteer effort to aid in the response to the disaster. The goal of the project was to produce maps derived from civilian (NASA Landsat and ASTER) and commercially available (DigitalGlobe and GeoEye) high-resolution satellite imagery to be delivered to the Japanese authorities. RIT, as part of our Information Products Laboratory for Emergency Response (IPLER) program, was one of the organizations involved in this effort. This paper describes the timeline of the response, the challenges faced in this effort, the workflow developed, and the products that were distributed. Lessons learned from the response are also described to aid the remote sensing community in preparing for responses to future natural disasters.
In this paper, we present a new approach to filtering high-spatial-resolution multispectral imagery (MSI) or hyperspectral imagery (HSI) for the purpose of classification and segmentation. Our approach is inspired by the bilateral filtering method, which smooths gray-scale and color images while preserving important edges. To achieve a similar goal for MSI/HSI, we build a nonlinear tri-lateral filter that takes into account both spatial and spectral similarities. Our approach works on a pixel-by-pixel basis; the spectrum of each pixel in the filtered image is the combination of the spectra of its adjacent pixels in the original image, weighted by three factors: geometric closeness, spectral Euclidean distance, and spectral angle separation. The approach reduces small clutter across the image while keeping edges with strong contrast. The improvement of our method is that we use spectral intensity differences together with spectral angle separation as the closeness metric, thus preserving edges caused by different materials as well as edges between similar materials with intensity differences. A k-means classifier is applied to the filtered image, and the results show that our approach produces a much less cluttered class map. Results are shown using imagery from the DigitalGlobe WorldView-2 multispectral sensor and the HYDICE hyperspectral sensor. This approach could also be expanded to facilitate feature extraction from MSI/HSI.
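The following per-pixel sketch illustrates the three-factor weighting described above, combining geometric closeness, spectral Euclidean distance, and spectral angle separation into a single weight; the Gaussian kernels and the parameters sigma_s, sigma_d, and sigma_a are assumptions for demonstration, not the paper's tuned settings.

    import numpy as np

    def trilateral_pixel(img, i, j, k=2, sigma_s=2.0, sigma_d=50.0, sigma_a=0.1):
        """img: (H, W, B) MSI/HSI cube; returns the filtered spectrum at pixel (i, j)."""
        H, W, B = img.shape
        center = img[i, j].astype(float)
        num, den = np.zeros(B), 0.0
        for r in range(max(i - k, 0), min(i + k + 1, H)):
            for c in range(max(j - k, 0), min(j + k + 1, W)):
                v = img[r, c].astype(float)
                d_spatial = (r - i) ** 2 + (c - j) ** 2
                d_euclid = np.sum((v - center) ** 2)
                cos = np.dot(v, center) / (np.linalg.norm(v) * np.linalg.norm(center) + 1e-12)
                angle = np.arccos(np.clip(cos, -1.0, 1.0))
                # Three factors: geometric closeness, spectral Euclidean distance, spectral angle.
                w = (np.exp(-d_spatial / (2 * sigma_s ** 2))
                     * np.exp(-d_euclid / (2 * sigma_d ** 2))
                     * np.exp(-(angle ** 2) / (2 * sigma_a ** 2)))
                num += w * v
                den += w
        return num / den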