This PDF file contains the front matter associated with SPIE Proceedings Volume 8180, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
In this paper, the characteristics of pansharpening methods based on pixel modulation are investigated. It is found that all modulation-based fusion methods benefit significantly, both visually and numerically, from an additional constraint on the modulus of the detail vector, imposed by means of a damping factor. Experiments on VHR MS+Pan data sets from different instruments show that this factor is always lower than one and depends on the instrument as well as on the landscape. Instead of trial-and-error optimization, the value of the damping factor can be determined by minimizing a measure of the spatial distortion of the fusion products. Sample values are reported for the IKONOS and QuickBird instruments and different types of land cover.
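As a rough illustration of the damping idea, the sketch below applies a Brovey-style pixel modulation in which the Pan/low-pass-Pan ratio is attenuated by a damping exponent before multiplying the MS bands. The function name, the exponent form of the constraint, and all numeric values are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def modulation_fuse(ms, pan, pan_low, damping=0.8):
    """Pixel-modulation pansharpening with a damping factor (< 1).

    ms:      (bands, H, W) multispectral image upsampled to Pan size
    pan:     (H, W) panchromatic image
    pan_low: (H, W) low-pass version of pan matched to MS resolution
    The modulation ratio pan/pan_low is attenuated by the damping
    exponent before multiplying each MS band (illustrative variant).
    """
    eps = 1e-6
    ratio = (pan + eps) / (pan_low + eps)
    damped = np.power(ratio, damping)   # damping constrains the detail modulus
    return ms * damped[None, :, :]

# tiny example: flat MS, random Pan detail
ms = np.ones((4, 8, 8))
pan = np.random.default_rng(0).uniform(0.5, 1.5, (8, 8))
pan_low = np.full((8, 8), 1.0)
fused = modulation_fuse(ms, pan, pan_low, damping=0.7)
```

With damping below one, the injected detail is always weaker than in the undamped ratio fusion, which is the visual effect the constraint is meant to produce.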
The use of several images of various modalities has proven useful for solving problems arising in many different applications of remote sensing. The main reason is that each image of a given modality conveys its own specific information, which can be integrated into a single model in order to improve our knowledge of a given area. With the large amount of available data, any integration task must be performed automatically. At the very first stage of an automated integration process, a rather direct problem arises: given a region of interest within a first image, the question is how to find its counterpart within a second image acquired over the same scene but with a different modality. This problem is difficult because the decision to match two regions must rely on the part of the information common to the two images, even if their modalities are quite different. In this paper, we propose a new method to address this problem.
This paper presents the results of estimating dwelling structures in the Al Salam IDP camp, Southern Darfur, from very high resolution multispectral satellite images through the application of mathematical morphology analysis. A series of image processing procedures, feature extraction methods and textural analyses were applied in order to provide reliable information about dwelling structures. One issue in this context is the similarity of the spectral response of thatched dwelling roofs and their surroundings in IDP camps, which makes the exploitation of multispectral information crucial. This study shows the advantage of an automatic extraction approach and highlights the importance of detailed spatial and spectral information analysis based on a multi-temporal dataset. Fusing the high-resolution panchromatic band with the lower-resolution multispectral bands of the WorldView-2 satellite further improves the results. The approach can therefore be useful to humanitarian aid agencies, supporting decisions and population estimates, especially in situations when frequent revisits by spaceborne imaging systems are the only possibility for continued monitoring.
The modulation transfer function (MTF) is applied to high frequency modulation fusion in this paper. First, MTFs are calculated using the edge method, and two-dimensional MTF filters are designed accordingly. Second, the MTF filters are used to degrade the original high-resolution images. High frequency modulation fusion parameters are then obtained under the minimum mean square error criterion. The results show that fused images derived from the improved high frequency modulation method based on the MTF have spatial resolution close to that of the non-degraded Pan images. Compared with fusion by weighted high-pass filtering (w-HPF) and the MTF general image fusion framework (MTF-GIF), the improved method performs well in terms of preservation of both spectral information and spatial resolution.
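The high frequency modulation scheme can be sketched as follows; here the MTF-matched low-pass filter is deliberately replaced by a crude box kernel, so the filter shape, function names and kernel size are illustrative only, not the paper's edge-method-derived filters.

```python
import numpy as np

def box_blur(img, k=3):
    """Crude stand-in for an MTF-matched low-pass filter (box kernel)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hfm_fuse(ms_band, pan, k=3):
    """High-frequency-modulation fusion: inject the Pan detail ratio
    pan / lowpass(pan) into the MS band (the sensor MTF is approximated
    by the box kernel here, a deliberate simplification)."""
    eps = 1e-6
    pan_low = box_blur(pan, k)
    return ms_band * (pan + eps) / (pan_low + eps)

# sanity example: a flat Pan image carries no detail to inject
pan = np.full((6, 6), 2.0)
ms = np.arange(36, dtype=float).reshape(6, 6)
fused = hfm_fuse(ms, pan)
```

When the Pan image is flat, the modulation ratio is one everywhere and the MS band passes through unchanged, which is the expected degenerate behavior of any high frequency modulation scheme.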
Previously, we proposed a color enhancement method that enhances spectral features in multispectral images without changing the average color distribution, since the natural color of an object is sometimes important. In the proposed method, a user can specify both the spectral band from which to extract the spectral feature and the color in which to visualize it. The method can visualize spectral features even if the wavelength of the specified band lies in the non-visible range or the image has a large number of spectral bands; that is, the enhancement method is also effective for hyperspectral images, which are often used in remote sensing applications. If meaningful spectral features can be found in such images, they might be employed as novel indices. In this paper, we apply the enhancement method to hyperspectral images of rice paddies and trees. Examining the enhanced results, we found spectral features that might be useful for discriminating different species, and even different conditions within the same species. The results suggest that these features could serve as novel indices or support other remote sensing applications.
3D models often lack a photorealistic appearance, due to the low quality of the acquired texture or to its complete absence. Moreover, especially in the case of reality-based models, it is often of specific interest to texture them with images other than photographs, such as multispectral/multimodal views (infrared, X-ray, UV fluorescence, etc.) or images taken at different moments in time. In this work, a fully automatic approach to texture mapping is proposed. The method relies on the automatic extraction, from the model geometry, of appropriate depth maps in the form of images whose pixels maintain an exact correspondence with the vertices of the 3D model. A multiresolution greedy method is then proposed to generate the candidate depth maps to be related to the given texture. To select the best match, a suitable similarity measure based on Maximization of Mutual Information (MMI) is computed. 3D texturing is then applied to the portion of the model that is visible in the texture.
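The MMI criterion rests on mutual information between two images; a compact histogram-based estimator is sketched below. The bin count and the synthetic data are illustrative assumptions, but the quantity computed is the standard one: high when the two images share structure (as a depth map and a matching texture do across modalities), low when they do not.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=16):
    """Mutual information between two equally sized images, estimated
    from their joint grey-level histogram -- the quantity maximized
    (MMI) when matching a rendered depth map to a texture image."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

rng = np.random.default_rng(4)
a = rng.uniform(size=(64, 64))
related = a + 0.05 * rng.normal(size=a.shape)   # same content, other "modality"
unrelated = rng.uniform(size=(64, 64))
```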
RapidEye AG is a commercial provider of geo-spatial information products derived from Earth observation image data.
The source of this data is the RapidEye constellation of five low-earth-orbit imaging satellites. Image data from satellite
electro-optical sensors contains spatial artifacts such as banding and streaking that are caused by detector responsivity
variations, factors related to image formation, and the space environment. This paper describes the results of a relative
radiometric calibration and correction campaign that was conducted between March and July 2011 using the side-slither
technique. Radiometrically uniform terrestrial scenes that included desert and snow/ice regions were imaged with a
RapidEye sensor in a ninety-degree yaw orbital configuration. In this configuration each detector on the focal plane was
positioned parallel to the ground-track direction thereby exposing each detector to the light reflected from the same
segment of the ground. This maneuver produced a radiometrically flat-field input to the sensor so that the relative
response of each detector was determined for the same exposure level. Side-slither derived detector correction
parameters were then used to improve the quality of RapidEye imagery that contained noticeable spatial artifacts. A
significant improvement in image correction was achieved when compared to our standard correction procedures.
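The flat-field logic of the maneuver can be condensed as follows: in side-slither geometry every detector (image column) sees the same ground strip, so a per-detector relative gain can be taken as the ratio of the global mean to each column mean. The simulated radiance and responsivity values below are invented purely for illustration.

```python
import numpy as np

def side_slither_gains(flat_scene):
    """Relative gains from a side-slither acquisition: every column
    (detector) saw the same ground strip, so column means should agree.
    flat_scene: (rows, detectors) array from the 90-degree-yaw maneuver."""
    col_means = flat_scene.mean(axis=0)
    return col_means.mean() / col_means   # per-detector correction gain

# simulate detector responsivity variation (column streaking)
rng = np.random.default_rng(1)
true_radiance = 100.0
gains_true = rng.uniform(0.9, 1.1, 50)          # detector non-uniformity
scene = true_radiance * gains_true[None, :] * np.ones((200, 50))
corr = side_slither_gains(scene)
corrected = scene * corr[None, :]               # streaking removed
```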
Support Vector Machines (SVMs) have been widely adopted by the remote sensing community in the last decade. The standard algorithm has mainly been applied to image classification tasks, and many advanced developments based on SVMs have been introduced as well. This paper, nevertheless, revisits the standard formulation of the SVM. An important part of the paper is devoted to intuition about the components of the SVM: the cost, the regularizer and the free parameters. Finally, the paper reviews three simple modifications well suited to remote sensing image classification: constraining the margin, including invariances, and exploiting the information of unlabeled samples. Examples are given to illustrate these concepts.
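The cost-plus-regularizer structure of the standard formulation can be made concrete with a toy primal solver: minimize 0.5*||w||^2 (the regularizer) plus C times the hinge loss (the cost) by subgradient descent. This is an illustrative sketch of the objective only, not the usual dual/SMO solver, and all values are invented.

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """Primal soft-margin SVM by subgradient descent: minimize
    0.5*||w||^2 (regularizer) + C * sum of hinge losses (cost).
    Labels y in {-1, +1}. Illustrative, not a production solver."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                      # margin violations
        grad_w = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        grad_b = -C * y[viol].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# linearly separable toy data
X = np.array([[2., 2.], [3., 3.], [-2., -2.], [-3., -3.]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
```

The free parameter C trades the two terms off: large C punishes margin violations, small C favors a wider (more regularized) margin.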
This paper presents a comparative study in order to analyze active learning (AL) and semi-supervised learning (SSL) for
the classification of remote sensing (RS) images. The two learning paradigms are analyzed both from the theoretical and
experimental point of view. The aim of this work is to identify the advantages and disadvantages of AL and SSL
methods, and to point out the boundary conditions on the applicability of these methods with respect to both the number
of available labeled samples and the reliability of classification results. In our experimental analysis, AL and SSL
techniques have been applied to the classification of both synthetic and real RS data, defining different classification
problems starting from different initial training sets and considering different distributions of the classes. This analysis
allowed us to derive important conclusions about the use of these classification approaches and to gain insight into which of the two approaches is more appropriate according to the specific classification problem, the available initial training set, and the available budget for the acquisition of new labeled samples.
One of the problems in semi-supervised land classification tasks lies in improving classification results without increasing
the number of pixels to be labeled. This would be possible if, instead of increasing the amount of data, we increased its reliability. We suggest replacing random selection with an unsupervised clustering-based selection strategy for building the training data. We use a mode-seeking clustering method to search for cluster representatives,
which will be labeled and then used for training. Here an improvement to the result of the clustering algorithm
is introduced by taking advantage of the spatial information in the image. The number of selected samples provided
by the clustering can be reduced by using a spatial-density criterion to dismiss redundant training information. Two
different alternatives are considered for a spatial criterion, one dismisses selected samples in the same neighbourhood
and the other includes the pixel coordinates for giving the spatial information a larger weight in the clustering. Both
alternatives improve the classification-segmentation results. The classification scheme with training selection provides
state-of-the-art pixel classification results using a smaller training set and suggests an alternative to random selection.
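The first spatial alternative (dismissing selected samples in the same neighbourhood) reduces to a greedy minimum-distance test over candidate pixel coordinates. The threshold and coordinates below are invented for illustration.

```python
import numpy as np

def dismiss_spatial_neighbours(coords, min_dist=3.0):
    """Spatial-density criterion, greatly simplified: greedily keep a
    candidate training pixel only if no already-kept pixel lies within
    min_dist of it, discarding spatially redundant candidates."""
    kept = []
    for c in coords:
        if all(np.hypot(*(c - k)) >= min_dist for k in kept):
            kept.append(c)
    return np.array(kept)

# candidate cluster representatives (row, col); two tight pairs
cands = np.array([[0, 0], [1, 1], [10, 10], [10, 11], [20, 0]], float)
kept = dismiss_spatial_neighbours(cands, min_dist=3.0)
```

Of each near-duplicate pair only the first survives, shrinking the set to be hand-labeled without losing spatial coverage.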
This paper addresses the problem of land-cover maps updating by classifying multitemporal remote sensing images (i.e.,
images acquired on the same area at different times) in the context of change-detection-driven active transfer learning.
The proposed method is based on the assumption that training samples are available for one of the available
multitemporal images (i.e., source domain), whereas they are not for the others (i.e., target domain). In order to
effectively classify the target domain (i.e., update the maps obtained for the source domain according to the new
information brought from another acquisition) we present a novel approach to automatically define a training set for the
target domain taking advantage of its temporal correlation with the source domain. The proposed method is based on
four steps. In the first step unsupervised change detection is applied to multitemporal images (i.e., target and source
domains). Labels of detected unchanged training samples are propagated from the source to the target domain in the
second step, thus becoming its initial training set. In the third step, changed areas are statistically compared with land-cover
classes in the target domain training set. This information is used to drive the initial training set expansion by
Active Learning (AL). In the first expansion iterations, priority is given to samples detected as changed; in subsequent iterations, the most informative samples are selected from a pool including both changed and unchanged unlabeled samples (i.e., the priority is removed). At convergence of the AL process, the target image is classified (fourth step); to this end, we use a Support Vector Machine classifier. Experimental results show that transferring class labels from the source domain to the target domain provides a reliable initial training set, and that the priority rule for AL leads to faster convergence to the desired accuracy than standard AL.
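The first two steps can be miniaturized as below: threshold a per-pixel difference magnitude to flag change, then copy source-domain labels only onto pixels flagged as unchanged. The threshold and the plain magnitude detector are stand-ins for the paper's unsupervised change-detection step, and the arrays are toy data.

```python
import numpy as np

def propagate_labels(img_t1, img_t2, labels_t1, change_thresh=0.5):
    """Steps 1-2 in miniature: unsupervised change detection by
    thresholding the per-pixel spectral difference magnitude, then
    propagating source labels to unchanged pixels only (-1 = unknown,
    to be filled later by active learning)."""
    diff = np.linalg.norm(img_t2.astype(float) - img_t1, axis=-1)
    unchanged = diff < change_thresh
    labels_t2 = np.full(labels_t1.shape, -1)
    labels_t2[unchanged] = labels_t1[unchanged]
    return labels_t2, unchanged

img_t1 = np.zeros((2, 2, 3))
img_t2 = img_t1.copy()
img_t2[0, 0] += 1.0                      # one changed pixel
labels_t1 = np.array([[1, 2], [3, 4]])
labels_t2, unchanged = propagate_labels(img_t1, img_t2, labels_t1)
```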
This contribution studies a feature extraction technique aiming at reducing differences between domains in image
classification. The purpose is to find a common feature space between labeled samples issued from a source image
and test samples belonging to a related target image. The presented approach, Transfer Component Analysis,
finds a transformation matrix performing a joint mapping of the two domains by minimizing a probability
distribution distance measure, the Maximum Mean Discrepancy criterion. When predicting on a target image, such a projection makes it possible to apply a supervised classifier trained exclusively on labeled source pixels mapped into this common latent subspace. Promising results are observed on an urban scene captured by a hyperspectral image. The experiments reveal improvements with respect to a standard classification model built on the original source image and with respect to other feature extraction techniques.
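The Maximum Mean Discrepancy that TCA minimizes can be computed directly from samples; the sketch below uses the biased RBF-kernel estimator with an arbitrary bandwidth, on synthetic source/target draws.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between sample sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(Xs, Xt, gamma=1.0):
    """Squared Maximum Mean Discrepancy between source and target
    samples under an RBF kernel -- the distribution distance TCA drives
    down when learning its transformation (biased estimator)."""
    return rbf(Xs, Xs, gamma).mean() + rbf(Xt, Xt, gamma).mean() \
        - 2 * rbf(Xs, Xt, gamma).mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (50, 2)), rng.normal(0, 1, (50, 2)))
shifted = mmd2(rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2)))
```

Matched distributions yield a near-zero MMD while a domain shift yields a large one, which is why minimizing it over a joint mapping aligns source and target pixels.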
Remote sensing is a widely used field, so the performance of the selected features plays an important role. To gain perspective on useful textural features, we bring together state-of-the-art textural features from the recent literature that have not yet been applied in the remote sensing field, and compare them with traditional ones. We selected the most commonly used textural features in remote sensing, namely grey-level co-occurrence matrix (GLCM) and Gabor features. The other selected features are local binary patterns (LBP), edge orientation features extracted after applying a steerable filter, and histogram of oriented gradients (HOG) features. A color histogram feature is also used and compared. Since most of these features are histogram based, we compare the performance of bin-by-bin comparison with a histogram comparison method known as the diffusion distance. The performance of each feature is evaluated with the k-nearest neighbor (k-NN) classification method.
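The advantage of the diffusion distance over bin-by-bin comparison shows up already on a four-bin toy example: bin-by-bin L1 cannot tell a one-bin shift from a two-bin shift, while the diffusion distance can. This is a simplified, non-downsampled variant of the measure with an invented smoothing kernel.

```python
import numpy as np

def diffusion_distance(h1, h2, iters=3):
    """Diffusion distance between two histograms: repeatedly smooth the
    histogram difference with a small kernel and accumulate its L1 norm
    across scales (simplified variant without downsampling)."""
    d = np.asarray(h1, float) - np.asarray(h2, float)
    total = np.abs(d).sum()
    for _ in range(iters):
        d = np.convolve(d, [0.25, 0.5, 0.25], mode="same")  # diffuse
        total += np.abs(d).sum()
    return total

h_a = np.array([0, 1, 0, 0], float)
h_b = np.array([0, 0, 1, 0], float)   # mass shifted by one bin
h_d = np.array([0, 0, 0, 1], float)   # mass shifted by two bins
```

Nearby mass diffuses together and cancels quickly, so small shifts cost less; distant mass keeps contributing at every scale.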
The main purpose of this paper is to propose and validate a new multi-temporal algorithm for hyperspectral endmember extraction. The proposed approach is based on multi-linear algebra, spectral analysis and a tensor data structure for each pixel. An endmember is detected in the time series by interpreting its spatial-temporal signature in a multi-dimensional tensorial space; thus, the images may have different resolutions and may come from different dates. Multi-temporal synthetic and Hyperion image series were used to assess the effectiveness of the proposed algorithm. The results show good performance with both permanent and known temporal features.
Morphological profiles (MPs) have been effective tools for fusing spectral and spatial information in the classification of remote sensing data. However, previous applications have been limited to multi-/hyperspectral data analysis. In this study, the application of morphological profiles is extended to the classification of polarimetric synthetic aperture radar (POLSAR) data. The MPs are constructed from the diagonal elements of the covariance matrix and from features derived by eigenvalue decomposition. The resulting extended morphological profile (EMP), a stack of the MPs of all features, is used for supervised classification of the images with a support vector machine (SVM) classifier. It is shown that significant improvements in classification accuracy can be achieved by using the profiles.
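A morphological profile for one feature band can be sketched with plain grey-scale openings and closings of growing structuring-element size. The sizes are arbitrary, and MPs in the literature often use by-reconstruction operators, which this sketch deliberately omits.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def morphological_profile(band, sizes=(3, 5, 7)):
    """Morphological profile of one feature band: the band itself plus
    openings (remove bright structures smaller than the element) and
    closings (remove dark ones) at increasing structuring-element sizes.
    Stacking MPs of several bands gives the extended profile (EMP)."""
    layers = [band]
    for s in sizes:
        layers.append(grey_opening(band, size=s))
        layers.append(grey_closing(band, size=s))
    return np.stack(layers)       # (1 + 2*len(sizes), H, W)

band = np.random.default_rng(2).uniform(size=(16, 16))
mp = morphological_profile(band)
```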
A two-step strategy for endmember extraction is presented. The goal of the first step is to create two pools of spectra,
one containing potential endmember candidates and the other one representing spectra that are unquestionably convex
combinations (mixed spectra). The second step applies a sub-optimal subset search method to find the best endmember combination. In the first step, vector order statistics are used to identify a medoid spectrum
within non-overlapping spatial windows. Endmember extraction based on the iterative error analysis algorithm is then
performed on the medoid subset to identify a set of medoid endmembers. The latter are subsequently used to spectrally
unmix the original dataset. Spectra that are outside the hyper-surface (outliers) derived from the medoid endmembers
represent the pool of potential endmembers. Medoid spectra residing inside the hyper-surface (inliers) constitute the
mixed spectra pool. The inliers/outliers status of each spectrum of the original dataset is derived from conditions on their
computed unmixing fraction values. Clustering analysis is next performed on the endmember pool of candidates to
produce a set of exemplars. Spectral screening is applied on the inliers set to eliminate redundancy. In the second step,
the oscillating feature subset search algorithm is applied to identify the endmember combination that best reconstructs, in
the least squares sense, the spectra in the joint pools. Results of the proposed strategy are presented for synthetic and real
hyperspectral data.
The purpose of content-based image retrieval (CBIR) is to retrieve, from real data stored in a database, information
that is relevant to a query. A major challenge for the development of efficient CBIR systems in the
context of hyperspectral remote sensing applications is how to deal with the extremely large volumes of data
produced by current Earth-observing (EO) imaging spectrometers. The data resulting from EO campaigns often comprise many gigabytes per flight. When multiple instruments or timelines are combined, this leads to
the collection of massive amounts of data coming from heterogeneous sources, and these data sets need to be
effectively stored, managed, shared and retrieved. Furthermore, the growth in size and number of hyperspectral
data archives demands more sophisticated search capabilities to allow users to locate and reuse data acquired
in the past. In this paper we develop a new strategy to effectively retrieve hyperspectral image data sets using
spectral unmixing concepts. Spectral unmixing is a very important task for hyperspectral data exploitation
since the spectral signatures collected in natural environments are invariably a mixture of the pure signatures
of the various materials found within the spatial extent of the ground instantaneous field of view of the imaging
instrument. In this work, we use the information provided by spectral unmixing (i.e. the spectral endmembers
and their corresponding abundances in the scene) as effective meta-data to develop a new CBIR system that
can assist users in the task of efficiently searching hyperspectral image instances in large data repositories. The
proposed approach is validated using a collection of 154 hyperspectral data sets (comprising seven full flightlines)
gathered by NASA using the Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the World Trade
Center (WTC) area in New York City during the last two weeks of September, 2001, only a few days after the
terrorist attacks that collapsed the two main towers and other buildings in the WTC complex.
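One way such unmixing metadata could drive retrieval is sketched below: each catalogued scene is represented by its endmember signatures, and scenes are ranked by the mean best-match spectral angle to the query endmembers. Abundance information, which the paper also exploits, is deliberately ignored, and all signatures are invented three-band values.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two signatures."""
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def rank_scenes(query_endmembers, catalog):
    """Rank catalogued scenes by endmember metadata: for each query
    signature take its best-matching scene endmember, then average the
    angles. A toy stand-in for unmixing-based CBIR."""
    scores = [np.mean([min(spectral_angle(q, e) for e in scene)
                       for q in query_endmembers])
              for scene in catalog]
    return np.argsort(scores)            # best-matching scene first

query = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
catalog = [
    [np.array([0.0, 0.0, 1.0])],                            # scene 0
    [np.array([1.0, 0.1, 0.0]), np.array([0.0, 1.0, 0.1])]  # scene 1
]
ranking = rank_scenes(query, catalog)
```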
Oil spill events are a crucial environmental issue. Detection of oil spills is important for both oil exploration and
environmental protection. In this paper, investigation of hyperspectral remote sensing is performed for the detection of
oil spills and the discrimination of different oil types. Spectral signatures of different oil types are very useful, since they
may serve as endmembers in unmixing and classification models. Towards this direction, an oil spectral library, resulting
from spectral measurements of artificial oil spills as well as of look-alikes in marine environment was compiled. Samples
of four different oil types were used: two crude oils, one marine residual fuel oil, and one light petroleum product. Look-alikes comprise sea water, river discharges, shallow water and water with algae. Spectral measurements were acquired with the GER1500 spectroradiometer. Moreover, the oil and look-alike spectral signatures were examined to determine whether they can serve as endmembers, which was accomplished by testing their linear independence. After that, synthetic
hyperspectral images based on the relevant oil spectral library were created. Several simplex-based endmember
algorithms such as sequential maximum angle convex cone (SMACC), vertex component analysis (VCA), n-finder
algorithm (N-FINDR), and automatic target generation process (ATGP) were applied on the synthetic images in order to
evaluate their effectiveness in detecting oil spill events caused by different oil types. Results showed that different
types of oil spills with various thicknesses can be extracted as endmembers.
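The linear-independence test mentioned above amounts to a column-rank check on the matrix of candidate signatures: a signature that is a convex combination of others breaks independence and cannot itself be an endmember. The four-band spectra below are made-up values, purely for illustration.

```python
import numpy as np

def usable_as_endmembers(signatures, tol=1e-8):
    """True if the signatures are linearly independent (full column
    rank), a prerequisite for serving as endmembers of a linear
    mixing model."""
    M = np.column_stack(signatures)
    return np.linalg.matrix_rank(M, tol=tol) == M.shape[1]

crude = np.array([0.02, 0.05, 0.10, 0.20])   # hypothetical spectra
fuel = np.array([0.03, 0.04, 0.09, 0.15])
mix = 0.5 * crude + 0.5 * fuel               # a convex combination
```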
Object detection and material classification are two central tasks in electro-optical remote sensing and hyperspectral
imaging applications. These are challenging problems as the measured spectra in hyperspectral images
from satellite or airborne platforms vary significantly depending on the light conditions at the imaged surface,
e.g., shadow versus non-shadow. In this work, a Digital Surface Model (DSM) is used to estimate different
components of the incident light. These light components are subsequently used to predict what a measured
spectrum would look like under different light conditions. The derived method is evaluated using an urban
hyperspectral data set with 24 bands in the wavelength range 381.9 nm to 1040.4 nm and a DSM created from
LIDAR 3D data acquired simultaneously with the hyperspectral data.
In this paper, we propose an innovative classification method dedicated to hyperspectral images which uses
both spectral information (Principal Component Analysis bands, Minimum Noise Fraction bands) and spatial
information (textural features and segmentation). The process includes a segmentation as a pre-processing step,
a spatial/spectral features calculation step and finally an area-wise classification. The segmentation, a region
growing method, is processed according to a criterion called J-image which avoids the risks of over-segmentation
by considering the homogeneity of an area at a textural level as well as a spectral level. Then several textural
and spectral features are calculated for each area of the segmentation map and these areas are classified with
a hierarchical ascendant classification. The method has been applied on several data sets and compared to the
Gaussian Mixture Model classification. The JSEG classification process ultimately gives equivalent, and most of the time more accurate, classification results.
Change Detection and Analysis of Multitemporal Images
This contribution deals with change detection by means of sparse principal component analysis (PCA) of simple
differences of calibrated, bi-temporal HyMap data. Results show that if we retain only 15 nonzero loadings (out
of 126) in the sparse PCA the resulting change scores appear visually very similar although the loadings are
very different from their usual non-sparse counterparts. The choice of three wavelength regions as being most
important for change detection demonstrates the feature selection capability of sparse PCA.
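The feature-selection effect can be illustrated numerically. The sketch below is not the authors' sparse-PCA algorithm: it simply keeps the k largest-magnitude loadings of the ordinary first principal component of band differences, renormalizes, and checks that the change scores stay close to the dense ones. Data and k are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
diff = rng.normal(size=(500, 12))            # stand-in per-pixel band differences
diff[:, 3] += 5.0 * rng.normal(size=500)     # two bands carry most of the change
diff[:, 7] += 4.0 * rng.normal(size=500)

cov = np.cov(diff, rowvar=False)
_, vecs = np.linalg.eigh(cov)
pc1 = vecs[:, -1]                            # dense first-PC loading vector

k = 3                                        # retain only k nonzero loadings
sparse = pc1.copy()
sparse[np.argsort(np.abs(pc1))[:-k]] = 0.0
sparse /= np.linalg.norm(sparse)

# sparse change scores remain highly correlated with the dense ones
corr = np.corrcoef(diff @ pc1, diff @ sparse)[0, 1]
```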
Two HyMap images acquired over the same lignite open-pit mining site in Sokolov, Czech Republic, during the
summers of 2009 and 2010 (12 months apart), were investigated in this study. The site selected for this research is one of
three test sites (the others being in South Africa and Kyrgyzstan) within the framework of the EO-MINERS FP7 Project
(http://www.eo-miners.eu). The goal of EO-MINERS is to "integrate new and existing Earth Observation tools to
improve best practice in mining activities and to reduce the mining related environmental and societal footprint".
Accordingly, the main objective of the current study was to develop hyperspectral-based means for the detection of small
spectral changes and to relate these changes to possible degradation or reclamation indicators of the area under
investigation. To ensure significant detection of small spectral changes, the temporal domain was investigated along with
careful generation of reflectance information. Thus, intensive spectroradiometric ground measurements were carried out
to ensure calibration and validation aspects during both overflights. The performance of these corrections was assessed
using the Quality Indicators setup developed under a different FP7 project, EUFAR (http://www.eufar.net), which
helped select the highest quality data for further work. This approach allows direct distinction of the real information
from noise. The reflectance images were used as input for the application of spectral-based change-detection algorithms
and indices to account for small and reliable changes. The related algorithms were then developed and applied on a
pixel-by-pixel basis to map spectral changes over the space of a year. Using field spectroscopy and ground truth
measurements on both overpass dates, it was possible to explain the results and to locate the spatial dynamics of the
environmental changes during the time elapsed between the flights. It was found, for instance, that significant spectral
changes are capable of revealing mineral processes, vegetation status and soil formation long before these are apparent to
the naked eye. Further study is being conducted under the above initiative to extend this approach to other mining areas
worldwide and to improve the robustness of the developed algorithm.
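One common per-pixel spectral change measure of the class discussed above is the spectral angle between a pixel's reflectance spectra on the two dates; it ignores pure brightness changes and responds to changes in spectral shape. This is a generic illustration, not the specific indices developed in the study.

```python
import math

def spectral_angle(s1, s2):
    """Angle (radians) between two spectra; 0 means identical shape."""
    dot = sum(a * b for a, b in zip(s1, s2))
    n1 = math.sqrt(sum(a * a for a in s1))
    n2 = math.sqrt(sum(b * b for b in s2))
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

# brightness change only (same spectral shape) vs. a real spectral change
unchanged = spectral_angle([0.2, 0.4, 0.6], [0.22, 0.44, 0.66])
changed = spectral_angle([0.2, 0.4, 0.6], [0.6, 0.4, 0.2])
```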
In remote sensing data, trees have a low interspecies variability and show a high variability within the tree
species. Therefore, specific features that distinguish between unique properties of two tree species are required
for single-tree-based classification of genera. To improve classification results, the suitability of seven surface
roughness features, calculated on single tree crown regions, is studied. The algorithms developed to provide
roughness parameters can be validated and prototyped in a Virtual Forest testbed. The features are extracted
from a normalized digital surface model with a resolution of 0.4 m per pixel. Within the test area of 340 km², more than 4,000 single trees of eleven different species and additionally 200 buildings are available as reference
data. Technical standards define several parameters to describe surface properties. These roughness features
are evaluated in the context of single tree crowns. All of these features are based on the deviation of the height
values of the tree crown to its mean height. As an additional feature the relationship between the crown's surface
area and its occupied ground area is used. The evaluation results of these features regarding the discrimination
of tree species on different levels - eleven single tree species, seven tree classes, deciduous and coniferous - and
also towards discrimination of trees from buildings will be presented.
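Two crown-surface features of the kind described above can be sketched as follows: (i) the RMS deviation of crown heights from their mean and (ii) the ratio of the 3-D crown surface area to the occupied ground area, approximated cell by cell on a height grid. The grid values are invented; the real input is a 0.4 m normalized digital surface model.

```python
import math

def rms_roughness(heights):
    m = sum(heights) / len(heights)
    return math.sqrt(sum((h - m) ** 2 for h in heights) / len(heights))

def surface_to_ground_ratio(grid, cell=0.4):
    rows, cols = len(grid), len(grid[0])
    surface = 0.0
    for r in range(rows - 1):
        for c in range(cols - 1):
            dzx = grid[r][c + 1] - grid[r][c]     # height step across columns
            dzy = grid[r + 1][c] - grid[r][c]     # height step across rows
            surface += math.sqrt(cell ** 2 + dzx ** 2) * math.sqrt(cell ** 2 + dzy ** 2)
    ground = (rows - 1) * (cols - 1) * cell ** 2
    return surface / ground

flat = [[5.0, 5.0], [5.0, 5.0]]       # building-like flat roof: ratio = 1
rough = [[5.0, 6.0], [6.5, 5.2]]      # rough crown surface: ratio > 1
```

A flat roof yields a surface-to-ground ratio of exactly 1, which hints at why this feature also helps separate trees from buildings.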
In this paper, we introduce dynamic and scalable Synthetic Aperture Radar (SAR) terrain classification based on the
Collective Network of Binary Classifiers (CNBC). The CNBC framework is primarily adapted to maximize SAR
classification accuracy on dynamically varying databases, where variations may occur at any time in terms of (new)
images, classes, features and users' relevance feedback. Whenever a "change" occurs, the CNBC dynamically and
"optimally" adapts itself to the change by means of its topology and the underlying evolutionary method, MD PSO.
Thanks to its "Divide and Conquer" approach, the CNBC can also support a varying and large set of (PolSAR)
features, among which it optimally selects, weighs and fuses the most discriminative ones for a particular class. Each
SAR terrain class is discriminated by a dedicated Network of Binary Classifiers (NBC), which encapsulates a set of
evolutionary Binary Classifiers (BCs) discriminating the class with a distinctive feature set. Moreover, with each
incremental evolution session, new classes/features can be introduced, prompting the CNBC to create the
corresponding new NBCs and BCs and thus to adapt and scale dynamically to the change. This can in turn be a significant
advantage when the current CNBC is used to classify multiple SAR images with similar terrain classes, since no or only
minimal (incremental) evolution sessions are needed to adapt it to a new classification problem while using the
previously acquired knowledge. We demonstrate our proposed classification approach over several medium- and high-resolution
NASA/JPL AIRSAR images applying various polarimetric decompositions. We evaluate and compare the
computational complexity and classification accuracy against static Neural Network classifiers. While the CNBC's
classification accuracy can compete with and even surpass theirs, its computational complexity is significantly lower,
and the CNBC body supports a high degree of parallelization, making it applicable to grid/cloud computing.
Estimation of noise characteristics is used in various image processing tasks such as edge detection, filtering,
reconstruction, compression, segmentation, etc. It is very desirable to have estimated noise characteristics that are
as accurate as possible, since they influence the quality of further processing. This paper deals with evaluating the
accuracy of earlier proposed methods for blind estimation of speckle characteristics. The evaluation is done for
TerraSAR-X single-look amplitude images. It is shown that the obtained estimates depend upon image complexity.
Besides, the parameters of any estimation method influence accuracy (bias) as well. Finally, spatial correlation of the
noise is yet another factor affecting the obtained estimates. As demonstrated, blind estimation in aggregate makes it
possible to obtain estimates of speckle variance with relative error up to 20%, which is appropriate for practical needs.
Moreover, once the speckle variance is estimated, it becomes possible to obtain accurate estimates of the noise spatial
correlation in the DCT domain. Such estimates can be used in, e.g., DCT-based filtering of SAR images.
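The core idea behind blind speckle-variance estimation can be sketched as follows (this is not the exact estimators evaluated in the paper): for multiplicative noise, the local variance / mean² ratio on a homogeneous block estimates the relative speckle variance, and a median over many blocks is robust to the textured blocks that would bias a plain average.

```python
import random
import statistics

random.seed(1)
TRUE_REL_VAR = 0.05
signal = [10.0] * 2000 + [40.0] * 2000                  # two homogeneous regions
noisy = [s * random.gauss(1.0, TRUE_REL_VAR ** 0.5) for s in signal]

ratios = []
for i in range(0, len(noisy), 100):                     # 100-sample "blocks"
    block = noisy[i:i + 100]
    m = statistics.fmean(block)
    ratios.append(statistics.pvariance(block, m) / (m * m))
est = statistics.median(ratios)                          # blind estimate
```

With this synthetic data the blind estimate lands within the ~20% relative-error regime that the paper reports as adequate for practical needs.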
Joint Session with Conference 8179: SAR Data Analysis I
This work investigates the fixed-point polarimetric whitening filter (FP-PWF) with respect to ship detection
based on polarimetric synthetic aperture radar (SAR) imagery. The purposes of this work are: (i) to investigate
the FP-PWF algorithm that incorporates texture, (ii) to examine the method of log-cumulants (MoLC) for shape
parameter estimation associated with texture, and (iii) to assess the impact of the improved modeling and estimation
on the discrepancy between specified and observed false alarm rate. A modified ship detection algorithm
based on FP-PWF is proposed with improved modeling, estimation and detection performance. Experiments
are performed on simulated radar ocean clutter.
In this paper, a method to obtain 3-dimensional Computer-Aided Design (CAD) models of radar targets from their real
shapes is proposed for the construction of a database composed of the targets' scattering centers, and an efficient way
to form the database from an appropriate collection of Radar Cross Section (RCS) data is also considered. As
3-dimensional CAD models of the targets are not available in many cases, a method to make a geometric model from
the real target is needed. Three-dimensional coordinates of the target can be measured by a laser scanner. These
measured coordinates are combined to form a 3D CAD model of the target. With the CAD model obtained, RCS
values of the target are calculated over a series of frequencies and angle apertures and transformed into scattering
centers by a superresolution technique, in this case the Matrix Pencil (MP) method. The CAD model of an air target
is used in a test to infer a criterion on how frequently sets of scattering centers should be extracted for the optimal
database construction: RCS data sets are calculated every 5, 10, 15 and 20 degrees in the azimuth direction and used
for the scattering center extraction. RCS values are then reconstructed from those sets of scattering centers and
compared with the original RCS values of the target to determine how many sets of scattering centers our database
should have over the whole range of azimuth angles. The results show that the smaller the angle gap between adjacent
sets of scattering centers, the better the match between the original and the reconstructed RCS.
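The scattering-center model underlying this reconstruction can be sketched as a sum of complex exponentials over frequency (this illustrates the model only, not the Matrix Pencil extraction itself; amplitudes and ranges below are invented). Interference between two centers makes the magnitude ripple with frequency, whereas a single center gives a flat response.

```python
import cmath
import math

C = 3.0e8                                      # speed of light, m/s
centers = [(1.0, 2.0), (0.6, 3.5)]             # (amplitude, range in meters)

def response(freq_hz, scatterers):
    """Complex backscatter response of point scatterers at one frequency."""
    return sum(a * cmath.exp(-1j * 4.0 * math.pi * freq_hz * r / C)
               for a, r in scatterers)

freqs = [9.0e9 + k * 10.0e6 for k in range(32)]          # X-band sweep
mags = [abs(response(f, centers)) for f in freqs]        # two centers: ripple
single = [abs(response(f, centers[:1])) for f in freqs]  # one center: flat
```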
Joint Session with Conference 8179: SAR Data Analysis II
Spaceborne SAR imagery offers high capability for wide-ranging maritime surveillance, especially in situations
where AIS (Automatic Identification System) data is not available. Maritime objects therefore have to
be detected, and additional information such as size, orientation, or object/ship class is desired. In recent
research work, we proposed a SAR processing chain consisting of pre-processing, detection, segmentation, and
classification for single-polarimetric (HH) TerraSAR-X StripMap images to finally assign detection hypotheses
to the class "clutter", "non-ship", "unstructured ship", "ship structure 1" (bulk carrier appearance), or
"ship structure 2" (oil tanker appearance). In this work, we extend the existing processing chain and are now
able to handle full-polarimetric (HH, HV, VH, VV) TerraSAR-X data. With the possibility of better noise
suppression using the different polarizations, we slightly improve both the segmentation and the classification
process. In several experiments we demonstrate the potential benefit for segmentation and classification.
Precision of size and orientation estimation as well as correct classification rates are calculated individually
for single- and quad-polarization and compared to each other.
The major purpose of the present study is to develop a computer code that predicts infrared signals from 3D objects by
considering spectral surface properties. In this study, infrared images of the 3D objects are created by calculating the
self-emitted component and the components reflected from sun and sky shine. For the reflected components, the
BRDF (Bi-Directional Reflectance Distribution Function), which describes reflectance as a function of incident and
reflected angles and wavelength, is used to model the reflection characteristics of the object's surface.
Multiple-reflection effects are included by using the view factor when analyzing the radiative exchange between
adjoining meshes, and the shadow effect is also included in this calculation. The infrared signals and images obtained
using the software developed in this study and a commercial software package (RadThermIR) are compared with each
other. Results obtained with the software developed in this study show fairly good agreement with those obtained with
the commercial software. Results also show that the reflected radiance is more important in MWIR images, where the
reflected radiance is dominant, than in LWIR images, where the self-emitted radiance is dominant.
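The self-emitted term alone already explains the MWIR/LWIR contrast: the Planck spectral radiance of a ~300 K surface is more than an order of magnitude larger at 10 µm than at 4 µm, so reflected (solar) radiance matters relatively more in the MWIR band. The wavelengths below are band centers chosen for illustration only.

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance in W / (m^2 sr m)."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    return a / (math.exp(H * C / (wavelength_m * K * temp_k)) - 1.0)

mwir = planck_radiance(4e-6, 300.0)    # mid-wave IR, 4 um
lwir = planck_radiance(10e-6, 300.0)   # long-wave IR, 10 um
```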
The aim of this work is to evaluate the performance of different lossless compression schemes when applied to images
collected by the satellite CBERS-2B. This satellite is the third one constructed under the CBERS Program (China-Brazil
Earth Resources Satellite) and was launched in 2007. This work focuses on the compression of images from the
CCD camera, which has a resolution of 20 x 20 meters and five bands. CBERS-2B transmits the CCD data in real
time, with no compression, and does not store even a small part of the images. In fact, this satellite can work in this way
because the bit rate produced by the CCD is smaller than the transmitter bit rate. However, the resolution and the number of
spectral bands of imaging systems are increasing, and the constraints on power and bandwidth bound the communication
capacity of a satellite channel. Therefore, in future satellites the communication systems must be reviewed. There are
many algorithms for image compression described in the literature, and some of them have already been used in remote
sensing satellites (RSS). When the bit rate produced by the imaging system is much higher than the transmitter bit rate,
a lossy encoder must be used. However, when the gap between the bit rates is not so high, a lossless procedure can be
an interesting choice. This work evaluates JPEG-LS, CALIC, SPIHT, JPEG2000, the CCSDS recommendation, H.264, and
JPEG-XR when they are used to compress images from the CCD camera of CBERS-2B with no loss. The algorithms are
applied to a set of twenty images of 5,812 x 5,812 pixels, running on blocks of 128 x 128, 256 x 256, 512 x 512, and
1,024 x 1,024 pixels. The tests are done independently on each original band and also on five transformed bands, obtained
by a procedure which decorrelates them. In general, the results have shown that algorithms based on predictive schemes
(CALIC and JPEG-LS) applied to the transformed, decorrelated bands produce better performance on average. Furthermore,
as expected, the performance improves when the block length increases. Since the compression rate varies from block to
block, it is important to evaluate the distribution of this parameter. Preliminary results have shown that the distributions
are quite similar for all algorithms.
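The block-wise evaluation described above can be mimicked with any lossless codec; the sketch below uses zlib as a stand-in (the paper evaluates JPEG-LS, CALIC, JPEG2000, etc.) on synthetic "band" data, measuring the per-block compression ratio. Spatially correlated data compresses well; incompressible noise-like data does not.

```python
import random
import statistics
import zlib

random.seed(0)
W = 512
smooth = bytes(min(255, 100 + (i % W) // 8) for i in range(W * W))  # correlated
noisy = bytes(random.randrange(256) for _ in range(W * W))          # noise-like

def block_ratios(img, block=128 * 128):
    """Lossless compression ratio (original/compressed) for each block."""
    blocks = [img[i:i + block] for i in range(0, len(img), block)]
    return [len(b) / len(zlib.compress(b)) for b in blocks]

r_smooth = block_ratios(smooth)
r_noisy = block_ratios(noisy)
```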
In this paper, the effect of dimensionality reduction of hyperspectral data on 10 subpixel target detectors is investigated.
The genetic algorithm (GA) and wavelet feature extraction methods are used for dimensionality reduction as they
maintain physically meaningful bands and physical structure of the spectra, respectively. In the former case, the
wrapper method is used to improve subpixel target detectors' results in terms of the area under the curve (AUC) of the
receiver operating characteristic (ROC) curve. Meanwhile, in the latter case, the AUC is used as a criterion to choose the
optimum level of wavelet decomposition. Experimental results obtained from real-world hyperspectral data and a
challenging synthetic dataset confirmed that band selection with the wrapper method is more efficient than using target
detection methods without dimensionality reduction, especially in the presence of difficult targets at the subpixel level.
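The wrapper principle can be sketched in a toy form: candidate bands are scored directly by the detector's ROC AUC, and the best-scoring band is kept. The paper drives this with a genetic algorithm over band subsets and real subpixel target detectors; the data, detector, and single-band search below are all invented.

```python
import random

random.seed(2)
TARGET = [0.5, 1.0, 0.5, 1.0, 0.5]     # bands 1 and 3 carry the target signal

def sample(is_target):
    base = TARGET if is_target else [0.5] * 5
    return [v + random.gauss(0.0, 0.3) for v in base]

pos = [sample(True) for _ in range(60)]
neg = [sample(False) for _ in range(60)]

def detector_score(x, bands):           # toy distance-to-signature detector
    return -sum((x[b] - TARGET[b]) ** 2 for b in bands)

def auc(p_scores, n_scores):            # Mann-Whitney estimate of ROC AUC
    wins = sum((p > n) + 0.5 * (p == n) for p in p_scores for n in n_scores)
    return wins / (len(p_scores) * len(n_scores))

# wrapper step: evaluate each band by the detector's AUC, keep the best
best_band = max(range(5), key=lambda b: auc(
    [detector_score(x, [b]) for x in pos],
    [detector_score(x, [b]) for x in neg]))
```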
High resolution satellite images contain rich structural and spatial detail. The availability of high-resolution satellite
images provides easy and cost-effective mapping of land features that was not possible with medium or low resolution
imagery. Since supervised classification methods give better results in terms of accuracy and performance, they are
preferred. By increasing the separability of several land-use types, it is possible to group a satellite image into
subparts, which leads to a solution of the land-cover mapping problem through the application of supervised
classification methods. Supervised classification methods use prior examples as training data to classify other, unseen
samples. In this study, a 'traditional' supervised classification method, the Maximum Likelihood Classifier (MLC), and
Support Vector Machines (SVMs) are compared. No single classifier has yet been proven to satisfactorily classify all
the basic land cover classes, which means there is no best classifier yet for both performance and accuracy. However,
individual evaluations, together with the pros and cons of each method, can give insight into which applications each
method suits. MLC is a statistical parametric method based on probability calculations, but small dataset sizes cause it
problems. SVMs, as a non-parametric method, have longer training times with comparable or better accuracy. Thus,
SVM is a good candidate for satellite imagery classification, although its application to satellite images is a fairly
new topic. By applying different penalty (C) and gamma (γ) parameter values in the SVM algorithm, changes in the
classification results can be observed.
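What gamma controls in an RBF-kernel SVM can be shown directly: it sets how fast the kernel similarity decays with squared feature-space distance (larger gamma means more local, wigglier decision boundaries), while the penalty C appears only in the SVM optimization itself, trading margin width against training errors. The feature vectors below are illustrative.

```python
import math

def rbf_kernel(x, y, gamma):
    """RBF kernel value exp(-gamma * ||x - y||^2)."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

x, y = [0.0, 0.0], [1.0, 1.0]          # two pixels' feature vectors
k_small = rbf_kernel(x, y, gamma=0.1)  # distant points still look similar
k_large = rbf_kernel(x, y, gamma=10.0) # similarity vanishes quickly
```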
A non-parametric high dynamic range (HDR) fusion approach is proposed that works on raw images of single-sensor
color imaging devices which incorporate the Bayer pattern. Thereby the non-linear opto-electronic conversion
function (OECF) is recovered before color demosaicing, so that interpolation artifacts do not affect the
photometric calibration. Graph-based segmentation greedily clusters the exposure set into regions of roughly
constant radiance in order to regularize the OECF estimation. The segmentation works on Gaussian-blurred
sensor images, whereby the artificial gray value edges caused by the Bayer pattern are smoothed away. With
the OECF known, the 32-bit HDR radiance map is reconstructed by weighted summation from the differently
exposed raw sensor images. Because the radiance map contains lower sensor noise than the individual images, it
is finally demosaiced by weighted bilinear interpolation, which prevents interpolation across edges. Here, the
previous segmentation results from the photometric calibration are utilized. After demosaicing, tone mapping is
applied, whereby remaining interpolation artifacts are further damped due to the coarser tonal quantization of
the resulting image.
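The weighted-summation step that builds the radiance map can be sketched in the Debevec style, assuming for brevity a linear sensor; the paper first recovers a non-linear OECF, which would replace the simple value/time term below.

```python
def merge_hdr(exposures, times):
    """exposures: per-image lists of 8-bit pixel values; returns radiance."""
    def w(z):                              # hat weight: trust mid-range values
        return min(z, 255 - z) + 1e-6
    merged = []
    for p in range(len(exposures[0])):
        num = den = 0.0
        for img, t in zip(exposures, times):
            z = img[p]
            num += w(z) * (z / t)          # linear sensor: radiance ~ value/time
            den += w(z)
        merged.append(num / den)
    return merged

# one pixel of the same scene captured at two exposure times
short = [50]    # radiance 100 seen for 0.5 s
long_ = [200]   # radiance 100 seen for 2.0 s
radiance = merge_hdr([short, long_], [0.5, 2.0])
```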
In this paper we investigate the application of Morphological Attribute Profiles to both hyperspectral and LiDAR data
to fuse spectral, spatial and elevation data for classification purposes. While hyperspectral data provides a wealth of
spectral information, multi-return LiDAR data provides geometrical information on the elevation and the structure of the
objects on the ground as well as a measure of their laser cross section. Therefore, hyperspectral and LiDAR data are
complementary information sources and potentially their joint analysis can improve classification accuracies.
Morphological Profiles (MPs) and Morphological Attribute Profiles (MAPs) have been successfully used as tools to
combine spectral and spatial information for classification of remote sensing data. MPs and MAPs can also be used with
the LiDAR data to reduce the irregularities in the LiDAR measurements which are inherent with the sampling strategy
used in the acquisition process. Experiments carried out on hyperspectral and LiDAR data acquired over an urban area of
the city of Trento (Italy) point out the effectiveness of MAPs for the classification process.
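The simplest attribute filter underlying an Attribute Profile can be sketched as an area opening that removes bright connected components whose area is below a threshold; a real MAP stacks several such filters over a range of attribute thresholds, and the tiny binary image below is invented.

```python
def area_opening(img, min_area):
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    out = [row[:] for row in img]
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not seen[r][c]:
                seen[r][c] = True
                stack, comp = [(r, c)], []
                while stack:                      # flood-fill one component
                    i, j = stack.pop()
                    comp.append((i, j))
                    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and img[ni][nj] and not seen[ni][nj]:
                            seen[ni][nj] = True
                            stack.append((ni, nj))
                if len(comp) < min_area:          # attribute criterion fails
                    for i, j in comp:
                        out[i][j] = 0
    return out

img = [[1, 0, 0, 0],
       [1, 0, 1, 1],
       [0, 0, 1, 1]]
filtered = area_opening(img, 3)   # the 2-pixel component on the left is removed
```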
In this paper, we propose a method to reduce spectral dimension based on the phase of integrated bispectrum.
Because of the excellent and robust information extracted from the bispectrum, the proposed method can achieve
high spectral classification accuracy even with a low-dimensional feature. The classification accuracy of the bispectrum
with a one-dimensional feature is 98.8%, whereas those of principal component analysis (PCA) and independent
component analysis (ICA) are 41.2% and 63.9%, respectively. The unsupervised segmentation accuracy of
bispectrum is also 20% and 40% greater than those of PCA and ICA, respectively.
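The bispectrum of a 1-D sequence (here standing in for a pixel's spectral curve) is the triple product B(f1, f2) = X(f1) X(f2) conj(X(f1 + f2)). The paper's feature is the phase of an *integrated* bispectrum; the sketch below only illustrates the triple product and the robustness that motivates it, namely invariance to circular shifts of the input.

```python
import numpy as np

def bispectrum(x):
    X = np.fft.fft(x)
    n = len(x)
    B = np.empty((n // 2, n // 2), dtype=complex)
    for f1 in range(n // 2):
        for f2 in range(n // 2):
            B[f1, f2] = X[f1] * X[f2] * np.conj(X[(f1 + f2) % n])
    return B

t = np.arange(64)
sig = (np.cos(2 * np.pi * 3 * t / 64) + np.cos(2 * np.pi * 5 * t / 64)
       + np.cos(2 * np.pi * 8 * t / 64))        # 3 + 5 = 8: frequency coupling
b1 = bispectrum(sig)
b2 = bispectrum(np.roll(sig, 5))                 # same signal, circularly shifted
# the bispectrum is unchanged by the shift, unlike the raw FFT phase
```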