Remote detection of flowering Somei Yoshino (Prunus×yedoensis) in an urban park using IKONOS imagery: comparison of hard and soft classifiers
22 May 2015 | Open Access
Noordyana Hassan, Shinya Numata, Tetsuro Hosaka, Mazlan Hashim
Abstract
Identification of flowering trees in urban areas is challenging due to weak spectral signals and the high heterogeneity of urban landscapes. We hypothesized that a soft classifier, such as mixture tuned matched filtering (MTMF), would be better able to identify pixels containing blooming cherry trees than a hard classifier such as maximum likelihood (ML). To test this hypothesis, we compared the accuracy of MTMF and ML in classifying blossoms of Somei Yoshino (SY) cherry trees (Prunus×yedoensis) in an urban park in Tokyo using IKONOS imagery. An accuracy assessment demonstrated that the MTMF classifier (overall accuracy: 62.2%, kappa coefficient: 0.507, user’s accuracy for flowering SY: 48.1%) performed better than ML (overall accuracy: 48.7%, kappa coefficient: 0.321, user’s accuracy for flowering SY: 38.9%) in identifying flowering SY. Our results suggest that both methods are able to classify cherry blossoms in an urban landscape, but that MTMF is more accurate than ML. However, the producer’s accuracy of MTMF (72.7%) was slightly lower than that of ML (77.7%), suggesting that the accuracy of MTMF can decrease due to the limited number of available bands (four for IKONOS) and the presence of endmembers, such as dry grass in this study, with stronger signals than flowers.

1. Introduction

Plant phenology is gaining attention as an important indicator of global and local climate changes. Ground observations on a large spatial scale are expensive and time consuming, so remotely sensed data have been used to detect changes in plant phenology, such as leaf-out, senescence and dormancy,1–3 and flowering.3–7 Most studies have focused on changes in basic vegetation indices, such as the Normalized Difference Vegetation Index (NDVI).3–5 However, vegetation indices do not utilize the full information content of remotely sensed imagery in the way that image classification methods can,8 especially for phenological events. Vegetation indices typically focus on certain spectral bands that represent the spectral reflectance of canopy greenness, and therefore provide less information on flowering status, flower abundance, and flowering dates.9 Moreover, the spectral bands used by vegetation indices may sometimes represent ground features such as soil that can cause errors in classifying land cover type.10

Image classification approaches have been used to identify tree species and their composition,8 to detect land use changes,11,12 and to identify plant conditions13 based on the spectral signal of canopy greenness. Two approaches have been used in previous studies: (1) hard classification and (2) soft classification. Hard classification selects the class label with the greatest likelihood of being correct and unambiguously assigns each pixel to a single class.14,15 The decision boundaries of the feature space are well defined for hard classification. In soft classification, pixels are assigned based on the relative abundance of each class in the spatially and spectrally integrated multispectrum of each pixel.14 Therefore, the decision boundaries of the feature space are considered fuzzy14 in soft classification because each pixel can have multiple or partial class memberships.15,16 Due to its ability to assign multiple classes to a single pixel, soft classification has been widely used to monitor mineral, soil, and vegetation status, especially in highly heterogeneous areas, because it can divide multiple spectral responses within a pixel and provide proportional information for each class.

Cherry blossoms of Prunus species flower synchronously in the spring in temperate zones of the northern hemisphere. Cherry blossoms are of interest because they provide social and economic benefits from cherry blossom viewing, and they provide important information on the long-term impacts of climate change.17,18 However, identification of cherry blossoms is challenging because urban environments are highly heterogeneous and the flowers produce a weak spectral signal. Therefore, we hypothesized that a soft classifier approach may be more useful to identify cherry blossoms in urban areas due to its ability to separate multiple spectral responses from different land cover types.

In this study, we explore the ability of hard and soft classifiers to identify cherry blossoms in an urban landscape from high-spatial resolution images. We chose the most common cherry cultivar in Japan, Somei Yoshino (hereafter SY) (Prunus×yedoensis), for identification of cherry blossoms. We used maximum likelihood (ML) as a hard classification method and mixture tuned matched filtering (MTMF) as a soft classification method. We compared the accuracy of these two classifiers using high-spatial resolution IKONOS imagery of an urban park in Tokyo, Japan.

2. Materials and Methodology

2.1. Study Site

The study was conducted in Yanagisawanoike Park, Hachioji City, Tokyo, Japan (35.6154° N, 139.3767° E, altitude 128 m). The dominant tree cultivar in the park is a deciduous cherry, Somei Yoshino (Prunus×yedoensis). It is mixed with other cherry cultivars, such as Kanzakura (Prunus sato-zakura “Sekiyama”), Mamezakura (Prunus incisa), and Shidarezakura (Prunus spachiana), as well as other trees, such as Japanese red pine (Pinus densiflora) and hornbeam (Carpinus laxiflora), and evergreen trees, including camphor (Cinnamomum camphora), Chinese evergreen oak (Quercus myrsinaefolia), and Japanese black pine (Pinus thunbergii). The mean canopy size of the flowering SY trees was 5 m and the mean height was 3 m.

2.2. Materials

2.2.1. Remotely sensed data

We used a multispectral IKONOS image [four bands: blue (445–516 nm), green (506–595 nm), red (632–698 nm), and near-infrared (NIR; 752–853 nm)] with 4-m resolution. The IKONOS data were recorded over the study area on April 1, 2006, and were purchased from Pasco, Japan. The image was chosen because SY was in full bloom at the time, according to information provided by the Japan Meteorological Agency (JMA). The purchased data were radiometrically corrected and geo-referenced to the Universal Transverse Mercator (UTM) coordinate system, zone 54, WGS84 datum. We converted the image to reflectance to estimate areas of blooming SY. To avoid mixed spectral responses, asphalt roads and the lake were masked using a threshold approach. Each feature of the study site in the IKONOS image was first digitized, overlaid in Google Earth, and approximately measured.
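The masking step can be illustrated with a short sketch. The exact thresholds used in the study are not reported, so the NDVI and NIR cut-off values below are hypothetical; the code only shows how asphalt and water pixels might be excluded from a four-band reflectance array.

```python
# Minimal sketch of threshold-based masking, assuming a (rows, cols, 4) IKONOS
# reflectance array ordered blue, green, red, NIR. The threshold values are
# illustrative only, not those used in the study.
import numpy as np

def mask_roads_and_water(refl, ndvi_road_max=0.1, nir_water_max=0.05):
    """Return a boolean mask that is True for pixels kept for analysis."""
    red = refl[..., 2]
    nir = refl[..., 3]
    ndvi = (nir - red) / (nir + red + 1e-6)
    road = ndvi < ndvi_road_max    # low NDVI: asphalt and other sealed surfaces
    water = nir < nir_water_max    # very low NIR reflectance: open water (lake)
    return ~(road | water)

# Usage: keep = mask_roads_and_water(refl)
#        masked = np.where(keep[..., None], refl, np.nan)
```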

2.2.2. Spectral data collection

To validate the spectral reflectance of flowering SY in the IKONOS image, we collected spectral reflectance data of flowering SY in Yanagisawanoike Park using a spectroradiometer (ASD FieldSpec Pro) in April 2014. The data were collected over a spectral range of 0.35–2.5 μm with a spectral interval of 3.3 nm. The spectral reflectances of 10 flowers from five blooming SY individuals were measured in a laboratory under dark conditions, with the spectroradiometer mounted at nadir 20 cm above the target with a 25-deg field of view. We recorded 10 readings for each sample and averaged the spectral data. The sensor was calibrated using a white Spectralon panel prior to data collection.
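As an illustration of this reduction step, the sketch below averages the 10 readings per sample and normalizes by a Spectralon white-reference reading. The variable names and array shapes are hypothetical; the spectroradiometer's own processing chain is not reproduced.

```python
# Minimal sketch of reducing the laboratory spectra, assuming 10 radiance
# readings per sample and a matching white-reference (Spectralon) reading.
import numpy as np

def mean_reflectance(sample_readings, white_reference):
    """sample_readings: (10, n_wavelengths); white_reference: (n_wavelengths,)."""
    mean_radiance = np.mean(sample_readings, axis=0)  # average of the 10 readings
    return mean_radiance / white_reference            # relative reflectance (0-1)
```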

2.2.3. Ground data collection

In addition to the spectroradiometer measurements, we collected XY-coordinates of flowering SY trees, soil, dry grass, and evergreen trees using a handheld GPS unit (Garmin GPSmap 60CSx) on April 1, 2014. According to the park manager and Google Earth, the SY trees on this date were the same as in the 2006 imagery. We used these coordinates as reference data to assess classification accuracy.

2.3. Methodology

2.3.1. Methods used to identify flowering SY

We used two types of image classifications to identify flowering SY from IKONOS imagery: hard classification and soft, or fuzzy, classification. We used ML for hard classification, as it has been widely used for many purposes, such as discrimination of tree species.19,20 We used MTMF for soft classification because it has been used to identify targets in highly heterogeneous areas, such as urban areas, by decomposing the pixel into its constituent classes and estimating the proportion of each class.

Maximum likelihood classification

To obtain optimal classification using ML, we first examined spatial and spectral information for a set of training pixels. We collected spatial (textural) information using the gray level co-occurrence matrix method applied to the IKONOS image with a 3×3-pixel window. We calculated the mean, variance, entropy, homogeneity, contrast, dissimilarity, second moment, and correlation of pixels for each training area (Fig. 1). Because there was spatial variability and contrast among classes, we used textural analysis in addition to the spectral information to improve the classification results.
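The texture measures can be sketched with scikit-image as follows. This is only an illustration under the assumption of a single band quantized to 64 gray levels, and it computes just two of the eight measures (GLCM variance and entropy); it is not the exact implementation used by the authors' software.

```python
# Minimal sketch of per-pixel GLCM texture in a 3x3 window (scikit-image >= 0.19),
# assuming `band` is a single 2-D array with non-negative values.
import numpy as np
from skimage.feature import graycomatrix

def glcm_texture(band, win=3, levels=64):
    """Return GLCM variance and entropy images from 3x3 co-occurrence matrices."""
    img = np.round(band / band.max() * (levels - 1)).astype(np.uint8)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    var_img = np.zeros(band.shape, dtype=float)
    ent_img = np.zeros(band.shape, dtype=float)
    i = np.arange(levels)
    for r in range(band.shape[0]):
        for c in range(band.shape[1]):
            window = padded[r:r + win, c:c + win]
            glcm = graycomatrix(window, distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            p = glcm[:, :, 0, 0]                     # normalized co-occurrence matrix
            marginal = p.sum(axis=1)
            mean = np.sum(i * marginal)              # GLCM mean
            var_img[r, c] = np.sum(((i - mean) ** 2) * marginal)
            ent_img[r, c] = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return var_img, ent_img
```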

Fig. 1 Mean values of textural features calculated from training pixels. Textural analysis conducted on the IKONOS image included mean, variance, entropy, homogeneity, contrast, dissimilarity, second moment, and correlation for each class.

We extracted spectral information from training pixels of the IKONOS image (Fig. 2). The spectral pattern of each class varied enough to discriminate the classes. Dry grass had a higher reflectance, and evergreen trees had a lower reflectance, compared to flowering SY. However, the spectral patterns and magnitudes of soil and evergreen trees were almost identical. Therefore, we conducted a spectral separability test to determine the distinctness of each class.

Fig. 2 Mean spectral signature of the IKONOS image used to select training areas for each class.

We applied transformed divergence (TD) to the IKONOS image to select the features with the greatest degree of statistical separability. TD is used to evaluate spectral variability among classes of training areas. A TD value of 1.90–2.00 indicates good to excellent separation between classes, whereas a value <1.70 indicates poor class separation.21 The TD results demonstrated good class separability (TD = 2.00) among flowering SY, soil, dry grass, and evergreen trees. However, the TD value was 1.73 for flowering SY versus dry grass and 1.83 for flowering SY versus evergreen trees, indicating weaker separability of these classes. Soil and evergreen trees had an even lower separability, with a TD value of 1.65. However, we were able to distinguish classes with lower separability based on the spatial (textural) evaluation (Fig. 1). Therefore, we used flowering SY, soil, dry grass, and evergreen trees as the training classes for ML classification. To obtain optimal accuracy of the ML classification, we supplemented the four spectral bands of the IKONOS imagery with four bands of local texture information (variance), giving a total of eight bands for this classification.
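A minimal sketch of how transformed divergence could be computed for a pair of classes is given below, assuming each class's training pixels are stacked into an (n_pixels, n_bands) array. It uses the standard divergence-based formula scaled to the 0–2 range quoted in the text; it is illustrative rather than the exact routine used in the study.

```python
# Minimal sketch of transformed divergence (TD) between two training classes.
# class_a, class_b: (n_pixels, n_bands) arrays of training spectra.
import numpy as np

def transformed_divergence(class_a, class_b):
    """TD between two classes; ~2.0 = excellent separability, <1.7 = poor."""
    ma, mb = class_a.mean(axis=0), class_b.mean(axis=0)
    ca, cb = np.cov(class_a, rowvar=False), np.cov(class_b, rowvar=False)
    ca_inv, cb_inv = np.linalg.inv(ca), np.linalg.inv(cb)
    dm = (ma - mb).reshape(-1, 1)                     # mean-difference column vector
    term1 = 0.5 * np.trace((ca - cb) @ (cb_inv - ca_inv))
    term2 = 0.5 * np.trace((ca_inv + cb_inv) @ (dm @ dm.T))
    divergence = term1 + term2
    return 2.0 * (1.0 - np.exp(-divergence / 8.0))    # scale divergence to 0-2
```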

Mixture tuned matched filtering

MTMF is a linear unmixing process that is widely used to identify plant species.22–24 There are two phases in the MTMF algorithm: the matched filter (MF) calculation to estimate abundance, and the mixture tuning (MT) calculation to identify false-positive results.

MT assesses the probability of an MF estimation error for each pixel based on mixing feasibility. Abundances in MTMF must obey two critical feasibility constraints: (1) they must be non-negative, and (2) the abundances for each pixel must sum to one. Calculated infeasibility represents the distance of the pixel from the line connecting the target spectrum and the background mean, measured in terms of standard deviations using the appropriate mixing distribution for the MF score of that pixel. MT and MF scores can be jointly interpreted to provide good subpixel detection and false-positive rejection.25

The endmember of MTMF is a spectrum representing ground surface materials.26 In this study, we assigned a single endmember for MTMF classification of flowering SY by selecting 10 pure pixels of flowering SY. We averaged the spectral data from the IKONOS imagery for these 10 data points to create a single composite target spectrum that was used as the endmember for MTMF classification.
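A minimal sketch of the matched-filter stage is shown below, assuming a (rows, cols, bands) reflectance cube and the averaged flowering-SY spectrum as the single endmember. The mixture-tuning (infeasibility) stage of the MTMF implementation is not reproduced here; the sketch only illustrates how MF scores near 1 indicate high target abundance relative to the image background.

```python
# Minimal sketch of matched-filter (MF) scoring against a single endmember.
# cube: (rows, cols, bands) reflectance array; endmember: (bands,) target spectrum,
# e.g., the mean of the 10 pure flowering-SY pixels described in the text.
import numpy as np

def matched_filter_scores(cube, endmember):
    """MF score per pixel: ~0 for background, ~1 for pure target abundance."""
    rows, cols, bands = cube.shape
    x = cube.reshape(-1, bands)
    mu = x.mean(axis=0)                          # background mean spectrum
    cov_inv = np.linalg.inv(np.cov(x, rowvar=False))
    d = endmember - mu                           # target vector relative to background
    w = cov_inv @ d / (d @ cov_inv @ d)          # matched-filter weight vector
    scores = (x - mu) @ w
    return scores.reshape(rows, cols)
```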

2.3.2. Infeasibility scores

Infeasibility scores are used to confirm the classification of flowering SY from the MTMF classifier. The best match is indicated by an MF score close to one and an infeasibility score close to zero.27 However, according to Brelsford and Shepherd,28 certain spectral signatures can generate large positive MF scores that appear as false positives in MTMF. In this study, we used the cumulative distribution function to identify an infeasibility score for 36 points where flowering SY was confirmed by GPS ground truthing. These 36 points were distributed across 40 pixels in the IKONOS imagery. We assigned the MF scores of these 40 pixels to five groups to identify the best infeasibility score, which lies between 0.01 and 0.1 and corresponds to the highest MF scores (0.8 ≤ MF ≤ 1.2) (Fig. 3).
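The resulting decision rule can be written as a short sketch, assuming the MF and infeasibility scores are already available as two arrays of the same shape; the threshold values are those reported in the text.

```python
# Minimal sketch of the MF/infeasibility decision rule for flowering SY.
import numpy as np

def flowering_sy_mask(mf, infeasibility,
                      mf_range=(0.8, 1.2), infeas_range=(0.01, 0.1)):
    """True where a pixel is accepted as flowering SY."""
    mf_ok = (mf >= mf_range[0]) & (mf <= mf_range[1])
    feasible = (infeasibility >= infeas_range[0]) & (infeasibility <= infeas_range[1])
    return mf_ok & feasible
```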

Fig. 3 Best infeasibility scores for the 36 points of flowering SY, used to identify the feasibility of matched filter (MF) scores.

2.3.3. Accuracy assessment

We assessed the accuracy of the MTMF and ML classifications of flowering SY against the ground-truthed data. We calculated both user’s and producer’s accuracy for both classification methods. According to Congalton and Green,29 producer’s accuracy is the probability that a reference pixel of a given class is correctly classified (number of correctly classified pixels of a class divided by the total number of reference pixels of that class), while user’s accuracy is the probability that a pixel classified into a given category actually represents that category on the ground (number of correctly classified pixels of a class divided by the total number of pixels classified into that class). The percentage of all classes correctly classified was evaluated using the overall accuracy and the kappa coefficient, which measures the level of agreement beyond chance. We calculated the overall accuracy and kappa coefficient as in Eqs. (1) and (2):

Eq. (1)

$$\mathrm{OA} = \frac{\sum_{k=1}^{q} n_{kk}}{n},$$

Eq. (2)

$$\hat{K} = \frac{n \sum_{k=1}^{q} n_{kk} - \sum_{k=1}^{q} n_{k+} \times n_{+k}}{n^{2} - \sum_{k=1}^{q} n_{k+} \times n_{+k}},$$

where $q$ is the number of rows in the error matrix, $n_{kk}$ is the number of observations in row $k$ and column $k$ of the error matrix, $n_{k+}$ and $n_{+k}$ are the marginal totals of row $k$ and column $k$, respectively, and $n$ is the total number of observations.
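Both measures follow directly from the error matrix; a minimal sketch implementing Eqs. (1) and (2) is given below, with the error matrix arranged as rows = classified labels and columns = reference labels.

```python
# Minimal sketch of overall accuracy [Eq. (1)] and kappa coefficient [Eq. (2)]
# computed from a q x q error matrix (rows: classified, columns: reference).
import numpy as np

def overall_accuracy_and_kappa(error_matrix):
    m = np.asarray(error_matrix, dtype=float)
    n = m.sum()                                           # total observations
    oa = np.trace(m) / n                                  # Eq. (1)
    chance = (m.sum(axis=1) * m.sum(axis=0)).sum()        # sum of n_k+ * n_+k
    kappa = (n * np.trace(m) - chance) / (n ** 2 - chance)  # Eq. (2)
    return oa, kappa

# Example with the ML error matrix of Table 2:
# ml = [[28, 7, 20, 16], [1, 10, 2, 2], [5, 3, 18, 2], [2, 20, 0, 20]]
# overall_accuracy_and_kappa(ml)  ->  (~0.487, ~0.321)
```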

The number of flowering SY trees in Yanagisawanoike Park is limited by the presence of a lake. This made it impossible to take a random sample of at least 50 plots for each land cover class, as would be ideal. Laba et al.30 faced a similar problem due to the limited areas of certain vegetation classes, and suggested using the largest possible number of plots. The numbers of training and test pixels used for each class in the ML and MTMF classifications are shown in Table 1.

Table 1 Number of training pixels and test pixels for each class for maximum likelihood (ML) and mixture tuned matched filtering (MTMF) classifications.

Class | ML training pixels | ML test pixels | MTMF test pixels
Flowering SY | 10 | 36 | 36
Soil | 12 | 40 | 40
Dry grass | 18 | 40 | 40
Evergreen trees | 15 | 40 | 40

3. Results

The percentages of each land cover feature in the IKONOS image of the park were: 19% SY trees, 19% asphalt roads, 18% deciduous trees, 17% evergreen trees, 15% pedestrian roads, 8% grass areas, and 4% lake [Fig. 5(a)].

The MF scores, which represent the target abundance of each pixel, ranged from −2.698 to 2.947 [Fig. 5(b)]. Pixels representing the masked asphalt roads and lake had negative MF scores; these negative scores were interpreted as zero target abundance, as in previous studies.31–33 The 36 points of flowering SY were distributed across 40 pixels with 0.8 ≤ MF ≤ 1.2 [Fig. 5(b)], indicating more than 80% flowering SY per pixel. Pixels with MF scores <0.8 represented bare soil, and MF scores >1.2 represented dry grass and evergreen trees. Infeasibility scores from the MTMF classification ranged from 0.01 to 16.854. Each MF score in the MTMF classification had its own infeasibility score that indicated the class to which the pixel belonged. Pixels identified as flowering SY had infeasibility scores ranging from 0.01 to 0.1 (Fig. 4).

Fig. 4 Infeasibility score compared with MF score for different land cover types (evergreen trees, bare soil, SY, and dry grass).

The IKONOS image used in this study had high variation and contrast among the training classes. Therefore, we supplemented the image with four gray level co-occurrence (variance) bands for the ML classification. However, the TD showed that separability of flowering SY, dry grass, and evergreen trees was poor. The ML classification identified most of the soil pixels as evergreen trees [Fig. 5(c)], even though texture analysis was conducted before ML classification.

Fig. 5 Land classification and identification of flowering SY in Yanagisawanoike Park: (a) dominant features of the study area, (b) MF scores of the MTMF classification (red represents flowering SY, with MF scores ranging from 0.8 to 1.2), and (c) maximum likelihood (ML) classification of the IKONOS image.

The MTMF classification had 62.2% overall accuracy and a kappa coefficient of 0.507, compared with 48.7% overall accuracy and a kappa coefficient of 0.321 for the ML classification. The user’s accuracy of the MTMF classification for flowering SY (48.1%) was higher than that of the ML classification (39.4%). The poor overall accuracy of the ML classification was primarily due to misclassification of soil (user’s accuracy: 37%, producer’s accuracy: 25%). In the ML classification, 60.6% of the pixels classified as flowering SY actually represented other classes, mainly dry grass or evergreen trees [Fig. 5(c)]. However, the producer’s accuracy of the MTMF classification (72.2%) was lower than that of the ML classification (77.7%); MTMF tended to misclassify flowering SY as dry grass or soil.

4. Discussion

Our results indicate that, in terms of overall accuracy and kappa coefficient, MTMF classified flowering SY in an urban park more accurately than ML. However, the producer’s accuracy of the MTMF classification was slightly lower than that of ML due to misclassification of flowering SY pixels as soil or dry grass (Table 2). This may be due to the limited number of available spectral bands (four for IKONOS); MTMF can achieve higher classification accuracy when hyperspectral data are used. Williams and Hunt22 demonstrated that MTMF classification worked well to identify leafy spurge in hyperspectral airborne visible infrared imaging spectrometer (AVIRIS) images. In addition, the existence of an endmember with a stronger signal than flowers, such as dry grass in this study, may have limited the user’s accuracy of the MTMF classification. Therefore, additional endmembers may be needed to improve the performance of MTMF for classifying flowering SY trees.

Table 2 Accuracy assessment for maximum likelihood (ML) and mixture tuned matched filtering (MTMF) classifications of flowering SY trees. The values for each class represent the number of ground-truthed points used to evaluate the accuracy of classification. Reference columns A–D correspond to Somei Yoshino, soil, dry grass, and evergreen trees, respectively.

ML classifier
Class | Label | A | B | C | D | Sum | User’s accuracy (%) | Overall accuracy (%) | Kappa coefficient
Somei Yoshino | A | 28 | 7 | 20 | 16 | 71 | 39.4 | 48.7 | 0.321
Soil | B | 1 | 10 | 2 | 2 | 22 | 37 | |
Dry grass | C | 5 | 3 | 18 | 2 | 28 | 64.2 | |
Evergreen trees | D | 2 | 20 | 0 | 20 | 42 | 47.6 | |
Sum | | 36 | 40 | 40 | 40 | 156 | | |
Producer’s accuracy (%) | | 77.7 | 25 | 45 | 50 | | | |

MTMF classifier
Class | Label | A | B | C | D | Sum | User’s accuracy (%) | Overall accuracy (%) | Kappa coefficient
Somei Yoshino | A | 26 | 10 | 10 | 8 | 54 | 48.1 | 62.2 | 0.507
Soil | B | 5 | 26 | 5 | 5 | 41 | 63.4 | |
Dry grass | C | 5 | 4 | 25 | 5 | 39 | 64.1 | |
Evergreen trees | D | 0 | 0 | 0 | 20 | 20 | 100 | |
Sum | | 36 | 40 | 40 | 40 | 156 | | |
Producer’s accuracy (%) | | 72.2 | 65 | 62.5 | 50 | | | |

In contrast, the ML classifier identified flowering SY with a relatively high producer’s accuracy (Table 2). However, misclassification of soil as evergreen trees was likely the cause of its low overall accuracy. Most of the pixels representing soil were assigned to evergreen trees, and pixels of deciduous trees were often assigned to soil. Cherry blossoms precede the leaf flushing of other deciduous trees, which therefore had no leaves at the time of the imagery; because soil has a higher reflectance than bare branches and trunks, these leafless deciduous trees were often misclassified as soil. Therefore, adding a training class for deciduous trees could improve the accuracy of ML classification.

Plant leaves, rather than flowers, have often been used3–5 to observe plant phenology from remotely sensed data because the spectral signal of flowers is generally weaker than that of leaves. We confirmed that the cherry blossoms of SY have a weaker spectral signal than dry grass (Fig. 2), but MTMF classification shows considerable potential for separating them accurately (Figs. 3 and 5).

5. Conclusion

Our results suggest that MTMF classification is more accurate than ML classification for identifying plant flowering phenology in a highly heterogeneous urban landscape. However, the number of spectral bands can limit the producer’s accuracy of MTMF classification. Therefore, utilization of hyperspectral data with high-spatial resolution such as AVIRIS might be useful for identifying flowering phenology in urban ecosystems.

References

1. B. C. Reed et al., “Measuring phenological variability from satellite imagery,” J. Veg. Sci. 5(5), 703–714 (1994). http://dx.doi.org/10.2307/3235884

2. X. Zhang et al., “Global vegetation phenology from AVHRR and MODIS data,” in Proc. IEEE 2001 Int. Geosci. and Remote Sensing Symp. (IGARSS 2001) (2001). http://dx.doi.org/10.1109/IGARSS.2001.977969

3. X. Zhang et al., “Monitoring vegetation phenology using MODIS,” Remote Sens. Environ. 84(3), 471–475 (2003). http://dx.doi.org/10.1016/S0034-4257(02)00135-9

4. N. Delbart et al., “Remote sensing of spring phenology in boreal regions: a free of snow-effect method using NOAA-AVHRR and SPOT-VGT data (1982–2004),” Remote Sens. Environ. 101(1), 52–62 (2006). http://dx.doi.org/10.1016/j.rse.2005.11.012

5. D. E. Ahl et al., “Monitoring spring canopy phenology of a deciduous broadleaf forest using MODIS,” Remote Sens. Environ. 104(1), 88–95 (2006). http://dx.doi.org/10.1016/j.rse.2006.05.003

6. S. Testa, “Correcting MODIS 16-day composite NDVI time-series with actual acquisition dates,” European J. Remote Sens. 47, 285–305 (2014). http://dx.doi.org/10.5721/EuJRS20144718

7. S. L. Boulter, R. L. Kitching, and B. G. Howlett, “Family, visitors and the weather: patterns of flowering in tropical rain forests of northern Australia,” J. Ecol. 94(2), 369–382 (2006). http://dx.doi.org/10.1111/jec.2006.94.issue-2

8. G. M. Foody and M. E. J. Cutler, “Mapping the species richness and composition of tropical forests from remotely sensed data with neural networks,” Ecol. Modell. 195(1–2), 37–42 (2006). http://dx.doi.org/10.1016/j.ecolmodel.2005.11.007

9. J. Chen et al., “Indicator of flower status derived from in situ hyperspectral measurement in an alpine meadow on the Tibetan Plateau,” Ecol. Indic. 9(4), 818–823 (2009). http://dx.doi.org/10.1016/j.ecolind.2008.09.009

10. A. R. Huete, “A soil-adjusted vegetation index (SAVI),” Remote Sens. Environ. 25(3), 295–309 (1988). http://dx.doi.org/10.1016/0034-4257(88)90106-X

11. C. Munyati, “Wetland change detection on the Kafue Flats, Zambia, by classification of a multitemporal remote sensing image dataset,” Int. J. Remote Sens. 21(9), 1787–1806 (2000). http://dx.doi.org/10.1080/014311600209742

12. J. T. Kerr and M. Ostrovsky, “From space to species: ecological applications for remote sensing,” Trends Ecol. Evol. 18(6), 299–305 (2003). http://dx.doi.org/10.1016/S0169-5347(03)00071-5

13. Y. Zhang et al., “Unsupervised subpixel mapping of remotely sensed imagery based on fuzzy C-means clustering approach,” IEEE Geosci. Remote Sens. Lett. 11(5), 1024–1028 (2014). http://dx.doi.org/10.1109/LGRS.2013.2285404

14. R. A. Schowengerdt, Remote Sensing: Models and Methods for Image Processing, 3rd ed., Academic Press, New York (2006).

15. G. M. Foody, “Hard and soft classifications by a neural network with a non-exhaustively defined set of classes,” Int. J. Remote Sens. 23(18), 3853–3864 (2002). http://dx.doi.org/10.1080/01431160110109570

16. F. Wang, “Fuzzy supervised classification of remote sensing images,” IEEE Trans. Geosci. Remote Sens. 28, 194–201 (1990). http://dx.doi.org/10.1109/36.46698

17. Y. Aono, “Cherry blossom phenological data since the seventeenth century for Edo (Tokyo), Japan, and their application to estimation of March temperatures,” Int. J. Biometeorol. 59(4), 1–8 (2014). http://dx.doi.org/10.1007/s00484-014-0854-0

18. Y. Aono and S. Saito, “Clarifying springtime temperature reconstructions of the medieval period by gap-filling the cherry blossom phenological data series at Kyoto, Japan,” Int. J. Biometeorol. 54(2), 211–219 (2010). http://dx.doi.org/10.1007/s00484-009-0272-x

19. Y. Dian et al., “Comparison of the different classifiers in vegetation species discrimination using hyperspectral reflectance data,” J. Indian Soc. Remote Sens. 42(1), 61–72 (2014). http://dx.doi.org/10.1007/s12524-013-0309-9

20. X. Miao et al., “Detection and classification of invasive saltcedar through high spatial resolution airborne hyperspectral imagery,” Int. J. Remote Sens. 32(8), 2131–2150 (2011). http://dx.doi.org/10.1080/01431161003674618

21. J. R. Jensen, Introductory Digital Image Processing: A Remote Sensing Perspective, 3rd ed., Prentice Hall, USA (2005).

22. A. P. Williams and E. R. Hunt, “Estimation of leafy spurge cover from hyperspectral imagery using mixture tuned matched filtering,” Remote Sens. Environ. 82(2–3), 446–456 (2002). http://dx.doi.org/10.1016/S0034-4257(02)00061-5

23. R. Dehaan et al., “Discrimination of blackberry (Rubus fruticosus sp. agg.) using hyperspectral imagery in Kosciuszko National Park, NSW, Australia,” ISPRS J. Photogramm. Remote Sens. 62(1), 13–24 (2007). http://dx.doi.org/10.1016/j.isprsjprs.2007.01.004

24. J. J. Mitchell and N. F. Glenn, “Subpixel abundance estimates in mixture-tuned matched filtering classifications of leafy spurge (Euphorbia esula L.),” Int. J. Remote Sens. 30(23), 6099–6119 (2009). http://dx.doi.org/10.1080/01431160902810620

25. J. W. Boardman and F. A. Kruse, “Analysis of imaging spectrometer data using N-dimensional geometry and a mixture-tuned matched filtering approach,” IEEE Trans. Geosci. Remote Sens. 49(11), 4138–4152 (2011). http://dx.doi.org/10.1109/TGRS.2011.2161585

26. J. B. Adams and A. R. Gillespie, Remote Sensing of Landscapes with Spectral Images: A Physical Modeling Approach, Cambridge University Press, Cambridge (2006).

27. R. Sugumaran, J. Gerjevic, and M. Voss, “Transportation infrastructure extraction using hyperspectral remote sensing,” in Remote Sensing of Impervious Surfaces, 163–178, CRC Press/Taylor and Francis, Boca Raton, Florida (2007).

28. C. Brelsford and D. Shepherd, “Using mixture-tuned match filtering to measure changes in subpixel vegetation area in Las Vegas, Nevada,” J. Appl. Remote Sens. 8(1), 083660 (2014). http://dx.doi.org/10.1117/1.JRS.8.083660

29. R. G. Congalton and K. Green, Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, Lewis Publishers, New York (1999).

30. M. Laba et al., “Use of textural measurements to map invasive wetland plants in the Hudson River National Estuarine Research Reserve with IKONOS satellite imagery,” Remote Sens. Environ. 114(4), 876–886 (2010). http://dx.doi.org/10.1016/j.rse.2009.12.002

31. T. T. Sankey et al., “Characterizing Western Juniper expansion via a fusion of Landsat 5 Thematic Mapper and lidar data,” Rangeland Ecol. Manage. (2010). http://dx.doi.org/10.2111/REM-D-09-00181.1

32. J. T. Mundt, D. R. Streutker, and N. F. Glenn, “Partial unmixing of hyperspectral imagery: theory and methods” (2007).

33. P. R. Robichaud et al., “Postfire soil burn severity mapping with hyperspectral image unmixing,” Remote Sens. Environ. 108, 467–480 (2007). http://dx.doi.org/10.1016/j.rse.2006.11.027

Biography

Noordyana Hassan received her master’s degree in remote sensing from Universiti Teknologi Malaysia, Malaysia, in 2012. Currently, she is a PhD candidate at Tokyo Metropolitan University, Japan. Her current research focuses on the phenology of urban flowering plants.

Shinya Numata received his PhD in the Department of Biological Science at Tokyo Metropolitan University in 2001. Currently, he is an associate professor in the Department of Tourism Science at Tokyo Metropolitan University, Japan. His research focuses on tropical forest ecology, urban biodiversity and its management, and conservation and management of protected areas.

Tetsuro Hosaka received his PhD from Faculty of Agriculture at Kyoto University in 2010. Currently, he is an associate professor in the Department of Tourism Science at Tokyo Metropolitan University, Japan.

Mazlan Hashim received his PhD in environmental remote sensing from the University of Stirling in 1995. Currently, he is a professor and director of the Institute of Geospatial Science and Technology (INsTEG) at Universiti Teknologi Malaysia, Malaysia.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Noordyana Hassan, Shinya Numata, Tetsuro Hosaka, and Mazlan Hashim "Remote detection of flowering Somei Yoshino (Prunus×yedoensis) in an urban park using IKONOS imagery: comparison of hard and soft classifiers," Journal of Applied Remote Sensing 9(1), 096046 (22 May 2015). https://doi.org/10.1117/1.JRS.9.096046
Published: 22 May 2015
Keywords: Earth observing sensors; high resolution satellite images; image classification; reflectivity; vegetation; accuracy assessment; roads
