In a FLIR image sequence, a target may disappear permanently or may reappear after some frames, and crucial information such as the direction, position, and size of the target is lost. If the target reappears in a later frame, it may not be tracked again because the 3D orientation, size, and location of the target might have changed. We have proposed two methods, FKT-DCCF and FKT-PDCCF, and have carried out several tests using DCCF and PDCCF when a target reappears in the field of view. From those test results we compared the DCCF and PDCCF methods in terms of the percentage of correctly identified targets. Test results using both mid-wave and long-wave FLIR sequences (M1415, L1805, L1702, L1920, L19NS, and L1915) are included to verify the effectiveness of the proposed algorithm.
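The comparison criterion above is a simple detection-rate percentage. A minimal sketch of how such a figure could be tabulated per sequence is shown below; the counts are placeholders for illustration only, not reported results.

```python
# Hypothetical helper for tabulating the percentage of correctly identified
# targets per FLIR sequence; the counts below are placeholders, not results.
def detection_rate(num_correct, num_total):
    """Percentage of reappearing targets that were correctly re-identified."""
    return 100.0 * num_correct / num_total if num_total else 0.0

# Example usage with made-up counts for two of the sequences.
results = {"M1415": (18, 20), "L1805": (15, 20)}
for seq, (correct, total) in results.items():
    print(f"{seq}: {detection_rate(correct, total):.1f}% correctly identified")
```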
In a FLIR image sequence, a target may disappear permanently or may reappear after some frames, and crucial information such as the direction, position, and size of the target is lost. If the target reappears in a later frame, it may not be tracked again because the 3D orientation, size, and location of the target might have changed. To obtain information about the target before it disappears and to detect the target after it reappears, the distance classifier correlation filter (DCCF) is trained manually by selecting a number of chips at random. This paper introduces a novel idea that eliminates the manual intervention in the training phase of DCCF. Instead of selecting the training chips manually and choosing the number of training chips at random, we adopt the K-means algorithm to cluster the training frames and, based on the number of clusters, select the training chips so that there is one training chip per cluster. To detect and track the target after it reappears in the field of view, TBF and DCCF are employed. The conducted experiments using real FLIR sequences show results similar to those of the traditional algorithm, but eliminating the manual intervention is the advantage of the proposed algorithm.
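As an illustration of the chip-selection step, the sketch below clusters per-frame feature vectors with K-means and keeps one chip per cluster. The feature representation, the chip-cropping routine, and the cluster count are assumptions made for the example; the abstract does not fix these details.

```python
# Minimal sketch of K-means-based training-chip selection, assuming each training
# frame is summarized by a precomputed feature vector. Feature choice, chip size,
# and cluster count are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def extract_chip(frame, size=32):
    # Placeholder cropping routine: takes a centered size x size chip; in practice
    # the chip would be cropped around the known target location in the frame.
    h, w = frame.shape[:2]
    r0, c0 = (h - size) // 2, (w - size) // 2
    return frame[r0:r0 + size, c0:c0 + size]

def select_training_chips(frames, features, n_clusters=4):
    """Cluster training frames and return one representative chip per cluster.

    frames   : list of N grayscale frames (2D arrays)
    features : (N, D) array, one feature vector per frame
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(features)

    chips = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        # Representative frame: the member closest to the cluster centroid.
        d = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        chips.append(extract_chip(frames[members[np.argmin(d)]]))
    return chips
```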
Recently, spectral information has been introduced into face recognition applications to improve detection performance under varying conditions. Besides changes in the scale, orientation, and rotation of facial images, expression, occlusion, and lighting conditions change the overall appearance of faces and affect recognition results. To overcome these difficulties, we introduce a new face recognition technique that uses the spectral signatures of facial tissues. Unlike alternative algorithms, the proposed algorithm classifies the hyperspectral imagery corresponding to each face into clusters to recognize the desired face automatically and to eliminate user intervention in the data set. The K-means clustering algorithm is employed to perform the clustering, and the Mahalanobis distance is then computed between clusters to identify the cluster in the data closest to the reference cluster. By identifying that cluster in the data, the face that contains it is identified by the proposed algorithm. Test results using real-life hyperspectral imagery show the effectiveness of the proposed algorithm.
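A minimal sketch of the cluster-matching step is given below, assuming each face's hyperspectral pixels are clustered with K-means and clusters are compared via the Mahalanobis distance between cluster means under the reference cluster's covariance. The covariance choice, regularization, and cluster count are assumptions, not details specified above.

```python
# Sketch: K-means clustering of spectra plus Mahalanobis matching of clusters.
import numpy as np
from sklearn.cluster import KMeans

def cluster_spectra(pixels, n_clusters=5):
    """pixels: (N, B) array of N spectra with B bands; returns list of clusters."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    return [pixels[km.labels_ == c] for c in range(n_clusters)]

def mahalanobis_between(ref_cluster, test_cluster):
    """Distance between cluster means under the reference cluster's covariance."""
    mu_r = ref_cluster.mean(axis=0)
    mu_t = test_cluster.mean(axis=0)
    # Small ridge term keeps the covariance invertible (assumed regularization).
    cov = np.cov(ref_cluster, rowvar=False) + 1e-6 * np.eye(ref_cluster.shape[1])
    diff = mu_t - mu_r
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

def closest_cluster(ref_cluster, test_clusters):
    dists = [mahalanobis_between(ref_cluster, c) for c in test_clusters]
    return int(np.argmin(dists)), min(dists)
```

Under this reading, the face whose hyperspectral cube contains the cluster closest to the reference cluster would be reported as the match.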
The K-means clustering method has been employed in many data analysis applications. This paper develops a target detection system using the K-means algorithm together with a pre-processing step based on the Euclidean distance. The pre-processing step reduces the computational complexity of the K-means algorithm for hyperspectral imagery. After the background pixels have been removed from the data by the pre-processing step, the K-means algorithm is employed to determine the clusters in the rest of the image data cube. Having obtained the clustered data, the objects of interest can easily be detected using the known target signature. The proposed clustering algorithm is successfully applied to real-life hyperspectral data sets in which the objects of interest are efficiently detected. The proposed scheme effectively reduces the convergence time of the K-means algorithm compared to that required by the traditional K-means algorithm.
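The sketch below illustrates one plausible reading of this pipeline: a Euclidean-distance screen discards obvious background pixels before K-means, and the known target signature then picks out the target cluster. The distance threshold, cluster count, and signature-based cluster selection are assumptions made for the example.

```python
# Hedged sketch: Euclidean pre-screening of a hyperspectral cube, K-means on the
# surviving pixels, and target-cluster selection via the known signature.
import numpy as np
from sklearn.cluster import KMeans

def detect_target(cube, target_sig, dist_thresh=0.5, n_clusters=4):
    """cube: (H, W, B) hyperspectral image; target_sig: (B,) known signature."""
    H, W, B = cube.shape
    pixels = cube.reshape(-1, B)

    # Pre-processing: keep only pixels reasonably close to the target signature,
    # which removes most background pixels before clustering.
    dists = np.linalg.norm(pixels - target_sig, axis=1)
    keep = dists < dist_thresh
    candidates = pixels[keep]

    # Cluster the reduced pixel set (far fewer points than the full cube).
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(candidates)

    # The target cluster is the one whose centroid is closest to the signature.
    c_target = np.argmin(np.linalg.norm(km.cluster_centers_ - target_sig, axis=1))

    # Map the target-cluster pixels back to image coordinates as a binary mask.
    mask = np.zeros(H * W, dtype=bool)
    mask[np.where(keep)[0][km.labels_ == c_target]] = True
    return mask.reshape(H, W)
```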
Often sensor ego-motion or fast target movement causes the target to go temporarily out of the field of view, leading to a reappearing-target detection problem in target tracking applications. Since the target leaves the current frame and reenters at a later frame, the reentering location and the variations in rotation, scale, and other 3D orientations of the target are not known, which complicates detection. A detection algorithm has been developed using the Fukunaga-Koontz Transform (FKT) and the distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then passed to the second algorithm, DCCF, which serves as a clutter rejection module; once the target coordinates are determined, the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.
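As a rough illustration of an FKT front end (not the authors' exact implementation), the sketch below builds an FKT basis from vectorized target and clutter training chips and scores a test chip by its energy in the target-dominant directions; the subspace size and the small regularization term are assumptions.

```python
# Illustrative Fukunaga-Koontz Transform (FKT) sketch for candidate detection.
import numpy as np

def fkt_basis(target_chips, clutter_chips, eps=1e-8):
    """target_chips, clutter_chips: (N, D) arrays of vectorized training chips."""
    R_t = target_chips.T @ target_chips / len(target_chips)
    R_c = clutter_chips.T @ clutter_chips / len(clutter_chips)

    # Whiten the sum of the two class correlation matrices.
    d, V = np.linalg.eigh(R_t + R_c)
    P = V @ np.diag(1.0 / np.sqrt(d + eps)) @ V.T

    # Eigenvectors of the whitened target matrix: large eigenvalues favor the
    # target class, small ones favor clutter (the transformed class matrices
    # share eigenvectors, with eigenvalues summing to one).
    lam, F = np.linalg.eigh(P @ R_t @ P.T)
    return P, F[:, ::-1], lam[::-1]  # target-dominant directions first

def fkt_score(chip, P, F, k=10):
    """Energy of a vectorized test chip in the k most target-dominant directions."""
    y = F[:, :k].T @ (P @ chip)
    return float(y @ y)
```

Chips whose FKT score exceeds a chosen threshold would be treated as candidate target images and forwarded to the DCCF clutter rejection stage.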