Importance of detection for video surveillance applications
Javier Varona, Jordi Gonzalez, Ignasi Rius, Juan Jose Villanueva
Abstract
Although detection is the first step of any real video surveillance application, it has received less attention than tracking in video surveillance research. We show, however, that the majority of errors in the tracking task are due to wrong detections. We demonstrate this by experimenting with a multiple-object tracking algorithm based on a Bayesian framework and a particle filter. This algorithm, which we have named iTrack, is specifically designed to work in practical applications: it defines a statistical model of the object appearance to build a robust likelihood function. In addition, we present an extension of a background subtraction algorithm that deals with active cameras. This algorithm is used in the detection task to initialize the tracker by means of a prior density. By defining appropriate performance metrics, the overall system is evaluated to elucidate the importance of detection for video surveillance applications.

1. Introduction

Monitoring public and private sites by means of human operators presents several problems, such as the need to monitor a great number of cameras at the same time and to minimize the operators' distractions. Therefore, a computer vision system that assists human operators is needed. The main difficulties of an automatic video surveillance system are due to the variety of scenes and acquisition conditions. Systems can be designed with one or more cameras, which can be static or mobile, and with different sensors such as color or infrared cameras. In this paper, we deal with large outdoor scenes using an active color camera.

Typically, an automatic video surveillance system involves the following tasks: detection, tracking, and event recognition. The detection task locates objects in the images. Then, the objects’ positions are robustly estimated over time by the tracking task. Lastly, the goal of the event recognition module is to describe what is happening in the scene.

There are two main approaches to object detection in automatic video surveillance applications: temporal differencing and background subtraction. Frame differencing performs well in real time, but fails when a tracked object ceases its motion.1 Background subtraction relies on statistical models to build the appearance model of a static scene.2 Both methods usually require a static camera. Recent advances in robust real-time object detection algorithms allow their use in video surveillance applications. These algorithms search the image for previously learned objects such as pedestrians.3 An important advantage of these algorithms is that they are not restricted to a static camera. In view of this, we present an algorithm that can be used with active cameras. It allows the application of background subtraction techniques to the panoramic scenes typical of video surveillance applications.

Regarding the tracking module, there are works based on a combination of different computer vision algorithms that perform properly in real environments.4, 5 The main difficulty for these tracking algorithms is representing objects' trajectories when new objects appear, when they are occluded, or when they disappear. Managing these cases requires a data-association process, usually based on heuristics. Another possibility is to use a particle filter.6 Particle filters are a possible implementation of optimal Bayesian estimation.7 They can manage multimodal densities to represent the state of multiple objects. However, it is necessary to use an adequate state representation to apply these filters to multiobject tracking. It is possible to include all objects and the background in the state estimation,8 but this approach may require an extremely large number of samples.

Instead, we present a tracking algorithm that manages multiobject tracking by augmenting the state of each tracked object with a label identifying the object. This scheme is completed with a likelihood function whose definition is based directly on the image values of the objects to be tracked. This model can be updated to allow for changes in the object's appearance. Therefore, the algorithm does not depend on environmental conditions, and it can be used in different application scenarios because it does not require any a priori knowledge about the scene or about the appearance and number of agents. It is only necessary to define an appropriate prior density that relates detection and tracking in order to adapt the application to different scenarios. By means of a proper evaluation of the video surveillance system, we are able to show the relationships between the detection and tracking tasks. Specifically, we show experimentally how the performance of the tracking algorithm is affected by the presence of detection errors.

In this paper, we first define a visual tracking method suitable for video surveillance applications. This method is based on a Bayesian framework and a particle filter. Subsequently, we present a background subtraction algorithm for active cameras that is used by the detection task to locate the objects of interest in the scene. In addition, we present a proper definition of a prior density, which relates to both detection and tracking. Finally, performance metrics are defined to evaluate the behavior of the complete system. The obtained results are discussed in the last section to demonstrate the importance of detection for obtaining good results in the tracking task.

2. Image-Based Tracking: iTrack

In this section, we define an estimation algorithm to track people in video surveillance applications, which we have named iTrack. This algorithm is based on the Bayesian probabilistic framework and implemented by using a particle filter. The algorithm’s basic idea is to estimate the state of the object to be tracked by using a likelihood function that is based only on image data. This idea is formalized by defining an appearance model that is continuously updated to take into account the objects’ appearance changes. In addition, by using a particle filter, the detection results are easily included in the estimation algorithm by introducing new particles from a prior density. Then, the algorithm can be used in different application environments without significant changes.

Let $s_t = (x_t, u_t, w_t, M_t)$ be the state vector for an object, where $x_t$ is the 2-D image position, $u_t$ the 2-D velocity, $w_t$ the size (width and height), and $M_t$ the appearance of the object (see Fig. 1).

Fig. 1: State components.

Given a sequence of images, $I_{1:t} = (I_1, \ldots, I_t)$, the posterior probability density of the object's state at time $t$ is expressed as

Eq. 1

$$p(s_t \mid I_{1:t}) = \int p(s_{1:t} \mid I_{1:t}) \, ds_{1:t-1},$$
where $s_{1:t}$ is the object state history, $s_{1:t} = (s_1, \ldots, s_t)$. Applying Bayes' rule and the Markov condition, we obtain

Eq. 2

$$p(s_t \mid I_{1:t}) \propto p(I_t \mid s_t) \int p(s_t \mid s_{t-1}) \, p(s_{t-1} \mid I_{1:t-1}) \, ds_{t-1},$$
where $p(I_t \mid s_t)$ is the likelihood function.

The integral in Eq. 2 is referred to as the temporal prior or the prediction, and $p(s_t \mid s_{t-1})$ is the motion model. In order to define the motion model, we assume the following independence relations between the state parameters:

$$p(x_t, u_t, w_t, M_t \mid x_{t-1}, u_{t-1}, w_{t-1}, M_{t-1}) = p(x_t \mid x_{t-1}, u_{t-1}) \, p(u_t \mid u_{t-1}) \, p(w_t \mid w_{t-1}) \, p(M_t \mid M_{t-1}).$$
We use a smooth motion model for the position, velocity, and size parameters, i.e.,
$$p(x_t \mid x_{t-1}, u_{t-1}) = \eta\big(x_t - (x_{t-1} + u_{t-1}), \sigma_x\big),$$
$$p(u_t \mid u_{t-1}) = \eta(u_t - u_{t-1}, \sigma_u),$$
$$p(w_t \mid w_{t-1}) = \eta(w_t - w_{t-1}, \sigma_w),$$
where $\eta(\mu, \sigma)$ denotes a Gaussian density with mean $\mu$ and standard deviation $\sigma$. The deviations $\sigma_x$, $\sigma_u$, and $\sigma_w$ are defined empirically. To complete the motion model, it is necessary to define the appearance evolution, $p(M_t \mid M_{t-1})$. In probabilistic terms, the density of the appearance model is defined as

Eq. 3

$$p(M_t \mid M_{t-1}) = \delta(M_t - M_{t-1}),$$
where $\delta(\cdot)$ is the Dirac delta function. This model was also used for 3-D people tracking.9

2.1. Appearance Model for the Likelihood Function

To compute the recursive expression 2, we also need a likelihood function, i.e., $p(I_t \mid x_t, u_t, w_t, M_t)$. This function is the probability of observing the image $I_t$ given the object parameters. First, we note that the likelihood function is independent of the velocity parameter. The parameters $x_t$ and $w_t$ define an image region, denoted $I_p$. In order to compare this image region with the object appearance model $M_t$, we apply an affine transformation to the image region:

Eq. 4

$$R = A \, I_p,$$
where $A$ is an affine transformation matrix containing translation and scale parameters. Finally, the complete likelihood function is expressed as

Eq. 5

$$p(I_t \mid x_t, w_t, M_t) = p(R \mid M_t),$$

Eq. 6

$$p(R \mid M_t) = \frac{1}{N} \sum_{(i,j) \in R} p_{ij}(R_{ij} \mid M_{ij,t}),$$
where $N$ is the number of pixels in the region, and $p_{ij}$ is the probability that the value of pixel $(i,j)$ belongs to the distribution of that pixel's appearance model; it is defined as

Eq. 7

$$p_{ij}(R_{ij} \mid M_{ij,t}) = \eta(R_{ij} - M_{ij,t}, \sigma_M),$$
where $\eta(\cdot)$ is a Gaussian density whose standard deviation $\sigma_M$ allows for small changes in object appearance and for acquisition noise. A similar appearance model for dynamic layers is presented in Ref. 10; the main difference is that their model relies on a generalized EM algorithm instead of a particle filter to estimate the objects over time. This definition of the likelihood function is robust to outliers, because their presence (due to clutter and occlusions) does not severely penalize the overall probability measurement.
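As an illustration, the following is a minimal NumPy sketch of this pixel-wise likelihood (Eqs. 6 and 7); the function name, the default value of $\sigma_M$, and the assumption that the region has already been warped to the model size (Eq. 4) are ours, not part of the original implementation.

```python
import numpy as np

def appearance_likelihood(region, model, sigma_m=10.0):
    """Pixel-wise likelihood of Eqs. 6-7: the average Gaussian probability of
    each pixel of the warped region R under the corresponding pixel of the
    appearance model M_t. Averaging (rather than multiplying) keeps a few
    outlier pixels from collapsing the overall score."""
    diff = region.astype(float) - model.astype(float)
    pij = np.exp(-0.5 * (diff / sigma_m) ** 2) / (np.sqrt(2.0 * np.pi) * sigma_m)
    return float(pij.mean())
```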

Expression 3 means that the object appearance does not change over time. Thus, to keep the appearance model correct, it is necessary to adjust it after each estimation step. Once the new state has been estimated from $p(s_t \mid I_{1:t})$, the appearance model is updated using an adaptive rule for each pixel of the model,

Eq. 8

$$\mu_{ij,t} = \mu_{ij,t-1} + \alpha \, (R_{ij,t} - \mu_{ij,t-1}),$$
where $R_{ij,t}$ is the appearance value of pixel $(i,j)$ of the region obtained with the new state parameters. To learn the coefficient $\alpha$, we use the temporal adjustment

Eq. 9

$$\alpha_t = e^{-t}.$$

We have chosen this approximation because the best estimations are computed during the first frames.
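A brief sketch of the adaptive update of Eqs. 8 and 9 follows, under the same warped-region convention as above and with the decaying rate reconstructed as $\alpha_t = e^{-t}$; it is an illustration, not the exact implementation.

```python
import numpy as np

def update_appearance(model, region, t):
    """Per-pixel adaptive update of Eq. 8 with the decaying coefficient of
    Eq. 9: the model adapts strongly in the first frames, when the estimates
    are most reliable, and is nearly frozen afterwards."""
    alpha = np.exp(-t)  # Eq. 9 (as reconstructed above)
    return model + alpha * (region.astype(float) - model)
```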

The expected positions and the marginal density of the $x$ position for different test sequences are shown in Fig. 2. The multimodality of the posterior density in the multiple-object tracking case can be seen in the marginal density of the $x$ position.

Fig. 2: Multiple-object tracking results using iTrack. The marginal density of the horizontal position is displayed at the bottom of each frame.

2.2. Algorithm

In order to make multiple-object tracking possible, it is necessary to represent a multimodal density. Using the Condensation algorithm, we can implement the probabilistic model by means of a particle filter.6 Therefore, the conditional state density, $p(s_t \mid I_{1:t})$, is represented by a sample set $\{s_t^{(n)}\}$, $n = 1, \ldots, N$. In order to represent a multimodal density and to identify each object, we augment the state with a label $l$. The label $l$ associates one specific appearance model with the corresponding samples, allowing the computation of the likelihood function of Eq. 6. Thus, the sample vector is given by

$$s_t^i = (x_t^i, u_t^i, w_t^i, l), \qquad s_t^{i,l} = (x_t^i, u_t^i, w_t^i).$$

From the propagated samples $\{s_t^i\}$, which represent the posterior at time $t$, the state estimation for the object labeled $L$ is computed as the mean of its samples, i.e.,

Eq. 10

$$\hat{s}_{L,t} = \frac{1}{N_L} \sum_{i \,:\, l = L} s_t^{i,l},$$
where $N_L$ is the number of samples for object $L$. However, as the estimation progresses over many frames, this representation may increasingly bias the posterior density estimates towards objects with dominant likelihood.11 This occurs because the probability of propagating a mode is proportional to the cumulative weights of the samples that constitute it. In order to avoid single-target modes absorbing other targets' samples, weights are normalized according to

Eq. 11

$$\hat{\pi}_t^{i,l} = \frac{\pi_t^{i,l}}{\sum_{i=1,\, j=l}^{N} \pi_t^{i,j}} \cdot \frac{1}{N_O},$$
where $N_O$ is the number of objects being tracked. Each weight is normalized according to the total weight of the target's samples. Thus, all targets have the same probability of being propagated.12 The complete algorithm is described in Table 1.

Table 1

iTrack algorithm.

The posterior density at time $t-1$ is represented by the sample set $\{s_{t-1}^i\}$, where $i = 1, \ldots, N$.
In addition, the prior density $p(s_t)$ for time $t$ is assumed to be known at this stage.
Generate the $i$'th of the $N$ samples that represent the posterior at time $t$ as follows:
     1. Predict: Generate a random number $\alpha \in [0, 1)$, uniformly distributed:
       (a) If $\alpha < r$, use the prior $p(s_t)$ to generate $s_t^{i'}$.
       (b) If $\alpha \geq r$, apply the motion model to the sample $s_{t-1}^i$, i.e., draw $s_t^{i'} \sim p(s_t \mid s_{t-1} = s_{t-1}^i)$ using the smooth motion model:
          $x_t^{i'} = x_{t-1}^i + u_{t-1}^i + \xi_x^i$,
          $u_t^{i'} = u_{t-1}^i + \xi_u^i$,
          $w_t^{i'} = w_{t-1}^i + \xi_w^i$.
     2. Correct: Measure and weight the new sample, $s_t^{i'}$, in terms of the image data $I_t$, using the likelihood function of Eq. 6:
         $\pi_t^{i,l} = p(I_t \mid x_t = x_t^{i'}, w_t = w_t^{i'}, M_{t-1}^l)$.
Once the $N$ samples have been generated, normalize the weights applying Eq. 11, and build the cumulative probabilities:
      $c_t^0 = 0$,
      $c_t^i = c_t^{i-1} + \pi_t^i$, $\quad i = 1, \ldots, N$.
Use the values of the cumulative probabilities to generate, by sampling, the new samples $\{s_t^i\}$ that represent the posterior at time $t$.
For each object, estimate the new state by computing the mean of its samples:
      $\hat{s}_{L,t} = \frac{1}{N_L} \sum_{i \,:\, l = L} s_t^i$,
where $N_L$ is the number of samples for object $L$.
Finally, use the new state to update the appearance model.
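For illustration, the following is a condensed NumPy sketch of one iteration of Table 1. The state layout, the callbacks `sample_prior` and `likelihood`, and the default parameter values are assumptions made for the sketch, not the authors' implementation.

```python
import numpy as np

def itrack_step(samples, labels, models, image, sample_prior, likelihood,
                r=0.1, sigmas=(2.0, 1.0, 1.0)):
    """One predict/correct/resample iteration of the iTrack particle filter
    (Table 1). `samples` is an (N, 6) float array holding (x, y, u, v, w, h);
    `labels` is a 1-D integer array assigning each sample to an object;
    `models` maps a label to its appearance model. `sample_prior()` returns a
    (state, label) pair drawn from the detection-based prior p(s_t), and
    `likelihood(image, state, model)` evaluates Eq. 6."""
    N = len(samples)
    sx, su, sw = sigmas
    new_samples = np.empty_like(samples)
    new_labels = labels.copy()

    # 1. Predict: with probability r draw from the prior, otherwise diffuse
    #    the old sample with the smooth motion model.
    for i in range(N):
        if np.random.rand() < r:
            new_samples[i], new_labels[i] = sample_prior()
        else:
            x, y, u, v, w, h = samples[i]
            new_samples[i] = (x + u + np.random.randn() * sx,
                              y + v + np.random.randn() * sx,
                              u + np.random.randn() * su,
                              v + np.random.randn() * su,
                              w + np.random.randn() * sw,
                              h + np.random.randn() * sw)

    # 2. Correct: weight each new sample with the appearance likelihood (Eq. 6).
    pi = np.array([likelihood(image, new_samples[i], models[int(new_labels[i])])
                   for i in range(N)])

    # 3. Normalize per object (Eq. 11) so no single target absorbs the others.
    objects = np.unique(new_labels)
    for l in objects:
        mask = new_labels == l
        pi[mask] /= (pi[mask].sum() + 1e-12) * len(objects)

    # 4. Resample according to the cumulative weights.
    idx = np.random.choice(N, size=N, p=pi / pi.sum())
    samples, labels = new_samples[idx], new_labels[idx]

    # 5. Per-object state estimate: the mean of its samples (Eq. 10).
    estimates = {int(l): samples[labels == l].mean(axis=0)
                 for l in np.unique(labels)}
    return samples, labels, estimates
```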

3. Detection

The iTrack algorithm requires a prior density, $p(s_t)$, for the tracking process to be initialized. Subsequently, this prior density is used to initialize new objects appearing in the scene. In this section, we define the prior density by using the results obtained in the detection task. First, we present a background subtraction algorithm for active cameras that is used for locating the objects of interest in the scene. This method extends a robust background subtraction algorithm13 to active cameras. It uses Gaussian-mixture-based adaptive background modeling and is therefore robust to changes in the scene that are not due to the objects of interest. The problem with this algorithm is that it requires a static scene. One possibility is to build a scene set with one image for each acquisition parameter of the active camera; however, that is impractical owing to the great number of parameter values. Alternatively, one could find the minimum set of camera parameters needed to cover the entire surveillance perimeter and constrain the camera motions to these parameters.14 A less expensive method is to model the scene as a panorama.15, 16 Therefore, our objective is to use the mixture-of-Gaussians scene model with active cameras by means of a panoramic representation of the scene.
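For reference, the following is a simplified single-channel sketch of a per-pixel mixture-of-Gaussians background update in the spirit of Ref. 13; the constants, the matching rule, and the function name are illustrative choices, not the parameters used in our system.

```python
import numpy as np

def mog_pixel_update(means, variances, weights, pixel,
                     alpha=0.01, match_sigmas=2.5, bg_weight=0.2):
    """Update the K Gaussians of one background pixel with a new observation.
    `means`, `variances`, `weights` are length-K float arrays. Returns the
    updated parameters and whether the pixel is classified as background."""
    d = np.abs(pixel - means) / np.sqrt(variances)
    k = int(np.argmin(d))
    if d[k] < match_sigmas:                     # observation matches mode k
        weights *= (1.0 - alpha)
        weights[k] += alpha
        means[k] += alpha * (pixel - means[k])
        variances[k] += alpha * ((pixel - means[k]) ** 2 - variances[k])
        matched = True
    else:                                       # replace the weakest mode
        j = int(np.argmin(weights))
        means[j], variances[j], weights[j] = pixel, 15.0 ** 2, alpha
        matched = False
    weights /= weights.sum()
    is_background = matched and weights[k] >= bg_weight
    return means, variances, weights, is_background
```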

3.1. Panoramic Scene Model

In video surveillance applications, active cameras are usually used to monitor a wide area with sufficient image resolution. These cameras scan the entire surveillance perimeter to detect events of interest. Another possibility is to use a static camera with a wide field of view to locate objects and an active camera to track them; however, this approach requires geometric and kinematic coupling between the two cameras.17 Therefore, we focus on using an active camera with pan and tilt degrees of freedom.

First, we explain how to build the panorama, assuming that the camera rotates about its optical center. In order to build the panorama, it is necessary to project each image onto a sphere corresponding to the camera's field of view. Next, we convert the spherical panorama into a planar surface to represent the scene model. In addition, we assume that the camera's parameters are known when building the scene model: in our case, the pan angle $\theta$, the tilt angle $\psi$, and the camera's field of view in both directions, $\beta$ and $\gamma$ (we assume a fixed focal length and therefore do not consider the zoom parameter). First, in order to project each point $(x, y)$ of the camera image onto the sphere, we apply

Eq. 12

$$\theta_x = \theta + \frac{x \, \beta}{S_x},$$

Eq. 13

$$\psi_y = \psi + \frac{y \, \gamma}{S_y},$$
where $S_x$ and $S_y$ are the horizontal and vertical image sizes, respectively, and $x, y$ are the pixel coordinates with respect to the image center, that is, $x \in [-S_x/2, S_x/2]$ and $y \in [-S_y/2, S_y/2]$. Next, the image axes are matched with the angular values of the transformed pixels given by the previous expressions. For example, Fig. 3 shows a panorama that covers 100 deg in pan and 20 deg in tilt. To avoid lens distortion, we only take into account the central region of each image.
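A sketch of the mapping of Eqs. 12 and 13, together with a possible discretization into panorama coordinates, is given below; the angular-resolution parameter and the function names are our assumptions.

```python
def pixel_to_angles(x, y, pan, tilt, beta, gamma, Sx, Sy):
    """Eqs. 12-13: map a pixel, given relative to the image center
    (x in [-Sx/2, Sx/2], y in [-Sy/2, Sy/2]), to panorama angles in degrees."""
    theta_x = pan + x * beta / Sx
    psi_y = tilt + y * gamma / Sy
    return theta_x, psi_y


def angles_to_panorama(theta_x, psi_y, theta_min, psi_min, deg_per_pixel):
    """Discretize the angles into (row, col) of the planar panorama, assuming
    the panorama starts at (theta_min, psi_min) with a fixed angular resolution."""
    col = int(round((theta_x - theta_min) / deg_per_pixel))
    row = int(round((psi_y - psi_min) / deg_per_pixel))
    return row, col
```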

Fig. 3: Panorama for parking monitoring.

This process for creating panoramas has two main problems: brightness changes and the appearance of parts of objects in motion. The first problem arises because no brightness correction functions are used. The second occurs if there are objects in motion when the images used to build the panorama are acquired. In our approach, we use a scene model based on a mixture-of-Gaussians parameterization of each pixel, which solves both problems. The process consists in creating the panorama following the previously explained steps and using the mean value of the most probable Gaussian as the panorama pixel value.

Once the panoramic scene is modeled, object detection uses the known values of the pan and tilt camera parameters to obtain the corresponding piece of the panoramic scene model. This is achieved by indexing the scene when it is created. To speed up computation, we use a lookup table with the correspondence between the position of each pixel within the image and its position in the panorama for the different camera parameters (see Fig. 4).

Fig. 4: Indexing of a panoramic scene model.

Figure 4 shows that both problems in building panoramas, brightness uniformity and objects in motion, are solved by using the panoramic scene model based on mixtures of Gaussians. For each registered piece of the scene, it is possible to detect the objects in motion; therefore, they can be discarded when building the panorama and do not appear in the final scene model. In addition, by creating the panorama from the mean values of the most probable Gaussian density, the problem of brightness changes between consecutive pieces is eliminated. For these two reasons, our method provides a robust way of creating panoramas in the presence of objects in motion and smooth changes of illumination, which is an improvement over methods that create panoramas assuming static scenes. We want to point out, however, that a similar algorithm, presented in Ref. 18, can be viewed as a generalization of this idea beyond video surveillance applications; that background subtraction algorithm has been applied to scene modeling using sequences from hand-held cameras.

In order to evaluate our panoramic scene model, two classes of objects have been defined: pedestrians and other. The "other" class includes cars and algorithm errors such as reflections or small movements of objects in the scene. In order to classify objects, we have modeled the size characteristics of the bounding box of each detected object, as well as its temporal continuity. Objects detected during $k$ consecutive frames are associated by proximity. During the $k$ images, the object is classified separately in each frame, and a voting process then assigns its final class.

To measure the performance of the algorithms, we are interested in the number of false positives, or index of false alarms, and the number of false negatives, or index of losses. Knowing the index of losses would require monitoring all the experiments. However, our objective is to define an evaluation method that does not need human monitoring, i.e., an automatic evaluation. Therefore, we only measure false positives, by storing the images of all objects detected by the algorithm. In this way, at the end of the experiment, it is possible to determine the total number of objects and the number of false alarms for each type of object. Formally, we define the index of false alarms, $m_i$, for each type of object $i$ as

Eq. 14

$$m_i = 1 - \frac{B_i}{A_i},$$
where $A_i$ is the number of objects detected of class $i$, and $B_i$ is the number of objects classified correctly. A good classification gives a value of $m_i$ close to 0.
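As a small check of Eq. 14, the sketch below reproduces the indices reported in Table 2 from the detection counts; the helper name is ours.

```python
def false_alarm_index(detected, false_positives):
    """Eq. 14: m_i = 1 - B_i / A_i, where B_i = A_i minus the false positives."""
    correct = detected - false_positives
    return 1.0 - correct / detected

# Values from Table 2: pedestrians (82 detections, 22 false alarms) and
# "other" (993 detections, 70 false alarms).
print(round(false_alarm_index(82, 22), 2))    # 0.27
print(round(false_alarm_index(993, 70), 2))   # 0.07
```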

The results obtained after running the algorithm for several hours in real environments are shown in Table 2. The algorithm has been implemented on a PC platform and runs at 25 frames/s using a Sony EVI-D31 pan-tilt video camera. Using this active camera, we build the panoramic scene model by setting the appropriate pan and tilt values to cover the entire scene.

Table 2

Evaluation of the panoramic scene model.

Class         Detected objects $A_i$    False positives $A_i - B_i$    Index of false alarms $m_i$
Pedestrians   82                        22                             0.27
Other         993                       70                             0.07

It is necessary to point out that although there are errors, the majority of them are not critical. For example, even if a pedestrian is not classified correctly in the first $k$ frames, this can be remedied in the following ones. Figure 5 shows some of the objects classified as pedestrians.

Fig. 5: Detected pedestrians.

3.2. The Prior Density

As explained above, the prior density $p(s_t)$ is used to initialize the tracking process at the first frame and to initialize new objects as they appear in the scene. We define the prior density by expressing, in a probabilistic way, the foreground regions segmented according to the previously explained panoramic scene model.

The sample vector at time $t$ is given by $s_t^{i,l} = (x_t^i, u_t^i, w_t^i)$, where $x_t^i$ is the position, $u_t^i$ the velocity, and $w_t^i$ the size of the object. Then, in order to define $p(s_t)$, it is necessary to define prior densities for the position, $p(x)$, the velocity, $p(u)$, and the size, $p(w)$, components of the object's state vector.

By means of the detection algorithm, pixels are classified into two categories: foreground and background. By grouping foreground pixels into blobs, it is possible to define the prior density of the position component, $x$, by means of a mixture-of-Gaussians density

Eq. 15

$$p(x) = \sum_{j=1}^{B} P(j) \, p(x \mid j),$$
where $B$ is the number of blobs detected [so $P(j) = 1/B$] and $p(x \mid j) = \eta(b_j, \Sigma_B)$. Here $b_j$ is the blob mean position, and $\Sigma_B$ is a constant for all blobs, which is fixed a priori. Similarly, the size component $w$ is formulated in terms of the blobs' sizes. This prior density formulation was used in Ref. 19 as an importance function in the combination of low-level and high-level approaches for hand tracking.
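The sketch below shows how a new particle could be drawn from this detection prior (Eq. 15); the blob data structure and the spread parameters are hypothetical, and the assignment of a label (new versus existing object) is omitted.

```python
import numpy as np

def sample_prior(blobs, sigma_pos=5.0, sigma_size=2.0):
    """Draw one state sample from the detection prior of Eq. 15: pick a blob
    uniformly (P(j) = 1/B), then perturb its centroid and bounding-box size
    with fixed Gaussians. Each blob is a dict with keys 'center' and 'size'."""
    blob = blobs[np.random.randint(len(blobs))]
    x = np.asarray(blob['center'], float) + np.random.randn(2) * sigma_pos
    w = np.asarray(blob['size'], float) + np.random.randn(2) * sigma_size
    u = np.zeros(2)  # velocity placeholder; see the initialization discussion below
    return np.concatenate([x, u, w])
```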

Regarding the velocity component, there are three different initialization possibilities. The first is to set the velocity to zero; the problem is that the object usually appears in the scene already in motion. In that case, there would be an important difference between the velocity estimate and the real velocity, and the algorithm would take time to stabilize. As a consequence, the appearance model could be corrupted and the tracking algorithm would not work correctly. The second possibility is to make an initial estimate of the object's velocity with an optical flow algorithm, considering the mean displacement of the blob's pixels. The problem with this approach is that the optical flow computation would slow down the whole process; in addition, if the object is occluded, this estimate would be wrong. The third possibility is to assume the object's temporal continuity, that is, to detect the object in several consecutive frames to ensure that it is not a detection error. In these consecutive frames, detections are associated using only distances between the position components. Once the object is considered stable, it is possible to make the initial estimate of its velocity.

All these possibilities have been tested, and the last one gave the best results. Temporal continuity ensures a good initialization for the object; the drawback is that the object takes more time to start being tracked. The main conclusion is that one frame is not enough to correctly initialize the objects to track. Hence, a more accurate initialization is necessary, detecting the object in several consecutive frames, to ensure the algorithm's correct performance.

4. Performance Evaluation

Nowadays, the evaluation of visual tracking algorithms still constitutes an open problem, as shown at the International Workshops on Performance Evaluation of Tracking and Surveillance (PETS).20 Two main objectives of PETS are the development of a common evaluation framework and the establishment of a reference set of sequences to allow true comparison between different approaches. From this reference set, a 20-s test sequence has been selected in which four different pedestrians appear. We use this sequence to present the results of our proposed performance evaluation procedure and to show the results of the iTrack algorithm using the target detection algorithm described in the previous section.

In essence, metrics that evaluate the performance of a visual tracking algorithm require the ground-truth trajectory of each object. However, ground-truth data are very hard to obtain in video surveillance applications. Another possibility is to define a set of heuristic measures related to the visual tracking application.21 These heuristic measures are divided into two groups: cardinality measures and event-based measures. Cardinality measures check whether the number of tracked objects corresponds to the true number of objects within the scene. However, they do not assess whether the identification of each object is maintained over time. For example, if the numbers of wrong appearances and disappearances were equal at a particular time instant, the tracker would misbehave without being detected by cardinality measures.

Event-based measures allow checking the consistency of the identification of tracked objects over time. To this end, the occurrence of a set of predefined events is annotated. Which events are chosen depends on the application scenario, but they should be generic enough to be easily obtained. An event consists of a label $e$ and the time instant $t$ at which the event has been observed. These events constitute the ground-truth data used to compute performance measures and the basis for comparison with the tracking system's results.

It is expected that the labels of detected events will usually coincide, but not the exact time instants at which they occur. The following events are considered: Enter, Exit, Occluded, In Group, and Reenter. A group is defined as a set of objects that appear together in the image; it may consist of just one object (see Fig. 6). All events and their causes are described in Table 3.

Fig. 6: In Group objects.

Table 3

Considered events and their causes.

Event       Cause
Enter       The object appears in the image
Exit        The object disappears from the image
Occluded    The scene occludes the object
In Group    Another object occludes the object
Reenter     End of occlusion, or rejoining of an object that was tracked singly

To report these events, the iTrack algorithm does not require a new definition; it only uses the prior density and the likelihood values as follows:

  • Enter: The prior density detects the object’s first occurrence, and it is maintained during k consecutive frames to properly initialize the object’s appearance model and its velocity.

  • Exit: The position components of a sample are propagated out of the image limits, and its weight is set to 0. Therefore, this sample does not appear in the posterior density.

  • Occluded: When an occlusion occurs, the likelihood value decreases significantly, and this can be detected by the system.

  • In Group: The numerical behavior is identical to that in the Occluded event. The difference is that it is possible to maintain the object’s location by tracking the object that occludes it.

  • Reenter: For short occlusions it is possible to identify this event easily, because several samples survive the occlusion. For long occlusions, it is necessary to use the appearance model to recognize the object again and to distinguish this event from Enter.

To complete the description, we add another event: Tracked. Usually, this event is not included in the event table. Instead, it is assumed that the Tracked event starts after an Enter or Reenter event and ends when a new event from Table 3 occurs. Finally, to prevent model degeneracy, the appearance model is not updated during the Occluded and In Group events.

Next, we define two event-based measures that can be used for the evaluation of visual tracking algorithms in complex image sequences where many agents can appear. The main idea is to build a table of events versus time, which is compared with the table of results obtained by the visual tracking algorithm. The first measure, $C_n$, is based only on the coincidence of observed events. The second measure, $C_l$, is based on computing similarities in the reported objects' labels. Both measures reflect the percentage of images in which a correct correspondence of events exists. Thus, the tracking can be properly evaluated, because the trajectory of each detected object is embedded between events. According to the first measure, $C_n$, one object in a particular event has a valid correspondence when the vision system has also found the same event. In the second measure, $C_l$, a valid correspondence is found only when the label also coincides. In order to verify this second measure, the object is manually labeled $L_i$ at its initial detection, where the index $i$ refers to the object's label. As a result, if the tracker loses an object during several frames and recovers it afterwards but fails to identify the recovered object, its label will change and the $C_l$ measurement will be penalized.
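The following sketch shows how $C_n$ and $C_l$ can be accumulated from interval-based event annotations; the interval representation is a hypothetical simplification of the event tables, and the worked values correspond to the example discussed below.

```python
def event_measures(gt_intervals, sys_intervals):
    """Per-object C_n and C_l in frames. Each interval is a tuple
    (t_start, t_end, event, label); C_n counts frames where the system reports
    the same event for the object, and C_l additionally requires the same label."""
    cn = cl = 0
    for gs, ge, g_event, g_label in gt_intervals:
        for ss, se, s_event, s_label in sys_intervals:
            overlap = min(ge, se) - max(gs, ss)
            if overlap > 0 and g_event == s_event:
                cn += overlap
                if g_label == s_label:
                    cl += overlap
    return cn, cl

# Worked example for object O1 under W4 (see Table 4 and the text below):
# tracked as L1 from frame 105 to 279, then as L5 from 296 to 410.
gt = [(105, 410, 'Tracked', 'L1')]
w4 = [(105, 279, 'Tracked', 'L1'), (296, 410, 'Tracked', 'L5')]
print(event_measures(gt, w4))   # (288, 174)
```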

Both measures have been used to compare the iTrack algorithm with the W4 algorithm.5 W4 also uses a scene model and an estimation procedure for object tracking. However, W4 relies on a preliminary data-association process to compute the correspondence between each object and its estimation filter. Both the annotated and the computed events for the test sequence of the PETS database are shown in Table 4. The system's results for several frames of this sequence are shown in Fig. 7.

Fig. 7: Visual tracking results of the iTrack algorithm for the test sequence.

Table 4

Annotated and algorithm-computed events for the test sequence.

t     Annotated event    W4                    iTrack
105   Enter, O1          Enter, L1=O1          Enter, L1=O1
115   Enter, O2          Enter, L2=O2          Enter, L2=O2
188   Enter, O3          Enter, L3=O3          Enter, L3=O3
208                      Exit, L2
239                                            Exit, L2
244                      L3=O2
244                      Enter, L4=O3
279                      Exit, L1
296                      Enter, L5=O1
322   Enter, O4          Enter, L6=O4          Enter, L4=O4
410   Exit, O1           Exit, L5              Exit, L1
416                      In Group, (L3, L4)
445   Exit, O2           Exit, (L3, L4)
450   Exit, O3                                 Exit, L3

The interpretation is carried out by applying the defined measures, $C_n$ and $C_l$, to the computed events of Table 4. Next, we detail the procedure for the first object that appears in the scene, object O1. The W4 algorithm found a correct correspondence for both measures from $t = 105$ to $t = 279$, i.e., until the algorithm loses the object. This object appears again at $t = 296$ and is tracked until $t = 410$ with a different label, L5. Therefore, $C_n = 174 + 114 = 288$ and $C_l = 174$ for the W4 algorithm. On the other hand, the iTrack algorithm does not lose this object; therefore $C_n = 305$ and $C_l = 305$. The results for all objects within the scene are shown in Table 5.

Table 5

Performance measurements for the events of Table 4.

Object    Number of frames    W4 $C_n$    W4 $C_l$    iTrack $C_n$    iTrack $C_l$
O1        305                 288         174         305             305
O2        330                 265         93          124             124
O3        277                 228         56          277             277
O4        178                 178         178         178             178
Total     1090                959         501         884             884
          100%                88%         46%         81%             81%

It should be noted that the image sequence is noisy and complex, owing to the similarity in appearance between the agents and the background. Table 5 shows that W4 is an algorithm that relies on the detection task rather than on tracking. As a result, it obtains good measurements for $C_n$, but fails to identify objects correctly and therefore gives poor results in the $C_l$ measurements. The errors reflected in $C_l$ are mainly due to shadows [see Fig. 8(a)] and to the proximity between two objects in the image caused by the scene perspective [see Fig. 8(b)].

Fig. 8: Wrong detections. Left: due to shadows. Right: due to the scene perspective.

The best results on $C_l$ are obtained with the iTrack algorithm, which is more robust thanks to its use of particle filters and adaptive appearance models. The main drawback of our approach is that the temporal-continuity requirement of the prior density can prevent a very noisy object from reappearing; this occurs, for example, for object 2 in the test sequence. Another important problem that remains open is that if the object's appearance model is not correctly initialized, or if it suddenly changes, the object can be lost. Nevertheless, the results in Table 5 show that in real environments, under normal conditions, the algorithm's performance is good enough.

5. Discussion and Conclusion

From the results obtained in Sec. 4, it can be stated that wrong detections cause an important decrease in the performance of the tracking task. It is therefore important to provide accurate results in the detection task to ensure the correct performance of the overall video surveillance system. Another possibility is the integration of detection and tracking. For example, Ref. 22 presents an online selection of discriminative features that improves tracking by maximizing the contrast between the object and its surroundings; however, multiple-object tracking is not considered there. Other approaches propagate detections over time23 or use detection algorithms in the likelihood function.24 Such integration can easily be achieved with particle filters by introducing new particles from a prior density based on the detection results at each time instant, in a fashion similar to our iTrack algorithm.

In this regard, we consider that the most important contribution of our algorithm is the use of an adaptive appearance model that is automatically updated to improve tracking. Thus, the algorithm can be used in different application environments without significant changes. Nevertheless, if more visual cues are considered, the algorithm can be made more robust.25 For example, color is currently the most widely used cue in image-based tracking algorithms.26

Another contribution of this work is the definition of two performance measures for the evaluation of a video surveillance system. These measures are computed automatically by the system; they only require a simple manual annotation describing the real events of the sequence. We have used these measures to evaluate the performance of our tracking and object detection algorithms in a complex outdoor scene. Our future work will include the evaluation of our tracking algorithm on indoor scenes using the CAVIAR database.27 Nevertheless, we consider that our system's evaluation is already sufficient for the paper's main objective, that is, to emphasize the importance of detection in video surveillance applications.

Acknowledgments

This work is supported by EC grants IST-027110 for the HERMES project and IST-045547 for the VIDI video project, and by the Spanish MEC under projects TIN2006-14606, TIN2007-67896, and CONSOLIDER-INGENIO 2010 (CSD2007-00018). Jordi Gonzàlez and Javier Varona also acknowledge the support of a Juan de la Cierva and a Ramon y Cajal (cofunded by the European Social Fund) Postdoctoral Fellowship from the Spanish MEC, respectively.

References

1. A. Lipton, H. Fujiyoshi, and R. Patil, "Moving target classification and tracking from real-time video," IEEE Workshop on Applications of Computer Vision (WACV'98), 8–14 (1998).
2. T. Horprasert, D. Harwood, and L. Davis, 983–988 (2000).
3. P. Viola, M. Jones, and D. Snow, "Detecting pedestrians using patterns of motion and appearance," Int. J. Comput. Vis. 63(2), 153–161 (2005). https://doi.org/10.1007/s11263-005-6644-8
4. R. Collins, A. Lipton, and T. Kanade, ANS 8th Int. Topical Mtg. on Robotics and Remote Systems, 1–15 (1999).
5. I. Haritaoglu, D. Harwood, and L. Davis, IEEE Trans. Pattern Anal. Mach. Intell. 22, 809 (2000). https://doi.org/10.1109/34.868683
6. M. Isard and A. Blake, Int. J. Comput. Vis. 29, 5 (1998). https://doi.org/10.1023/A:1008078328650
7. Y.-C. Ho, IEEE Trans. Autom. Control 9, 333 (1964).
8. M. Isard and J. MacCormick (2001).
9. H. Sidenbladh, M. Black, and D. Fleet (2000).
10. H. Tao, H. Sawhney, and R. Kumar, IEEE Trans. Pattern Anal. Mach. Intell. 24, 75 (2002).
11. H. Tao, H. Sawhney, and R. Kumar (1999).
12. D. Rowe, I. Rius, J. Gonzàlez, and J. Villanueva, 384–393 (2005).
13. C. Stauffer and W. Grimson, IEEE Trans. Pattern Anal. Mach. Intell. 22, 747 (2000). https://doi.org/10.1109/34.868677
14. Y. Ye, J. K. Tsotsos, E. Harley, and K. Bennet, Mach. Vision Appl. 12, 32 (2000).
15. T. Wada and T. Matsuyama (1996).
16. M. Nicolescu, G. Medioni, and M.-S. Lee, 169–174 (2000).
17. R. Horaud, D. Knossow, and M. Michaelis, Mach. Vision Appl. 16, 331 (2006).
18. Y. Ren, C.-S. Chua, and Y.-K. Ho, Mach. Vision Appl. 13, 332 (2003).
19. M. Isard and A. Blake, 893–908 (1998).
20. International Workshops on Performance Evaluation of Tracking and Surveillance (PETS), http://peipa.essex.ac.uk/ipa/pix/pets/ (2006).
21. S. Pingali and J. Segen, 33–38 (1996).
22. R. Collins, Y. Liu, and M. Leordeanu, IEEE Trans. Pattern Anal. Mach. Intell. 27, 1631 (2005). https://doi.org/10.1109/TPAMI.2005.205
23. R. Verma, C. Schmid, and K. Mikolajczyk, IEEE Trans. Pattern Anal. Mach. Intell. 25, 1215 (2003).
24. M. Han, W. Xu, H. Tao, and Y. Gong, 874–861 (2004).
25. P. Pérez, J. Vermaak, and A. Blake, Proc. IEEE 92, 495 (2004). https://doi.org/10.1109/JPROC.2003.823147
26. K. Nummiaro, E. Koller-Meier, and L. Van Gool, Image Vis. Comput. 21, 99 (2003). https://doi.org/10.1016/S0262-8856(02)00129-4
27. R. Fisher, 1–5 (2004).

Biography

Javier Varona is a Ramon y Cajal research fellow of the Spanish Government at the Universitat de les Illes Balears (UIB). He received his doctoral degree in computer engineering from the Universitat Autònoma de Barcelona (UAB) and the Computer Vision Center (CVC) in 2001. His PhD thesis was on robust visual tracking. His research interests include visual tracking and human motion analysis. Currently, he leads a research project on building natural interfaces based on computer vision, funded by the Spanish Government (TIN2007-67896). He is a member of the ACM.

Jordi Gonzàlez obtained his PhD degree from the UAB in 2004. At present he is a Juan de la Cierva postdoctoral researcher at the Institut de Robòtica i Informàtica Industrial (UPC-CSIC). The topic of his research is the cognitive evaluation of human behaviors in image sequences. The aim is to generate both linguistic descriptions and virtual environments that explain the observed behaviors. He has also participated as a WP leader in the European projects HERMES and VIDI-Video, and as a member of the euCognition network. He cofounded the Image Sequence Evaluation research group at the CVC in Barcelona.

Ignasi Rius obtained his BSc (Hons) in computer science engineering in 2003 from the Universitat Autònoma de Barcelona (UAB). He received his MSc in 2005 and is now pursuing his PhD at the Computer Vision Center (CVC) of the UAB in Barcelona. His work is focused on the modeling and analysis of human motion for visual tracking and recognition applications. He is an active member of the Image Sequence Evaluation (ISE) research group at the CVC.

Juan José Villanueva received a BSc in physics from the University of Barcelona in 1973 and a PhD degree in computer science from the UAB in 1981. Since 1990, he has been a full professor in the Computer Science Department at the UAB. He promoted the Computer Vision Center (CVC) and has been its director since its foundation. He was a cofounder of the Image Sequence Evaluation (ISE) research group at CVC. His research interests are focused on computer vision, in particular human sequence evaluation.

©(2008) Society of Photo-Optical Instrumentation Engineers (SPIE)
Javier Varona, Jordi Gonzalez, Ignasi Rius, and Juan Jose Villanueva "Importance of detection for video surveillance applications," Optical Engineering 47(8), 087201 (1 August 2008). https://doi.org/10.1117/1.2965548
KEYWORDS: Detection and tracking algorithms; Video surveillance; Cameras; Panoramic photography; Motion models; Optical tracking; Particle filters