In the context of cognitive vehicles, it is essential to have full awareness of the vehicle's location in order to gain better insight into the operational environment and enhance the driver's perception. Although the global navigation satellite system (GNSS) is commonly a practical solution for vehicle localization, it may suffer quality deterioration, accuracy decay, and even track loss due to signal blockage and reflection off buildings and large structures. These limitations usually manifest in big cities with dense traffic and busy roads, which means that losing the sense of location, even for a short time, might impair the cognitive system's decision-making process and jeopardize the safe driving of the vehicle. Consequently, cognitive vehicles should not rely solely on GNSS for vehicle localization. In this work, we establish that cognitive-vehicle location awareness can be achieved through an internal process of interacting with the surrounding environment and observing its static reference elements. This approach is inspired by the way the human brain assesses its position in a known environment by recognizing landmarks and reference objects. Our proposed solution allows the cognitive vehicle to ascertain its location by interacting with its surroundings: we train a deep neural network to detect reference objects, build prior knowledge of the vehicle's environment, and estimate the vehicle's location by recognizing the object-detection pattern. Finally, the proposed solution is supported by promising results from a real-world scenario, and further work to improve it is outlined.
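As a rough illustration of this idea (not the paper's implementation), the sketch below matches the multiset of object classes reported by a detector against a prior map of reference objects and returns the best-matching candidate location; the class names, map entries, and similarity measure are all hypothetical assumptions.

```python
# Hypothetical sketch: estimate the vehicle location by matching the pattern of
# detected reference objects against prior knowledge of the environment.
from collections import Counter

# Prior knowledge (built offline): for each candidate location, the reference
# objects the detector is expected to see there.
prior_map = {
    "intersection_A": ["traffic_light", "traffic_light", "stop_sign", "billboard"],
    "intersection_B": ["traffic_light", "fire_hydrant", "billboard"],
    "bridge_ramp":    ["street_lamp", "street_lamp", "guard_rail"],
}

def pattern_similarity(expected, observed):
    """Jaccard-style overlap between two multisets of detected object classes."""
    exp, obs = Counter(expected), Counter(observed)
    intersection = sum((exp & obs).values())
    union = sum((exp | obs).values())
    return intersection / union if union else 0.0

def estimate_location(detections, prior_map):
    """Return the candidate location whose expected pattern best matches."""
    scores = {loc: pattern_similarity(exp, detections) for loc, exp in prior_map.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Detections produced by the onboard neural network for the current frame.
detected = ["traffic_light", "stop_sign", "traffic_light", "pedestrian"]
print(estimate_location(detected, prior_map))  # ('intersection_A', 0.6)
```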
KEYWORDS: Fuzzy logic, Probability theory, Electronic support measures, Data fusion, Data modeling, Information fusion, Remote sensing, Radar, Databases, Visualization
In several practical applications of data fusion, and more precisely in object identification problems, we need to combine imperfect information coming from different sources (sensors, humans, etc.), the resulting uncertainty being naturally of different kinds. In particular, one piece of information may naturally be expressed by a membership function while another may best be represented by a belief function. Usually, information modeled in the fuzzy sets formalism (by a membership function) concerns attributes like speed, length, or Radar Cross Section, whose domains of definition are continuous. However, the object identification problem refers to a discrete and finite framework (the number of objects in the database is finite and known). This thus implies a natural but unavoidable change of domain. To respect the intrinsic character of the uncertainty arising from the different sources and fuse it in order to identify an object among a list of possible ones in the database, we need (1) to use a unified framework in which both fuzzy sets and belief functions can be expressed, and (2) to respect the natural discretization of the membership function through the change of domain (from attribute domain to frame of discernment). In this paper, we propose to represent both fuzzy sets and belief functions by random sets. While the link between belief functions and random sets is direct, transforming fuzzy sets into random sets involves the use of α-cuts for the construction of the focal elements. This transformation usually generates a large number of focal elements, often unmanageable in a fusion process. We propose a way to reduce the number of focal elements based on parameters such as the desired number of focal elements, the acceptable distance from the approximated random set to the original discrete one, or the acceptable loss of information.
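For readers unfamiliar with the α-cut construction, the following sketch shows the standard transformation of a discrete membership function into a consonant random set, together with one simple reduction heuristic (keeping the k largest-mass focal elements and renormalizing). The reduction criteria described in the paper are richer than this, so the code is only an illustrative assumption, not the authors' algorithm.

```python
# α-cut construction: the α-cuts of a discrete membership function become nested
# focal elements, and each focal element receives as mass the difference between
# consecutive membership levels.

def fuzzy_to_random_set(membership):
    """membership: dict mapping each object of the frame to a value in [0, 1]."""
    levels = sorted(set(membership.values()) - {0.0})  # distinct non-zero α levels
    focal_elements, previous_alpha = [], 0.0
    for alpha in levels:
        cut = frozenset(x for x, mu in membership.items() if mu >= alpha)  # α-cut
        focal_elements.append((cut, alpha - previous_alpha))
        previous_alpha = alpha
    return focal_elements  # (focal set, mass) pairs; masses sum to max(membership)

def reduce_focal_elements(random_set, k):
    """Simple reduction: keep the k focal elements of largest mass, renormalize."""
    kept = sorted(random_set, key=lambda fm: fm[1], reverse=True)[:k]
    total = sum(m for _, m in kept)
    return [(f, m / total) for f, m in kept]

mu = {"obj1": 1.0, "obj2": 0.8, "obj3": 0.8, "obj4": 0.3, "obj5": 0.1}
random_set = fuzzy_to_random_set(mu)
print(reduce_focal_elements(random_set, 2))
```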
KEYWORDS: Probability theory, Fuzzy logic, Data fusion, Data modeling, Sensors, Information fusion, Data processing, Defense and security, Information theory, Associative arrays
For several years, researchers have explored the unification of the theories enabling the fusion of imperfect data and have finally considered two frameworks: the theory of random sets and conditional event algebra. Traditionally, the information is modeled and fused in one of the known theories: Bayesian, fuzzy sets, possibilistic, evidential, or rough sets. Previous work has shown what kind of imperfect data each of these theories can best deal with. So, depending on the quality of the available information (uncertain, vague, imprecise, etc.), one particular theory seems to be the preferred choice for fusion. However, in a typical application, the various sources provide different kinds of imperfect data. The classical approach is then to model and fuse the incoming data in a single, previously chosen theory. In this paper, we first introduce the various kinds of imperfect data and then the theories that can be used to cope with this imperfection. We also present the existing relationships between them and detail the most important properties of each theory. We finally propose random sets theory as a possible framework for unification, and show how the individual theories can fit within this framework.
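As a toy illustration of the unification idea (again, an assumption-laden sketch rather than the paper's formalization), a Bayesian probability distribution can be embedded as a random set whose focal elements are singletons, while a Dempster-Shafer basic probability assignment already has the form of a random set.

```python
def probability_to_random_set(p):
    """A Bayesian probability distribution becomes a random set whose focal
    elements are singletons, each carrying its probability as mass."""
    return [(frozenset({x}), mass) for x, mass in p.items() if mass > 0]

def belief_to_random_set(bpa):
    """A Dempster-Shafer basic probability assignment already is a random set:
    the focal elements are the subsets with non-zero mass."""
    return [(frozenset(subset), mass) for subset, mass in bpa.items() if mass > 0]

# Hypothetical object identification frame {tank, truck, car}.
print(probability_to_random_set({"tank": 0.6, "truck": 0.3, "car": 0.1}))
print(belief_to_random_set({("tank", "truck"): 0.7, ("car",): 0.2,
                            ("tank", "truck", "car"): 0.1}))
```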