The project's main goal was to create an analytical service platform for crime forecasting, one that can strengthen the ability to prevent and combat crime on the basis of verifiable forecasts and optimize the use of available police forces and resources. In forecasting criminal events over time, future events are linked to a sequence of historical ones through time series of irregularly spaced observational data and through exogenous variables affecting crime, in particular factors related to the natural, social, economic, legal, and political environment in which the forecast crime level and structure are embedded. Developing adequate data-based models of crime threat may require an appropriate combination of criminal event history, which determines the risk level, and geographic data characterizing the areas for which the threat is predicted. The article presents a data-based methodology for a crime forecasting system together with exemplary operating results. The final evaluation verified the forecasts against actual data for selected categories of crime, with a view to optimizing the use of forces and resources and identifying proposals for changes in criminal policy.
The article deals with the problem of computing leverages for multi-output neural networks. The leverages can subsequently be used to determine which learning examples have the strongest influence on the model. Computing them may therefore provide researchers with important insights about the constructed model; more specifically, they can be used to detect overfitting. A step-by-step algorithm for computing the Jacobian matrix and the leverages is presented, along with an example application to a synthetic problem.
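As a sketch of the underlying computation (not necessarily the paper's exact algorithm): given the Jacobian Z of the model outputs with respect to its parameters, one row per training example, the leverage of example i is the i-th diagonal element of the hat matrix Z(ZᵀZ)⁻¹Zᵀ. A minimal numpy illustration on a linear toy model (the function name `leverages` is illustrative):

```python
import numpy as np

def leverages(J):
    """Leverages from the Jacobian J (n_examples x n_params):
    diagonal of the hat matrix H = J (J^T J)^{-1} J^T."""
    # Pseudo-inverse for numerical robustness near rank deficiency.
    H = J @ np.linalg.pinv(J.T @ J) @ J.T
    return np.diag(H)

# Toy linear model y = a*x + b: the Jacobian columns are [x, 1].
x = np.linspace(0.0, 1.0, 20)
J = np.column_stack([x, np.ones_like(x)])
h = leverages(J)
# Leverages lie in [0, 1] and sum to the number of parameters;
# unusually large h[i] flags an example with strong influence.
```

Examples with leverages near 1 dominate the fit locally, which is what makes the diagnostic useful for overfitting detection.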
KEYWORDS: Feature selection, Neural networks, Principal component analysis, Neurons, Process modeling, Visualization, Transform theory, Data compression, Matrices, Data processing
In this paper the idea of a deep learning classifier is developed. The effectiveness of a discriminative classifier, such as a multilayer perceptron or a support vector machine, can be improved by adding data preprocessing blocks: orthogonal feature selection (the Gram-Schmidt method) and nonlinear principal component analysis. We present a case study of various deep learning system structures (scenarios).
This paper presents a way to detect and track a ball using OpenCV and a Kinect sensor. Object and people recognition and tracking are increasingly popular topics nowadays. The described solution detects the ball based on a color range set by the user and captures information about the ball's position in three dimensions. The position data can be stored on a computer and used, for example, to display the trajectory of the ball.
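The range-based detection step can be sketched without the actual OpenCV/Kinect APIs: threshold the color image by the user-set range, take the centroid of the matching pixels, and read the depth map at that point. The function name and constants below are illustrative, a pure-numpy stand-in for the `cv2.inRange` plus moments pipeline:

```python
import numpy as np

def ball_position_3d(color_img, depth_img, lo, hi):
    """Centroid of pixels whose color falls in the user-set range
    [lo, hi], combined with the depth reading at that centroid."""
    mask = np.all((color_img >= lo) & (color_img <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                       # no ball in this frame
    cx, cy = int(xs.mean()), int(ys.mean())
    return cx, cy, float(depth_img[cy, cx])

# Synthetic 10x10 frame with an orange "ball" patch at rows/cols 4..6.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[4:7, 4:7] = (255, 128, 0)
depth = np.full((10, 10), 2.0)            # ball 2 m from the sensor
pos = ball_position_3d(img, depth, lo=(200, 100, 0), hi=(255, 160, 50))
```

Collecting `pos` over consecutive frames gives the stored 3-D trajectory mentioned in the abstract.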
KEYWORDS: Neural networks, Data modeling, Computer programming, Neurons, Detection and tracking algorithms, Process modeling, Systems modeling, Data processing, Image processing, Data conversion
This article summarises the results of implementing a Graph Neural Network classifier. The Graph Neural Network model is a connectionist model capable of processing various types of structured data, including non-positional and cyclic graphs. In order to operate correctly, the GNN model must implement a transition function that is a contraction map, which is assured by imposing a penalty on the model weights. This article presents research results concerning the impact of the penalty parameter on the model training process and the practical decisions made during the GNN implementation process.
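A much-simplified illustration of such a penalty, for a linear transition x' = Wx + b, where being a contraction requires the norm of W to stay below 1 (the threshold `mu` and weight `beta` are illustrative names, and the real GNN penalty acts on the transition function's Jacobian rather than a single weight matrix):

```python
import numpy as np

def contraction_penalty(W, mu=0.9, beta=1.0):
    """Penalty pushing transition weights toward a contraction map:
    zero while the spectral norm of W stays below mu < 1,
    quadratic above it."""
    norm = np.linalg.norm(W, 2)           # largest singular value
    return beta * max(0.0, norm - mu) ** 2

W_ok = np.diag([0.5, 0.3])                # already a contraction
W_bad = np.diag([1.5, 0.3])               # expands along one axis
```

Added to the training loss, a term like this leaves compliant weights untouched while penalizing any direction in which the transition expands distances.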
The paper presents a method of training set selection aimed at improving the generalization of neural networks. The data search is based on estimating the influence of individual examples on the network, expressed by influential statistics. This active learning method is presented using a didactic example. It can be concluded that the designed neural network for regression achieved a high generalization score while avoiding overfitting when trained on the minimal training set.
KEYWORDS: Image processing, 3D modeling, Image segmentation, Unmanned aerial vehicles, 3D image processing, Image enhancement, Digital filtering, Systems modeling, Photogrammetry, Imaging systems
Unmanned Aerial Systems (UAS) are extensively used in diverse fields, wherever inexpensive and easy-to-deploy platforms are required for close-range remote sensing.
Applications proposed in archaeology to date include ortho-photography and 3-D modeling. On the other hand, image processing and feature detection methods, well developed in other fields, are hardly used.
After reviewing technologies and methods for UAS-based surveying and surface modeling, we propose feature detection methods (e.g. line detection, texture segmentation) dedicated to extracting structures in the images that are significant for archaeological survey, planning, and documentation, and show results on selected case studies.
The paper presents a method of tuning the support vector machine hyperparameters by minimizing an estimate of the leave-one-out error known as the radius/margin bound. The quality of the method, in terms of classification accuracy and generalization rate, was tested against an exhaustive grid search in hyperparameter space using a 2-dimensional Banana dataset.
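For reference, the bound being minimized has the standard radius/margin form (stated here in its usual textbook notation, which may differ in constants from the paper's exact formulation):

```latex
E_{\mathrm{LOO}} \;\le\; \frac{1}{n}\,R^{2}\,\lVert w\rVert^{2}
\;=\; \frac{1}{n}\,\frac{R^{2}}{\rho^{2}}
```

where R is the radius of the smallest sphere enclosing the mapped training points in feature space, ρ = 1/‖w‖ is the margin, and n is the number of training examples. Both R and ‖w‖ depend smoothly on the hyperparameters, which is what makes minimizing the bound an attractive alternative to an exhaustive grid search.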
The paper describes a solution to feature selection from amino acid sequences in the phosphorylation prediction problem. We show that even for short sequences, variable selection leads to better classification performance. Moreover, the simplicity of the final models allows for better data understanding and can be used by an expert for further analysis. The feature selection process is divided into two parts: i) a classification tree is used to find the most relevant positions in the amino acid sequences; ii) the contrast pattern kernel is then applied for pattern selection. This work summarizes the research done on the classification of short amino acid sequences. The results allowed us to propose a general scheme of amino acid sequence analysis.
In this paper a new unmanned aerial vehicle simulator is described. The application is capable of generating a 3D environment from a graphical map input, and the vehicle's motion is controlled by the Vector Field Histogram method, using radar heads as the perception data source.
The paper presents a method of creating an environment model in a collision avoidance system for unmanned aerial vehicles (UAV). The environment model is generated by procedures processing data from on-board equipment and digital maps. The main sensor providing information about the current situation around the UAV is a radar obstacle detector. Each detected object is defined by parameters such as distance, speed, and radial zone number. The method is based on the idea of the certainty grid introduced in the vector field histogram method, which is used as a probabilistic representation of the obstacles. Tests of the developed algorithm were performed in a simulated environment.
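The certainty-grid update can be sketched in a few lines (the cell indexing, increment, and saturation cap below are illustrative values, not the paper's constants; the usual decay of cells without detections is omitted for brevity):

```python
def update_certainty_grid(grid, detections, inc=3, cap=15):
    """Certainty-grid update in the spirit of the vector field
    histogram: each radar detection (a cell index) raises that cell's
    certainty value, saturating at cap so one persistent obstacle
    cannot grow without bound."""
    for cell in detections:
        grid[cell] = min(grid.get(cell, 0) + inc, cap)
    return grid

grid = {}
for _ in range(6):                 # repeated radar returns from one obstacle
    update_certainty_grid(grid, [(4, 7)])
```

Cells with high certainty values are then treated as probable obstacles when the avoidance maneuver is planned.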
This work, after a preliminary feasibility study using a Matlab environment simulation, defines the design and the real hardware testing of a new bio-inspired decision chain for UAV sense-and-avoid applications. Relying on a single, cheap visible-light camera sensor, computer vision, bio-inspired, and automatic decision algorithms have been adopted and implemented on a specific ARM embedded platform through C++/OpenCV coding. A first processing run on a data set actually captured in flight is presented.
A system for simultaneous multi-obstacle recognition and tracking is proposed. Based on the novel Hierarchical Temporal Memory algorithm, it is designed for application to vision problems but is generally not constrained to them. Thanks to its modular and mostly parallel architecture, it can easily be implemented in a distributed environment, attaining significant computation speed, and is thus suited for real-time processing tasks such as visual data processing in mobile robotics. Derived from the standard neural network paradigm, the system can extract information concerning the position, relative speed, and type of an obstacle in a dynamically changing environment. It can easily be enhanced for basic prediction tasks.
In this work we present a contrast pattern kernel for strings. In this kernel, feature extraction is based on contrast patterns, which are substrings (patterns) common in the positive class but rare in the negative one. The presented idea can also be viewed as an extension of the spectrum kernel, investigated in previous work [11]. The extension consists in incorporating feature selection based on contrast patterns into the spectrum kernel. The contrast pattern kernel is therefore presented with reference to the spectrum kernel, and the test results were compared with those obtained for the spectrum kernel.
The kernels were tested on data sets obtained from the Phospho.ELM database. These data comprise amino acid sequences to which specific enzymes (protein kinases) were assigned. The enzymes catalyze the phosphorylation reaction for these sequences.
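A toy sketch of the idea, not the paper's exact selection criterion: count k-mers (the spectrum kernel's features), keep only those markedly more frequent in the positive class, and compute the kernel over that reduced feature set. The threshold `min_ratio` and all function names are illustrative:

```python
from collections import Counter

def kmers(s, k):
    """Multiset of length-k substrings of s (the spectrum features)."""
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

def contrast_patterns(pos, neg, k, min_ratio=3.0):
    """Keep k-mers whose count in the positive class exceeds
    min_ratio times their (plus-one smoothed) count in the negative
    class -- a hypothetical stand-in for the contrast-pattern step."""
    p, n = Counter(), Counter()
    for s in pos: p.update(kmers(s, k))
    for s in neg: n.update(kmers(s, k))
    return {m for m in p if p[m] / (n[m] + 1) >= min_ratio}

def cp_kernel(x, y, k, patterns):
    """Spectrum kernel restricted to the selected contrast patterns."""
    a, b = kmers(x, k), kmers(y, k)
    return sum(a[m] * b[m] for m in patterns)

# Tiny example: "SP"/"PS" dominate the positive class, "AA" the negative.
pats = contrast_patterns(["SPSPSP", "ASPSP"], ["AAAA"], k=2)
```

With `patterns` equal to the full k-mer set, `cp_kernel` reduces to the plain spectrum kernel, which is exactly the relationship the abstract describes.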
In this paper, starting from the GOFR algorithm, a new Forward Regression algorithm for landmine detection and localization using thermal methods is presented. The efficiency of this algorithm is demonstrated by showing a valid representation of the typical temperature waveforms recorded after heating the ground surface, and by the detection of temperature anomalies due to the presence of hidden objects. Optimizations of the algorithm are then shown, aiming at a significant reduction of sampling density in space and time.
In this work we present string kernels as a method of representing biological data. These data are sequences of symbols representing the amino acids that make up proteins.
This paper presents the results of tests done on data obtained from the Centre of Oncology in Warsaw. These data comprise amino acid sequences to which specific enzymes (protein kinases) were assigned. The enzymes catalyze the phosphorylation reaction for these sequences.
The results of the tests were compared with those obtained during previous investigations made at the Faculty of
Electronics and Information Technology, Warsaw University of Technology.
The paper presents a new approach to automatically constructing and training Radial Basis Function (RBF) neural networks based on the Differential Evolution (DE) algorithm. The method, called Differential Evolution-Radial Basis Function Network (DE-RBFN), is tested on approximation tasks with exemplary one- and two-dimensional Gaussian functions. Experiments are performed in the Matlab environment. The results show that applying DE-RBFN makes it possible to obtain an optimally sparse network architecture by tuning the position and width of each basis function. The performance of the method is better than that of other related procedures applied to RBF networks.
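The DE mechanics behind such a method can be sketched with a generic DE/rand/1/bin loop, here tuning the centre and width of a single Gaussian basis function on a toy one-dimensional target. This is a minimal sketch of plain differential evolution, not the paper's DE-RBFN (which evolves whole network architectures); all names and constants are illustrative:

```python
import math
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           iters=200, seed=0):
    """Classic DE/rand/1/bin: mutate with a scaled difference of two
    random members, binomially cross over with the target, keep the
    trial if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)       # force at least one mutated gene
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jr) else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            fc = f(trial)
            if fc <= cost[i]:
                pop[i], cost[i] = trial, fc
    k = min(range(pop_size), key=lambda j: cost[j])
    return pop[k], cost[k]

# Toy task: recover centre c=1.0 and width w=0.5 of exp(-(x-c)^2 / w).
xs = [i / 10 for i in range(-20, 40)]
target = [math.exp(-(x - 1.0) ** 2 / 0.5) for x in xs]
def mse(p):
    c, w = p
    return sum((math.exp(-(x - c) ** 2 / max(w, 1e-6)) - t) ** 2
               for x, t in zip(xs, target)) / len(xs)

(best_c, best_w), err = differential_evolution(mse, [(-3, 3), (0.01, 2)])
```

The greedy one-to-one replacement is what makes DE well suited to tuning continuous basis-function parameters.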
This paper presents new techniques for landmine detection and localization using thermal methods. The described methods use both dynamic and static analysis. The work is based on datasets obtained from the Humanitarian Demining Laboratory of Università La Sapienza di Roma, Italy.
The paper describes a novel method for the structural optimization of the least squares support vector classifier. Virtual leave-one-out residuals are applied as the criterion for selecting the most influential data. The analytic form of the solution yields a substantial reduction in computational cost. The presented method eliminates the main drawback of LS-SVM classifiers: the lack of sparseness in the solution. The quality of the method was tested on artificial data sets: the two moons problem and the Ripley data set.
This paper presents the application of differential evolution, an evolutionary algorithm, to a single-objective optimization problem: tuning the hyperparameters of a least-squares support vector machine classifier. The goal was to improve the classification of patients with sustained ventricular tachycardia after myocardial infarction, based on a signal-averaged electrocardiography dataset received from the Medical University of Warsaw. The applied method attained a classification rate of 96% on the SVT+ group.
A crucial problem in machine learning is finding a representative set of data for building a model, for both classification and approximation tasks. In this paper we present the orthogonal least squares method for feature selection. The presented method was used to find the most important features for identifying patients with sustained ventricular tachycardia after myocardial infarction (SVT+). We show that with the reduced set of descriptors used in the classification process, we obtain better results than with the full set.
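The greedy Gram-Schmidt idea behind orthogonal least squares selection can be sketched as follows: at each step, pick the feature whose component orthogonal to the already-selected features explains the most of the target. A generic numpy sketch (function name and stopping rule are illustrative, not the paper's exact procedure):

```python
import numpy as np

def ols_select(X, y, n_features):
    """Forward orthogonal least squares feature selection: greedily
    choose the column that, after Gram-Schmidt orthogonalization
    against the chosen ones, has the largest squared projection of y."""
    X = X.astype(float)
    selected, basis = [], []
    for _ in range(n_features):
        best, best_score, best_w = None, -1.0, None
        for j in range(X.shape[1]):
            if j in selected:
                continue
            w = X[:, j].copy()
            for q in basis:                 # remove already-explained part
                w -= (w @ q) * q
            nw = np.linalg.norm(w)
            if nw < 1e-12:                  # column is redundant
                continue
            w /= nw
            score = (w @ y) ** 2            # variance of y it explains
            if score > best_score:
                best, best_score, best_w = j, score, w
        selected.append(best)
        basis.append(best_w)
    return selected

# y depends only on columns 0 and 2; column 1 is pure noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2]
```

Because each candidate is scored only through its orthogonalized component, correlated duplicates of already-selected features get near-zero scores and are naturally skipped.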
This paper shows advanced computer graphics techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots, and security applications. The cost of the measurement system is extremely high; therefore, a simulation tool was designed. The simulation provides an opportunity to execute algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. The Axis Aligned Bounding Box (AABB) technique and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.
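The AABB test at the core of such a simulator is the standard slab method: a simulated LRF beam hits a box exactly when its per-axis entry/exit intervals overlap. A minimal sketch (hit/miss only; a full simulator would also return the hit distance):

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: intersect the ray with each axis-aligned slab and
    keep the running intersection of the entry/exit parameter
    intervals; the ray hits the box iff that interval stays valid."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:          # parallel ray outside the slab
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
        if t_near > t_far:                # intervals no longer overlap
            return False
    return True
```

Casting one such ray per LRF beam angle against the scene's bounding boxes yields the simulated range scan; the CUDA variant mentioned above parallelizes exactly this per-beam loop.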
This paper presents a method of recognizing the risk of sustained ventricular tachycardia and flicker in patients after myocardial infarction, based on high-resolution, signal-averaged electrocardiography. The described semi-supervised method is a combination of k-means clustering and a support vector machine classifier. The work is based on a dataset obtained from the Medical University of Warsaw. During the learning process, only 5% of the example labels were used. An evolutionary optimization of the coefficients for each signal parameter was executed, which allowed the most important parameters to be identified. The classification method achieved a high recognition rate of about 94.9%.
The main goal of the presented work was the construction of a neural network for the detection of deep defect centers in semi-insulating materials. The element of novelty is the implementation of a local cluster function combined with the leave-one-out method, used to determine the appropriate structure of the neural net.
This paper presents the application of neural networks to the risk recognition of sustained ventricular tachycardia and flicker in patients after myocardial infarction, based on high-resolution electrocardiography. The work is based on a dataset obtained from the Medical University of Warsaw. The studies were performed on one multiclass classifier and on binary classifiers. For each case the optimal number of hidden neurons was found. The effect of data preparation (normalization and the proper selection of parameters) was considered, as well as the influence of the applied filters. The best neural classifier contains 5 hidden neurons, with the input ECG signal represented by 8 parameters. This neural network classifier achieved a successful recognition rate of up to 90% on the test data set.
The least-squares support vector machine (LS-SVM) can be obtained by solving a simpler optimization problem than that in the standard support vector machine (SVM). Its shortcoming is the loss of sparseness, which usually results in slow testing speed. Several pruning methods have been proposed to improve the sparseness of an LS-SVM trained on the whole training dataset. Here, a selection of significant samples is proposed to train an LS-SVM on a reduced dataset. A dataset of electrocardiograms (ECG) from 376 patients was used to assess the proposed algorithm.
In this paper the application of an on-line support vector machine to spectral surface approximation is presented. The experimental data were obtained by photocurrent decay measurement as a function of time and temperature for a sample of neutron-irradiated silicon. This approach made it possible to extract the deep level center defect parameters: the activation energy and the pre-exponential factor.
KEYWORDS: Source mask optimization, Spectroscopy, Gallium arsenide, Temperature metrology, Data analysis, Optimization (mathematics), Telecommunications, Data modeling, Associative arrays, Transform theory
We propose a new approach to extracting defect center parameters in semi-insulating GaAs. The experimental data are obtained by high-resolution photoinduced transient spectroscopy (HR-PITS). Two algorithms have been introduced: the support vector machine with sequential minimal optimization (SVM-SMO) and the relevance vector machine (RVM). These methods perform the approximation of the Laplace surface. The advantages of the proposed methods are good approximation accuracy, low complexity, and excellent generalization. We developed the SVM-RVM-PITS system, which enables graphical representation of the Laplace surface, definition of a local area for defect parameter extraction, a choice between the SVM and RVM methods for approximation, calculation of the Arrhenius line factors, and finally extraction of the parameters of the defect centers.
This paper presents a method for the detection of deep defect centres in semi-insulating materials using a neural network application. The innovation of this work is the implementation of a local cluster activation function within the standard neural network scheme.
KEYWORDS: Intelligence systems, Control systems, Laser systems engineering, Mobile robots, Fuzzy logic, Robotic systems, Machine learning, Decision support systems, Robotics, Fuzzy systems
In this paper a new intelligent data processing system for a mobile robot is described. The robot's perception uses the LSM - Laser System Measurement. The innovative fast hybrid decision system is based on fuzzy ARTMAP supported by a decision tree. A virtual robotics laboratory was implemented to carry out the experiments.
For the first time, large-scale support vector machine algorithms are used to extract defect parameters in semi-insulating (SI) GaAs from a high-resolution photoinduced transient spectroscopy experiment. Through smart decomposition of the data set, the SVMTorch algorithm made it possible to obtain a good approximation of the analyzed correlation surface with a parsimonious model (one with a small number of support vectors). The parameters of the deep level defect centers extracted from the SVM approximation are of good quality compared to the reference data.
The purpose of this paper is to present the Least Squares Support Vector Machine (LS-SVM) applied to the investigation of deep level defects in semi-insulating gallium arsenide (SI GaAs). The LS-SVM was used to approximate the spectral surface computed as a result of high-resolution photoinduced transient spectroscopy (HRPITS). Deep level defect parameters were extracted based on the spectral surface approximation and the Arrhenius equation. Various LS-SVM modifications were implemented to achieve good estimation quality.
Nonparametric inference error, the error arising from estimating the regression function based on a labeled set of training examples, can be divided into two main contributions: the bias and the variance. The neural network is one of the existing models in nonparametric inference whose bias/variance trade-off is hidden within the network architecture.
In recent years, new and powerful tools for neural network selection have been invented to address the bias/variance dilemma, and the results obtained with the implemented solutions were satisfying [11,12]. We exploited the measures introduced in these works to implement a genetic algorithm for training neural networks. This method enables a reliable generalization error estimate for the neural model. Estimating the error performance makes it possible to drive the genetic evolution correctly, leading to a fitted model with the desired characteristics. After a brief description of the estimation technique, we present the genetic algorithm implementation on artificial data as a test. Finally, we present the results of the fully automatic algorithm for NN training and model selection applied to the investigation of the defect structure of semi-insulating materials, based on photo-induced transient spectroscopy experiments.
KEYWORDS: Data modeling, Neural networks, Genetic algorithms, Optimization (mathematics), Genetics, Data processing, Process modeling, Neurons, Data conversion, Systems modeling
The selection of neural networks for function approximation is well known and widely described in many recent papers. This study extends the understanding of the problem to different areas of optimization. Typically, selection of the best model reduces to searching for models that best fit the leave-one-out criterion. This work combines the leave-one-out criterion with genetic algorithm optimization methods and implements them with respect to the Pareto optimum. The algorithm constructed in this study is based on the presented methods and was applied to a semi-insulating materials approximation problem as well as to model selection on synthetic data.
Support vector machines with a Gaussian kernel are used in classification tasks with linearly non-separable data. The Gaussian kernel is parametrized by two values (hyperparameters): C and γ. Hyperparameter selection, also known as model selection, affects the generalization performance of the classifier. Retaining high generalization performance is vital to achieving good prediction results on unknown datasets. There is no strict rule for proper model selection, and the range of hyperparameter values is wide, so this is in general a time-consuming task. In our approach, a genetic algorithm is exploited to find optimal hyperparameter values.
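The search loop can be sketched with a tiny real-coded GA; in practice the two genes would be log10(C) and log10(γ), and the fitness would be the classifier's cross-validation error. All operators and constants below are a generic illustration, not the paper's exact algorithm, and the quadratic `err` merely stands in for a real validation-error surface:

```python
import random

def ga_select(fitness, bounds, pop_size=12, gens=30, mut=0.3, seed=1):
    """Tiny real-coded elitist GA: keep the best half, breed children
    by blend crossover of two elites plus Gaussian mutation, clip to
    the search bounds, and return the best individual found."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]        # blend crossover
            child = [min(max(g + rng.gauss(0, mut), lo), hi)   # mutate + clip
                     for g, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Toy "validation error" with its optimum at log10(C)=2, log10(gamma)=-1.
err = lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2
best = ga_select(err, bounds=[(-3, 3), (-3, 3)])
```

Compared with a grid search, the population concentrates evaluations near promising hyperparameter regions instead of spending them uniformly over the whole range.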
This paper presents a hardware-based approach to accelerating the sequential minimal optimization algorithm for training support vector machines. Programmable graphics processors and processors supporting SIMD extensions were chosen as examples of hardware on which the algorithm can be accelerated. Such hardware is becoming more popular in consumer-level PCs with each passing year, and therefore the solution proposed in this paper can benefit from existing hardware that has not been utilized until now.
In this paper the construction of a low-cost mobile robot built from LEGO bricks is presented. The robot's vision software is based on cellular neural network image processing. The robot is able to find a target (a shining light) in a dedicated environment starting from any initial position. The most severe limitation of the robot's functionality is its speed.
KEYWORDS: Semiconductors, Temperature metrology, Data centers, Data modeling, Optoelectronics, Spectroscopy, Silicon, Digital recording, Error analysis, Data analysis
New algorithms have been introduced for the extraction of defect center parameters in semiconductors from experimental data obtained by photoinduced transient spectroscopy. The photocurrent decays are measured as a function of time and temperature. The defect centers act as traps of charge carriers; hence, each trap creates a specific fold on the correlation surface and on the Laplace surface, and the ridges of the folds follow the Arrhenius law. The quality of the data analysis depends mainly on the applied approximation methods. It is shown that modern methods based on margin maximization and on regularization give excellent results. The following approximation methods are analyzed: the support vector machine and the sparse least-squares support vector machine. The important advantages of these models are good approximation accuracy, an analytic representation of the considered surfaces, low complexity, and excellent generalization. Hence, they make it possible to obtain more exact values for the investigated defects and better discrimination between observed defects.
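The final extraction step along an Arrhenius ridge can be sketched directly: the standard Arrhenius form for a trap's emission rate is e(T) = A·T²·exp(-Ea/(kB·T)), so fitting a line to ln(e/T²) versus 1/T yields the activation energy from the slope. A minimal sketch on synthetic data (the pre-exponential factor A and Ea below are invented test values):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(temps, rates):
    """Least-squares fit of the Arrhenius line
    ln(e/T^2) = ln(A) - Ea/(kB*T); returns Ea in eV."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(e / t ** 2) for t, e in zip(temps, rates)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * K_B

# Synthetic trap with Ea = 0.5 eV and pre-exponential factor 1e8.
temps = [200.0 + 10 * i for i in range(10)]
rates = [1e8 * t ** 2 * math.exp(-0.5 / (K_B * t)) for t in temps]
ea = activation_energy(temps, rates)
```

In the full method, the (1/T, ln(e/T²)) points come from the ridge of the approximated surface rather than from a closed-form model, which is why the quality of the surface approximation dominates the accuracy of the extracted parameters.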