Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215601 (2021) https://doi.org/10.1117/12.2627174
This PDF file contains the front matter associated with SPIE Proceedings Volume 12156, including the Title Page, Copyright information, and Table of Contents.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Artificial Intelligence Algorithms and Deep Learning Applications
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215602 (2021) https://doi.org/10.1117/12.2626438
Solving large-scale sparse linear systems is a critical problem in scientific and engineering computing. Partial differential equations model problems in many fields; a series of methods transforms them into large-scale linear systems, and the parallel solution of tridiagonal linear systems is one such case. Solving the linear system is the most time-consuming part of many of these problems, accounting for more than half of the total time. Load balancing reduces the time processes spend waiting and improves computational efficiency, and it is the focus of many algorithms. Building on Stone's recursive doubling algorithm, this article presents an improved algorithm for solving tridiagonal linear systems using a full-recursive-doubling communication model and the Möbius transform. The improved algorithm can solve linear systems with millions of unknowns. Numerical experiments show that, compared with the original parallel algorithm, the improved algorithm achieves up to a 2x speedup, and in some cases up to 3x. In addition, load balancing is greatly improved: the time difference between processes is 1/7 that of the original version. With good load balancing, the running times of the processes differ little, avoiding process waiting and wasted resources.
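The paper's full-recursive-doubling communication model is not detailed in the abstract. As a minimal, serial sketch of the underlying recursive-doubling idea (hypothetical code, not the authors'), the following solves a first-order linear recurrence, the form to which tridiagonal elimination is commonly reduced, by composing affine maps in ceil(log2 n) doubling steps; in the parallel version each position would live on its own process:

```python
def recursive_doubling(a, b, x0):
    """Solve x[i] = a[i] * x[i-1] + b[i] for i = 0..n-1 via recursive doubling.

    Each position i holds an affine map x0 -> A[i]*x0 + B[i] covering a
    segment of the recurrence; at offset d every position composes its
    map with the one ending d places to its left, so after ceil(log2 n)
    doubling steps position i covers the whole prefix 0..i.
    """
    n = len(a)
    A, B = list(a), list(b)
    d = 1
    while d < n:
        newA, newB = A[:], B[:]
        for i in range(d, n):
            # compose: apply (A[i]*x + B[i]) after (A[i-d]*x0 + B[i-d])
            newA[i] = A[i] * A[i - d]
            newB[i] = A[i] * B[i - d] + B[i]
        A, B = newA, newB
        d *= 2
    return [A[i] * x0 + B[i] for i in range(n)]
```

For x[i] = 2*x[i-1] + 1 with x0 = 0 this yields [1, 3, 7]; in a distributed setting each pass over i becomes one communication round, which is where the load-balancing question arises.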
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215603 (2021) https://doi.org/10.1117/12.2626432
With the rapid development of technology and the rapid advancement of databases, it has become very difficult and time-consuming for users to find the information they need in social applications. Recommendation systems were created to address this. Video firms large and small, such as Douyin, Instagram, and Facebook, employ recommendation systems to let everyone browse their favorite material. However, as time passes and people's needs change, big data makes it harder to convey genuinely useful information to users, and major issues such as information overload arise regularly. There are currently two main solutions: search engines, a highly prevalent information retrieval method, and recommendation systems, the most effective information filtering technology. In the context of short-video software, the collaborative filtering algorithm is the most widely used recommendation approach.
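As a hedged illustration of the user-based collaborative filtering idea the abstract refers to (not any particular platform's system), a minimal sketch that predicts a rating from cosine-similar neighbours:

```python
def cosine_sim(u, v):
    """Cosine similarity over the items both users have rated."""
    common = [i for i in u if i in v]
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = sum(u[i] ** 2 for i in common) ** 0.5
    nv = sum(v[i] ** 2 for i in common) ** 0.5
    return dot / (nu * nv)

def predict(target, others, item):
    """Similarity-weighted average of neighbours' ratings for `item`."""
    num = den = 0.0
    for other in others:
        if item in other:
            s = cosine_sim(target, other)
            num += s * other[item]
            den += abs(s)
    return num / den if den else None
```

Ratings here are dicts mapping item id to score; a production system would add rating-mean centring and neighbourhood truncation on top of this skeleton.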
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215604 (2021) https://doi.org/10.1117/12.2626424
Trajectory tracking is a hot topic in robot manipulator control. The robot manipulator is a classical nonlinear multibody system, and achieving satisfactory control of such a system is difficult. This paper explores an efficient controller for a 2-DOF robot manipulator using modified sliding mode control to overcome nonlinearity and time-varying parameters. The novel control scheme integrates conventional sliding mode control with a Zeroing Neural Network to accelerate convergence and suppress chattering. Simulation experiments demonstrate the superiority and validity of the proposed method: it achieves higher accuracy than classical sliding mode control while also suppressing chattering.
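The ZNN-augmented law is not specified in the abstract. As a sketch of the conventional sliding-mode baseline it modifies, here is a 1-DOF double-integrator example with hypothetical gains `lam` and `k`, using `tanh` in place of `sign` to illustrate the kind of chattering suppression the paper targets:

```python
import math

def smc_step(q, qd, q_ref, qd_ref, lam=5.0, k=10.0):
    """One evaluation of a conventional sliding mode controller.

    The sliding surface s = de + lam*e drives the tracking error e to
    zero; tanh(s) is a smooth stand-in for sign(s) that reduces
    chattering at the cost of a boundary layer.
    """
    e = q_ref - q          # position tracking error
    de = qd_ref - qd       # velocity tracking error
    s = de + lam * e       # sliding surface
    return k * math.tanh(s)

def simulate(q0, steps=2000, dt=0.005):
    """Track a constant reference with a unit-mass double integrator."""
    q, qd = q0, 0.0
    for _ in range(steps):
        u = smc_step(q, qd, q_ref=1.0, qd_ref=0.0)
        qd += u * dt       # qdd = u for a unit mass
        q += qd * dt
    return q
```

A 2-DOF manipulator replaces the scalar dynamics with the coupled inertia/Coriolis model, and the paper's ZNN term would replace the fixed reaching law.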
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215605 (2021) https://doi.org/10.1117/12.2626444
The prosperity of the basketball industry is closely related to technology. Artificial intelligence (AI) technologies such as machine learning, deep neural networks, and reinforcement learning have come to play an important role in basketball, so the influencing factors and practical applications of AI in this popular sport are worth discussing. This article summarizes how AI technology is applied to three important research directions in basketball: improving the performance and attractiveness of basketball games through training, raising players' competitive level, and observing athletes' competitiveness. After discussing the application of AI in these three key areas, the author concludes that AI technology can play a vital role in basketball.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215606 (2021) https://doi.org/10.1117/12.2626479
In the context of scientific epidemic prevention, garbage classification has attracted more and more attention, especially in hospitals, shopping malls, and other public places with heavy, dense foot traffic. However, the garbage recycling process can easily spread various viruses. At present in China, whether a garbage can is full is still judged by human operators, and the transportation and recycling of full medical garbage cans is still handled by dedicated personnel. To realize garbage location detection and automatic recovery monitoring for medical garbage cans, the data collected by various sensors are processed by a lower computer, and the recycling robot is monitored by an upper computer through MQTT and a cloud platform.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215607 (2021) https://doi.org/10.1117/12.2626457
Traditional digital cameras use a Bayer array and image interpolation to restore a full-color image. Because each pixel samples only one color, the other colors at each pixel must be recovered through CFA interpolation algorithms. Most algorithms introduce distortions, including the zipper effect, moiré, and false color. To address these problems, we propose a novel color image interpolation algorithm. It is based on the Hibbard algorithm and leverages edge judgement, proceeding in two successive steps: the first determines the direction of the edge by comparing the differences in each direction, while the second applies subsequent processing to improve the recovered image. Experimental results show that, compared with the Hibbard algorithm, the proposed algorithm performs better on multiple image-processing metrics.
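The paper's two-step method is not reproduced here; as a sketch of the Hibbard-style edge judgement it builds on (the first step), the following compares horizontal and vertical gradients at a non-green Bayer site and interpolates green along the flatter direction:

```python
def interp_green(img, r, c):
    """Edge-directed green interpolation at a non-green Bayer site.

    The four nearest neighbours of a red/blue site are green samples.
    Comparing the horizontal and vertical gradients of those samples
    and averaging along the direction with the smaller gradient avoids
    interpolating across an edge (the Hibbard-style judgement).
    """
    up, down = img[r - 1][c], img[r + 1][c]
    left, right = img[r][c - 1], img[r][c + 1]
    dh = abs(left - right)   # horizontal gradient
    dv = abs(up - down)      # vertical gradient
    if dh < dv:
        return (left + right) / 2   # interpolate along the edge
    if dv < dh:
        return (up + down) / 2
    return (up + down + left + right) / 4
```

Averaging across a strong edge is exactly what produces the zipper effect; the direction test is the cheap fix that the paper's second, refinement step then improves on.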
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215608 (2021) https://doi.org/10.1117/12.2626456
Small and medium-sized enterprises (SMEs) play important roles in our economy. However, SMEs are relatively small in scale and often lack pledged assets, which makes borrowing funds difficult. When evaluating SMEs' credit risk factors, including strength and reputation, banks must decide whether to lend and on what credit terms, such as loan quota, interest rate, and term. In this paper, support vector machines and decision trees are used together to learn from enterprise data and evaluate the credit rating of enterprises lacking credit information. A linear optimization model is established based on the bank's principle of maximizing expected annual profit, and this paper provides the optimal strategy for deciding the amount of the loan granted to each enterprise. In addition, taking an emergency such as the Covid-19 epidemic as an example, machine learning and optimization theory are used to establish a more widely applicable profit-maximization model, providing banks with a credit strategy for unexpected environmental emergencies.
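The paper's linear optimization model is not given in the abstract. As a loose sketch of the profit-maximizing allocation idea (the field names and the unit-profit proxy are assumptions for illustration), a greedy, divisible-loan allocator under a total budget:

```python
def allocate_loans(enterprises, budget):
    """Greedily fund enterprises by expected profit per unit lent.

    Expected unit profit = interest rate - predicted default
    probability, a crude stand-in for the paper's expected-annual-profit
    objective. Loans are divisible and each enterprise has a quota, so
    greedy-by-margin is optimal for this fractional-knapsack shape.
    """
    ranked = sorted(enterprises,
                    key=lambda e: e["rate"] - e["default_prob"],
                    reverse=True)
    plan = {}
    for e in ranked:
        margin = e["rate"] - e["default_prob"]
        if budget <= 0 or margin <= 0:
            break  # no funds left, or lending is expected to lose money
        amount = min(e["quota"], budget)
        plan[e["name"]] = amount
        budget -= amount
    return plan
```

In the paper, `default_prob` would come from the SVM/decision-tree credit model; here it is simply an input field.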
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215609 (2021) https://doi.org/10.1117/12.2626453
We propose a deep reinforcement learning framework for dynamic game domains. We start by analyzing the difficulty existing algorithms face in the multi-agent dynamic game case: multiple agents affect each other through the dynamic update process, so there is no stable learning environment. We propose a distributed learning framework in which a global network controls parallel actor-critic modules to adjust the learning and update process, thereby improving environment stability. Additionally, we introduce a method to adaptively adjust the learning rate, which further enhances the stability of the learning process. Finally, we show the strength of our approach compared to existing methods in multi-agent dynamic game scenarios.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560A (2021) https://doi.org/10.1117/12.2626423
Based on computer science, mathematics, and statistics, quantitative trading models, especially artificial intelligence-based models, are widely used in finance and have shown their profitability in securities markets. Many researchers use supervised learning methods to predict price trends and generate long (short) signals. However, the accuracy that supervised learning optimizes is at odds with the main purpose of quantitative models: achieving excess returns. To solve this problem, we construct a reinforcement learning-based approach and introduce a Markov decision process to model the market's operating mechanism. Moreover, we calculate more than 100 high-frequency factors to enhance the model's perception ability. To shorten exploration during training, we design a teacher-student framework that utilizes prior experience, then use both teacher-environment and agent-environment interaction samples to calculate the temporal difference error (TD-error), which is used to optimize the model. To measure practicality, we back-test the proposed model on simulated CSI 300 and CSI 500 markets and compare it with three commonly used technical analysis-based methods and two reinforcement learning-based methods.
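The teacher-student details are not in the abstract, but the TD-error it optimizes has a standard form; a minimal tabular sketch (the paper's agent is presumably a neural network rather than a table):

```python
def td_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One temporal-difference (TD) update of a tabular Q function.

    TD-error = r + gamma * max_a' Q(s', a') - Q(s, a). The same error
    form is computed on both teacher-environment and agent-environment
    samples in a teacher-student scheme; only the source of (s, a, r, s')
    differs.
    """
    best_next = max(Q[s_next].values()) if Q.get(s_next) else 0.0
    td_error = r + gamma * best_next - Q[s][a]
    Q[s][a] += alpha * td_error
    return td_error
```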
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560B (2021) https://doi.org/10.1117/12.2626462
Generating high-quality, dense reconstructions of target scenes from monocular images is an essential basis for augmented reality and robotics. However, its apparent shortcomings (such as scale ambiguity) make monocular 3D reconstruction challenging to apply in the real world. We propose a new monocular-inertial dense SLAM method that addresses the limitations of traditional monocular dense SLAM by combining deep learning with multi-view geometry. We use an auto-masking strategy to shield relatively static pixels across adjacent image sequences and improve depth estimation accuracy. In addition, this paper combines geometric, photometric, and IMU constraints to propose a globally consistent pose estimation method, which improves camera positioning accuracy and optimizes dense reconstruction quality. Experimental results show that our method achieves satisfactory results.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560C (2021) https://doi.org/10.1117/12.2626422
With the development of national military strength and unmanned technology, coordinated operations of intelligent ammunition have received extensive attention from scholars at home and abroad. Against this background, this paper analyzes the process of a coordinated attack by smart ammunition. Based on the cooperative combat process, an intelligent ammunition attack task allocation model is established and solved with an intelligent optimization algorithm, the Dragonfly Algorithm. By introducing a Tent-map initialization strategy, the Tent-Dragonfly Algorithm (Tent-DA) is proposed and used to solve the allocation scheme, and simulation verification is carried out. The simulation results show that the proposed algorithm can solve the allocation problem accurately and quickly.
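The Tent initialization strategy can be illustrated independently of the Dragonfly Algorithm itself. A sketch, assuming the commonly used tent-map parameter 0.7 (the paper's exact map and parameter are not given in the abstract):

```python
def tent_init(pop_size, dim, lower, upper, x0=0.37):
    """Initialize a population with the chaotic tent map.

    The tent map x <- x/0.7 if x < 0.7 else (1-x)/0.3 spreads a
    deterministic chaotic sequence over [0, 1], typically giving a more
    uniform initial population than independent random draws -- the
    idea behind Tent-DA's initialization strategy.
    """
    pop = []
    x = x0
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            x = x / 0.7 if x < 0.7 else (1.0 - x) / 0.3
            ind.append(lower + x * (upper - lower))  # scale to bounds
        pop.append(ind)
    return pop
```

In the full algorithm, this population seeds the dragonfly positions before the usual separation/alignment/cohesion updates take over.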
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560D (2021) https://doi.org/10.1117/12.2626489
The payload in network traffic contains a variety of information related to the traffic, and identifying anomalous attack behaviors through the payload is a crucial method for protecting against network attacks effectively. The payload structure is complex, containing a large amount of security-related content with strong contextual semantic relevance. To fully express the relevance of payload contents and improve the quality of payload feature extraction, this paper proposes a feature extraction algorithm for payloads based on a tree structure representation, called TSR. Experimental results show that, compared with existing feature extraction algorithms, TSR increases ROC-AUC by 3.32% on average and PR-AUC by 24.15% on average.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560E (2021) https://doi.org/10.1117/12.2626445
To address problems of the traditional collaborative filtering recommendation algorithm, such as sparse data and difficult cold starts, we propose a collaborative filtering recommendation algorithm (TCF) based on trust relations and item preference. The algorithm processes user ratings in three stages. First, it introduces a correction factor to optimize the traditional similarity calculation. Then it uses user similarity to mine potential trust relationships between users and, taking the complexity of real-world relationships into account, uses distrust information to filter users and obtain new ratings. Finally, a new rating matrix is constructed from these ratings, similarity is computed with an improved Tanimoto coefficient, and the recommendation results are obtained by integrating user trust and item preferences. Experimental results show that the algorithm can effectively improve the quality of recommendations.
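The improved Tanimoto coefficient is not specified in the abstract; as a reference point, the standard Tanimoto (extended Jaccard) similarity it starts from:

```python
def tanimoto(u, v):
    """Tanimoto (extended Jaccard) similarity between rating vectors.

    T(u, v) = u.v / (|u|^2 + |v|^2 - u.v). On binary vectors this
    reduces to Jaccard similarity: shared items over all items either
    user touched.
    """
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u)
    nv = sum(b * b for b in v)
    denom = nu + nv - dot
    return dot / denom if denom else 0.0
```

Unlike cosine similarity, Tanimoto penalizes users whose rating sets barely overlap even when the overlapping ratings agree, which is useful on sparse matrices.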
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560F (2021) https://doi.org/10.1117/12.2626430
Deep learning has achieved great success in computer vision, natural language processing, recommendation systems, and other fields. However, deep neural network (DNN) models are very complex, often containing millions of parameters and tens or even hundreds of layers. Optimizing DNN weights easily falls into local optima, making better performance hard to achieve. Thus, choosing an effective optimizer that yields networks with higher precision and stronger generalization ability is of great significance. In this article, we review some popular historical and state-of-the-art optimizers and classify them into three main streams: first-order optimizers that accelerate the convergence of stochastic gradient descent and/or adaptively adjust learning rates; second-order optimizers that use second-order information about the loss landscape to help escape local optima; and proxy optimizers that handle non-differentiable loss functions by combining with a proxy algorithm. We also summarize the first- and second-order moments used in different optimizers. Moreover, we provide an insightful comparison of several optimizers on image classification. The results show that first-order optimizers such as AdaMod and Ranger not only have low computational cost but also converge quickly, while optimizers that introduce curvature information, such as AdaBelief and Apollo, generalize better, especially when optimizing complex networks.
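AdaMod, Ranger, AdaBelief, and Apollo are not reproduced here; as a reference point for the first/second-moment taxonomy the review uses, a sketch of the plain Adam update they all descend from:

```python
def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter w with gradient g.

    m and v are the first and second moments the review summarizes;
    bias correction rescales them early in training, and the effective
    per-parameter step is lr * m_hat / (sqrt(v_hat) + eps).
    """
    m = b1 * m + (1 - b1) * g          # first moment (gradient mean)
    v = b2 * v + (1 - b2) * g * g      # second moment (uncentered var.)
    m_hat = m / (1 - b1 ** t)          # bias correction, t >= 1
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (v_hat ** 0.5 + eps)
    return w, m, v
```

The surveyed variants mostly change one piece of this: AdaMod bounds the adaptive step, and AdaBelief replaces v with the variance of g around m to inject curvature-like information.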
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560G (2021) https://doi.org/10.1117/12.2626538
News text has unique characteristics when it comes to keyword extraction: extracting keywords from news requires attention not only to differences in quantitative word indexes but also to the influence of phrases. To improve keyword extraction from news texts, this paper constructs a keyword graph based on TextRank and improves the probability transition matrix by combining four quantitative indicators of each node—frequency, position, span, and part of speech—thereby differentiating word weights. Considering the influence of word segmentation on phrase extraction, phrases are reconstructed according to recombination rules, and the concept of combinatorial entropy is defined to filter the reconstructed phrases. Each reconstructed phrase is assigned a linearly weighted value from its statistical quantitative indexes, and finally the top-N words or phrases are selected as keywords by weight. Experimental results show that the proposed algorithm is not only superior to the traditional TextRank and TF-IDF algorithms but also has clear advantages over the improved PositionRank and MyWPMWRank algorithms, increasing the F value by up to 9.75% and effectively improving keyword extraction from news text.
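The four node indicators and the phrase reconstruction step are specific to the paper; as a sketch of the weighted-TextRank core they plug into (plain edge weights here stand in for the paper's improved transition matrix):

```python
def textrank(weights, d=0.85, iters=50):
    """Weighted TextRank over a word co-occurrence graph.

    weights[i][j] >= 0 is the weight of edge i -> j; each row,
    normalized by its total outgoing weight, forms one row of the
    probability transition matrix -- the object the paper modifies with
    its frequency/position/span/part-of-speech indicators.
    """
    n = len(weights)
    out = [sum(row) for row in weights]
    scores = [1.0 / n] * n
    for _ in range(iters):
        scores = [
            (1 - d) / n
            + d * sum(scores[i] * weights[i][j] / out[i]
                      for i in range(n) if out[i] > 0)
            for j in range(n)
        ]
    return scores
```

The top-N scoring nodes become keyword candidates; the paper's phrase-reconstruction and combinatorial-entropy filtering then operate on those candidates.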
Mengxuan Zhang, Xiangbing Kong, Junjie Chen, Hong Zang, Kai Guo, Yinan Wang
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560H (2021) https://doi.org/10.1117/12.2626443
Change detection in very-high-resolution (VHR) imagery is one of the research hotspots and difficulties in the field of remote sensing. Traditional remote sensing image change detection methods are time-consuming, labor-intensive, and inefficient. In recent years, deep learning approaches to remote sensing image change detection have proven feasible and improve efficiency. A UNet change detection method based on aggregated residuals and an attention mechanism is proposed, using the prior knowledge of deep learning. The UNet model serves as the base model; an aggregated residual module introduced in the up- and down-sampling stages fully extracts the feature information of the image, and an attention module added to the skip-connection layers adjusts the weight of each component in the feature map. Experiments with reasonably configured model parameters were conducted on a Longnan remote sensing image change detection dataset. The results show that, compared with traditional deep learning semantic segmentation methods, the proposed method achieves an F1 value of 0.873, produces change maps closer to the labels, and attains higher accuracy in a shorter time.
Dongxing Zhao, Han Wang, Wei He, Kun Ding, Haiyan Zhang
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560I (2021) https://doi.org/10.1117/12.2626531
Intelligent recognition of rock sample lithology plays an important role in mineral resource exploration. A deep learning model is established from rock sample images. To solve the vanishing-gradient problem caused by excessive network depth, a residual structure is introduced and a ResNet model is built, and a contrastive self-supervised classification algorithm that does not depend on any labels is established. Using an encoder network to extract features and computing the reconstruction error in pixel space gives the model the ability to identify new samples. The self-supervised lithology recognition algorithm takes ResNet-18 as the encoder network and the public ImageNet dataset for pre-training; its parameters are optimized by contrastive-learning gradient descent over positive and negative samples, and a linear classifier is used on top. The classification accuracy on rock samples is 85%, higher than that of a classification algorithm based on ResNet-18 and transfer learning, providing a scientific basis for lithology identification.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560J (2021) https://doi.org/10.1117/12.2626480
Integrating lexicon knowledge into character-based methods can improve the performance of neural network models for Chinese named entity recognition (NER). For example, Lattice LSTM [1] and WC-LSTM [2] perform well on several public Chinese NER datasets. However, the directed acyclic graph (DAG) structure makes Lattice LSTM challenging to train in minibatches. In addition, Lattice LSTM and WC-LSTM incorporate word-level semantics only into the representation of the first or last character of each word; the interior characters of the word are ignored. They also have difficulty handling conflicts between potential words in the lexicon. This work proposes an attention-based hierarchical meta-embedding method (AHME) to incorporate lexicon knowledge into Chinese NER and alleviate these limitations. The proposed model can incorporate word boundary information into character representations and resolve conflicts between potentially incorporated words. Experimental results on four datasets show that our method outperforms state-of-the-art baselines.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560K (2021) https://doi.org/10.1117/12.2626439
The popularization of 5G technology and the development of mobile network devices have given spatial textual datasets more dimensions: the datasets recording geographical objects now carry multiple sources and attributes, and mining them has become meaningful work. The top-k spatial keyword query is a common line of research on spatial textual big data, and after years of development a large number of index frameworks have been proposed to implement it. However, previous work often focused only on location and a small amount of text, ignoring the associations between objects in spatial textual big data. To mine the knowledge contained in such datasets, we propose a Top-k Frequent spatial Keyword Query (TkFKQ) algorithm that indexes the frequent items in a spatial textual dataset. To realize this index, we design an index framework for knowledge mining on spatial textual datasets that combines an R-tree with a concept lattice for TkFKQ. A large number of comparative experiments on real datasets evaluate the method.
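The R-tree/concept-lattice framework is the paper's contribution; as a brute-force baseline defining what a top-k spatial keyword query must return (the object layout and the scoring function are assumptions for illustration), a linear-scan ranker:

```python
import heapq

def topk_spatial_keyword(objects, query_loc, query_kw, k):
    """Brute-force baseline for a top-k frequent spatial keyword query.

    Scores each object containing the query keyword by keyword
    frequency divided by (1 + Euclidean distance to the query point);
    an index such as the paper's R-tree + concept lattice exists to
    avoid this full scan while returning the same answer.
    """
    scored = []
    for obj in objects:
        freq = obj["keywords"].count(query_kw)
        if freq == 0:
            continue  # text predicate fails, skip
        dx = obj["x"] - query_loc[0]
        dy = obj["y"] - query_loc[1]
        dist = (dx * dx + dy * dy) ** 0.5
        scored.append((freq / (1.0 + dist), obj["name"]))
    return [name for _, name in heapq.nlargest(k, scored)]
```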
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560L (2021) https://doi.org/10.1117/12.2626656
With the development of information technology, Internet data and resources have grown massively. To effectively manage and utilize this massive distributed information, content-based information retrieval and data mining have gradually become fields of attention in recent years. Among them, text classification (text categorization, TC) is an important basis for information retrieval and text mining. Its main task is to train classification models on documents of given categories and their content, and then to judge or predict the categories of new documents with these models. This paper discusses the characteristics and problems of text classification on social networks; in future development, the role of semantic analysis algorithms in text classification will become even more prominent.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560M (2021) https://doi.org/10.1117/12.2626440
To address the low security, low encryption-transmission efficiency, and high energy consumption of traditional methods, an AES-based encryption transmission method for sensitive data in an energy big data center is proposed. The round operation of the AES algorithm is applied, and a secret-key data matrix is generated from its round results. When data is encrypted, the data matrix is traversed at any node of the encryption end and the key data matrix is regenerated. On this basis, the Zigzag permutation method is used for data transmission to complete encrypted transmission. Experimental results show that the security factor and execution efficiency of the proposed method are higher than those of traditional methods, and its energy consumption for encrypted transmission is the lowest, indicating that the proposed method can ensure the quality of sensitive data encryption transmission in an energy big data center.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560N (2021) https://doi.org/10.1117/12.2626437
Recent methods for multi-agent reinforcement learning make use of deep neural networks and provide state-of-the-art performance with dedicated architectures and comprehensive training tricks. However, these deep reinforcement learning methods suffer from reproducibility issues, especially in transfer learning. Because the network input has a fixed size, existing network structures have difficulty transferring strategies learned at a small scale to a large scale. We argue that proper network architecture design is crucial to cross-scale reinforcement transfer learning. In this paper, we use transfer training with an attention network to solve multi-agent combat problems in aerial unmanned aerial vehicle (UAV) combat scenarios, extending small-scale learning to large-scale complex scenarios. We combine an attention neural network with the MADDPG algorithm to process agent observations. Training starts from a small-scale multi-UAV combat scenario and gradually increases the number of UAVs. The experimental results show that methods trained with attention transfer learning reach the target performance faster and perform better than the same method without attention transfer learning.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560O (2021) https://doi.org/10.1117/12.2626467
Most current digital imaging technologies are based on a Bayer color filter array and interpolation algorithms to achieve color image restoration. Common interpolation algorithms include the bilinear interpolation algorithm, edge-directed image interpolation algorithms, and so on. The Hibbard algorithm, a kind of edge-directed image interpolation, significantly reduces the zipper effect and blurred edges compared with bilinear interpolation through a gradient-based method and the constant-chromatic-aberration idea. However, it only considers the possibility of horizontal and vertical edges, ignoring edges in other directions. To address these problems, we propose an oblique interpolation algorithm that adds oblique-direction gradient factors, considering edges in both the main-diagonal and sub-diagonal directions. The adjacent pixel information in these directions is used to restore the red and blue channels. The experimental results show that images reconstructed by our algorithm are closer to the originals: the sharpness and coherence of oblique and arc edges improve, and false color at edges with an obvious color difference on both sides is weakened.
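The gradient-based idea behind Hibbard-style edge-directed interpolation can be sketched in a few lines: at a red or blue site of the Bayer mosaic, the missing green value is averaged along the direction of the smaller gradient so interpolation does not cross an edge. This is a minimal illustration of the principle only, not the paper's oblique-direction extension; the sample patch values are invented for the example.

```python
def green_at(mosaic, r, c):
    """Estimate green at a red/blue site of a Bayer mosaic using a
    horizontal-vs-vertical gradient test (Hibbard-style)."""
    up, down = mosaic[r - 1][c], mosaic[r + 1][c]
    left, right = mosaic[r][c - 1], mosaic[r][c + 1]
    dh = abs(left - right)   # horizontal gradient
    dv = abs(up - down)      # vertical gradient
    if dh < dv:              # edge runs horizontally: average along it
        return (left + right) / 2
    if dv < dh:              # edge runs vertically: average along it
        return (up + down) / 2
    return (up + down + left + right) / 4   # no clear edge: plain average

# toy 3x3 neighborhood with a strong vertical edge on the right
patch = [[10, 10, 90],
         [10, 50, 90],
         [10, 10, 90]]
print(green_at(patch, 1, 1))   # averages vertically, avoiding the edge
```

The proposed algorithm extends this test with diagonal gradient factors, so the average can also follow main-diagonal and sub-diagonal edges.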
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560P (2021) https://doi.org/10.1117/12.2626809
The rapid growth of information on the Internet has made web search technology the main means for people to obtain information. Measuring the importance of web pages efficiently and quickly has therefore become an important topic, influencing many aspects of the web such as crawling, page grading, and ranking. This paper discusses an important algorithm, PageRank, based on the Markov chain model, and then reviews its advantages and disadvantages. Finally, corrections and improvements addressing the drawbacks are also discussed.
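The Markov-chain view of PageRank can be made concrete with a short power-iteration sketch: a random surfer follows links with probability d (the damping factor) and teleports uniformly otherwise, and the stationary distribution of this chain is the PageRank vector. The three-page graph below is an invented toy example.

```python
def pagerank(links, d=0.85, iters=100):
    """Power iteration for PageRank on an adjacency dict
    {node: [outlinks]}; d is the damping factor."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}   # teleportation mass
        for u, outs in links.items():
            if not outs:                        # dangling node: spread evenly
                for v in nodes:
                    new[v] += d * rank[u] / n
            else:
                for v in outs:                  # follow each outlink equally
                    new[v] += d * rank[u] / len(outs)
        rank = new
    return rank

r = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
print({k: round(v, 3) for k, v in r.items()})
```

The ranks sum to one, and C, which receives links from both A and B, ends up with the highest score; the dangling-node branch is one of the standard corrections the abstract alludes to.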
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560Q (2021) https://doi.org/10.1117/12.2626529
Wind power and photovoltaic output are random, intermittent, and volatile; connected to the power grid on a large scale, they affect the grid's generation planning and dispatching. This paper proposes an optimal grid-connected dispatch strategy for combined wind-photovoltaic-photothermal power generation. The flexible energy storage of the concentrating solar power (CSP) station is used to smooth the fluctuation of wind power and stabilize the combined output of wind, photovoltaic, and solar-thermal power. To improve the economic benefits of the grid-connected system, a combined generation dispatch model based on two-stage optimization is established. In the first stage, with the economic benefit of grid connection optimized, the equivalent load peak-valley difference is reduced, the system load curve over the dispatch cycle is optimized, and load data are provided for the next stage. In the second stage, thermal power units are added with the objective of minimizing the system's total generation cost. An intelligent algorithm is used to solve the model, and the correctness and effectiveness of the proposed dispatch strategy are verified by simulation examples.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560R (2021) https://doi.org/10.1117/12.2626460
Considering that the traditional BP neural network has the disadvantages of easily falling into local optima, slow convergence, and sensitivity to the initial input, this paper introduces a method combining the traditional BP neural network with a genetic algorithm (GA). After the genetic algorithm reaches a global optimum, the optimized weight matrix is substituted into the training network as the initial input of the BP neural network. The total annual precipitation from 1951 to 2019 at a meteorological station in Guangzhou is selected as an example to verify the model's effectiveness. The results show that the GA-BP neural network method can effectively improve prediction accuracy and enhance the ability to predict extreme precipitation values. Thus, the GA-BP method is more suitable for precipitation prediction in Guangzhou than the traditional BP model and has a positive effect on environmental protection.
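The two-stage GA-BP idea, global search by a genetic algorithm followed by gradient-based refinement from the evolved weights, can be sketched on a toy problem. The example below fits a one-parameter-pair linear model rather than a full BP network, and all data, population sizes, and mutation scales are invented for illustration; the point is only the hand-off of GA's best individual to gradient descent.

```python
import random

random.seed(0)
data = [(x, 2 * x + 1) for x in range(-5, 6)]   # toy target y = 2x + 1

def loss(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# --- stage 1: GA global search for good initial weights ---
pop = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(30)]
for _ in range(40):
    pop.sort(key=lambda p: loss(*p))
    parents = pop[:10]                            # elitist selection
    children = []
    while len(children) < 20:
        (w1, b1), (w2, b2) = random.sample(parents, 2)
        w = (w1 + w2) / 2 + random.gauss(0, 0.3)  # crossover + mutation
        b = (b1 + b2) / 2 + random.gauss(0, 0.3)
        children.append((w, b))
    pop = parents + children
w, b = min(pop, key=lambda p: loss(*p))

# --- stage 2: gradient descent refinement from the GA solution ---
lr = 0.01
for _ in range(200):
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - lr * gw, b - lr * gb

print(round(w, 2), round(b, 2))   # close to the true (2, 1)
```

In the paper's setting, stage 2 would be BP training of the full network, with the GA's weight matrix as the starting point instead of random initialization.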
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560S (2021) https://doi.org/10.1117/12.2626417
Spatial-temporal trajectory data is a position sequence recording the time and space of moving objects and is the most important data source for their study. The analysis and mining of spatial-temporal trajectory data is a research hotspot of spatial data mining, and similarity measurement between trajectories is a key problem in that process. This paper studies spatial-temporal trajectory similarity measurement algorithms, selecting the algorithms based on Hausdorff distance and One Way Distance as the research objects, and sets up three comparative experiments on execution time, robustness, and the similarity-discrimination effect for trajectories in different location relations. From the experimental results, the advantages, disadvantages, and suitable applications of the two algorithms are analyzed, providing a reference for spatial-temporal trajectory analysis and mining.
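Of the two measures studied, the Hausdorff distance has the simplest definition: the largest of all nearest-neighbor distances between the two point sets, symmetrized over both directions. A minimal sketch, with two invented parallel tracks as input:

```python
from math import hypot

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two trajectories
    given as lists of (x, y) points."""
    def directed(P, Q):
        # farthest point of P from its nearest neighbor in Q
        return max(min(hypot(p[0] - q[0], p[1] - q[1]) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

t1 = [(0, 0), (1, 0), (2, 0)]
t2 = [(0, 1), (1, 1), (2, 1)]
print(hausdorff(t1, t2))   # parallel tracks one unit apart -> 1.0
```

This brute-force form is O(|A|·|B|), which is why the execution-time comparison in the paper matters for long trajectories; note also that it ignores point order and timestamps, one of the weaknesses that One Way Distance is meant to address.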
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560T (2021) https://doi.org/10.1117/12.2626811
To improve the positioning and ranging capabilities of a robot, a method of processing information based on robot radar positioning is proposed. Radar scanning is used to collect the robot's positioning signal, and the phase difference method estimates the robot's target position from the radar signal of a simulated target echo. A parameterized dual-frequency mode generator is designed to achieve accurate positioning. Basic models of single-frequency and dual-frequency mode targets are proposed, and the initial phase of the modulation generated by the distance is obtained. The dual-frequency mode spectrum obtained from real data and parameters such as distance, velocity, and acceleration are calculated and analyzed. Simulation results show that this method improves the accuracy of robot positioning: the signal-to-noise ratio of the radar signal output is relatively high, indicating that the robot positioning has strong anti-interference ability.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560U (2021) https://doi.org/10.1117/12.2626938
With the development of intelligent manufacturing, competitive pressure on the traditional manufacturing industry is growing. To save production costs, manufacturers are increasingly using industrial robots to replace manual operations. The working environment of the stamping production line is extremely harsh, so the intelligent transformation of industrial robots on it is urgent and its effect is especially evident. Taking the application of industrial robots in stamping automation production lines as its subject, this paper introduces the concepts, significance, methods, and strategies involved, providing a reference for the design or technical transformation of stamping automation production lines.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560V (2021) https://doi.org/10.1117/12.2626755
China's national parks contain a large amount of natural resources, and right-confirmation registration of these resources is a prerequisite for effective protection. The goal of right-confirmation registration of national parks is discussed, with emphasis on compiling the ecological background information and protection features. A new classification method is given to solve the overlap problem, dividing the natural resources of national parks into four parts: space resources, ecological resources, rare biological resources, and other resources. A detailed register-sheet format is given, refining the information for all kinds of natural resources. Based on this new classification method of right-confirmation registration, and using big data and artificial intelligence technology, establishing an intelligent management platform is conducive to the effective management and supervision of the natural resource assets of national parks.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560W (2021) https://doi.org/10.1117/12.2626754
Combining the concept of "5G Internet + food safety" with the new generation of mobile Internet technologies such as the Internet of Things, big data, cloud computing, and QR codes, this paper presents an interconnected, efficient, and responsive intelligent perception and response system that breaks with traditional supervision ideas and modes. The system covers the whole process of food safety and supports the implementation of both government supervision and the responsibilities of market subjects. The comprehensive informatization of social coordination and individual participation will make food safety management more intelligent, its means more scientific, and its capability more modern. It is an effective way to supervise and urge the main bodies in key fields to standardize their operations and ensure food safety.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560X (2021) https://doi.org/10.1117/12.2626940
Aiming at the problem of smart grid network information security assessment, this paper proposes a risk assessment method based on a risk-weight algorithm. First, an information security evaluation model is established, with existing standards as the evaluation basis. Then, a hierarchical structure is built for host security and network security, and the risk-weight algorithm is used to analyze the model. The experimental results show that the evaluation method can effectively realize quantitative evaluation of smart grid network information security.
Data Analysis and Model Prediction Image Processing
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560Y (2021) https://doi.org/10.1117/12.2626526
Currently, there is a distinct contradiction between massive information and the lack of personalized information. Recommendation systems have therefore been widely used as an important technology to discover users' potential interests and recommend items of interest to target users. Considering that a probabilistic matrix factorization model exposes its prediction mechanism more clearly, this paper adopts probabilistic matrix factorization as its matrix factorization method. On this basis, a recommendation framework is proposed that considers both extreme-rating-behavior similarity and rating-matrix information fusion. The framework integrates the local neighbor relationships of users into the global rating-optimization process of matrix factorization, improving prediction accuracy and robustness on sparse data. Simulation results show that the proposed method reduces MAE by 0.68%, 1.12%, 2.85%, and 1.19% compared with the suboptimal method, proving that it can effectively recommend long-tail items while ensuring high recommendation accuracy.
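The matrix-factorization core that the framework builds on can be sketched with plain stochastic gradient descent: each observed rating is approximated by the dot product of a user vector and an item vector, learned with an L2 penalty (the penalty corresponds to the Gaussian priors of probabilistic matrix factorization). The tiny rating table, rank, and learning rate below are invented for illustration; the paper's neighbor-fusion extension is not shown.

```python
import random

random.seed(1)

# toy user-item ratings: (user, item, rating)
ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 4), (2, 2, 5)]
n_users, n_items, k, lam, lr = 3, 3, 2, 0.05, 0.02

# latent factor matrices, small random init
U = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
V = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]

for _ in range(3000):
    u, i, r = random.choice(ratings)
    pred = sum(U[u][f] * V[i][f] for f in range(k))
    err = r - pred
    for f in range(k):   # SGD step with L2 regularization on both factors
        U[u][f], V[i][f] = (U[u][f] + lr * (err * V[i][f] - lam * U[u][f]),
                            V[i][f] + lr * (err * U[u][f] - lam * V[i][f]))

mae = sum(abs(r - sum(U[u][f] * V[i][f] for f in range(k)))
          for u, i, r in ratings) / len(ratings)
print(round(mae, 2))   # training MAE after fitting
```

The proposed framework would add a neighbor-similarity term to this objective so that users with similar extreme-rating behavior pull each other's latent vectors together.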
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121560Z (2021) https://doi.org/10.1117/12.2626482
Electroencephalogram (EEG) is a bioelectrical signal that directly reflects brain activity, and classifying sleep stages with machine learning is one direction of EEG signal analysis. The sleep period can be split into five stages: Wake, REM, N1, N2, and N3. This paper first briefly discusses the characteristics of sleep EEG waves. After the relative powers of the EEG signals (the features) are extracted using the pwelch function, the data are rearranged with features and labels into a table. Then, linear discriminant analysis (LDA) is used to reduce the dimensionality of the data. Finally, classifiers are trained with the k-nearest neighbor (KNN) classification model and multinomial logistic regression (MLR), respectively. The validity of the classification models is evaluated by estimating accuracy, drawing receiver operating characteristic (ROC) curves, and calculating the area under the curve (AUC).
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215610 (2021) https://doi.org/10.1117/12.2626469
VR and AR are new computer technologies for creating simulated environments, here applied in the shipping industry. It is therefore of great significance to study the application of VR and AR, enabled by 5G's large bandwidth and low delay, in ship assembly. An application framework for a VR/AR-based digital assembly system of a marine turbocharger is presented and its operation mechanism is analyzed in this paper. The framework mainly consists of five parts: the physical layer, data layer, system underlying layer, system layer, and application layer. The system's design, implementation, and implementation effect are described in detail. The operation mode of the VR/AR-based digital assembly simulation system is studied to improve the production capacity of turbocharger assembly.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215611 (2021) https://doi.org/10.1117/12.2626476
With the rapid advancement of the natural gas industry worldwide, governments are paying close attention to the development of clean energy such as natural gas. The government of China also actively promotes the market-oriented reform of natural gas prices. The growth rate of natural gas production in China is far less than that of consumption, which poses a threat to energy security; China is in a situation of "rich coal but short of oil and gas". Price is a signal of market regulation, and effective prediction of the Henry Hub gas futures price can stimulate producers' enthusiasm to a certain extent and support a complete pricing strategy. Therefore, in view of the non-linearity and long-term dependence of the natural gas futures price, this paper makes predictions with the long short-term memory (LSTM) model, which can be extended in time, has a long-term memory function, and effectively mitigates the gradient problems of traditional recurrent neural networks. Henry Hub natural gas futures price data for North America from 2010 to 2020 are selected as the empirical sample. The results show that the LSTM neural network has practical significance and performs well in predicting natural gas futures prices.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215612 (2021) https://doi.org/10.1117/12.2626481
With the rapid development of machine learning, an increasing number of domains have begun to use machine learning methods to simplify part of their workload and relieve human stress. In traditional domains such as medicine, machine learning has great potential to play a significant role in detecting diseases. Some machine learning methods have been deployed to analyze electrocardiograms (ECG) because of their impressive accuracy and speed, which is meaningful and convenient for the medical domain. Detecting disease in ECGs accurately helps prevent and cure some fatal diseases and may save thousands of lives. In this paper, a machine learning method is used to detect atrial fibrillation (AF), a severe heart disease that damages people's health. The model used is a convolutional neural network (CNN), a deep learning model. Apart from building a model to detect AF, this paper also explores and extends the possibility of using CNNs on one-dimensional data. After processing a large amount of original ECG data and feeding it, with labels, to the CNN model with some manually adjusted parameters, a CNN model with 91.8% accuracy is trained that can be used in some specific occasions to find heart abnormalities.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215613 (2021) https://doi.org/10.1117/12.2626447
Spam email classification can effectively help us reject useless information. Different machine learning methods have been proposed for spam email classification, but exploration of key parameter tuning for these methods is still insufficient. Therefore, in this paper we tuned the key parameters of three classical machine learning methods, namely SVM, Random Forest, and Logistic Regression, on a specific spam email classification task. Many tests were run to evaluate the tuning for the different classifiers. The results show that the larger the C value of Logistic Regression, the higher the accuracy; however, if C is too large the model overfits. The number and depth of trees influence the accuracy of Random Forest: a larger number of trees gives higher accuracy up to a limit, and if the depth of trees is too large, the accuracy is high but the model overfits. The linear kernel function in SVM gives the best performance on the spam email classification task. Our research shows how to adjust classifier parameters for specific classification tasks.
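The role of C can be illustrated directly: in the usual (sklearn-style) convention, C is the inverse regularization strength, so a larger C lets the weights grow and fit the training data more closely, at the risk of overfitting. Below is a minimal pure-Python gradient-descent sketch of L2-regularized logistic regression on an invented two-feature dataset, not the paper's classifier or data.

```python
from math import exp

# toy linearly separable data: (features, label)
data = [([1.0, 2.0], 1), ([2.0, 3.0], 1), ([-1.0, -1.5], 0), ([-2.0, -1.0], 0)]

def train(C, steps=1000, lr=0.01):
    """L2-regularized logistic regression by gradient descent;
    larger C means a weaker penalty (sklearn's convention)."""
    w = [0.0, 0.0]
    for _ in range(steps):
        g = [wi / C for wi in w]             # gradient of the L2 penalty
        for x, y in data:
            p = 1 / (1 + exp(-sum(wi * xi for wi, xi in zip(w, x))))
            for j in range(2):
                g[j] += (p - y) * x[j]       # log-loss gradient
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

for C in (0.1, 1.0, 100.0):
    print(C, [round(wi, 3) for wi in train(C)])   # weight norm grows with C
```

On a real spam dataset the same sweep would be done with cross-validated accuracy, which is what reveals the overfitting point the abstract describes.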
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215614 (2021) https://doi.org/10.1117/12.2626463
Atrial fibrillation (AF) detection is essential for the timely prevention and intervention of strokes and pulmonary embolisms. A deep learning (DL)-based assistive detection system can effectively diagnose the AF condition by distinguishing normal and AF signals from an electrocardiogram (ECG). This paper proposes and implements a simple 1D convolutional neural network (CNN) for binary classification between normal and AF signals. The dataset is obtained from the Computing in Cardiology 2017 challenge on AF detection, which is composed of short ECG recordings. The model achieves an F1 score of 0.767 with 72.9% testing accuracy, and feature extraction from model testing shows distinct features between normal and AF signals. The results validate that the proposed 1D CNN can achieve satisfactory and reasonable performance in classifying ECG signals.
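The basic operation of a 1D CNN layer is a sliding filter over the signal. A minimal sketch (plain cross-correlation, as deep-learning frameworks implement "convolution") on an invented spike-like sequence shows how a small difference filter responds to abrupt changes such as QRS complexes; this is the building block, not the paper's trained network.

```python
def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, as in DL
    frameworks) of a signal with a filter kernel."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

# a difference filter highlights abrupt changes in a spike-like sequence
ecg = [0, 0, 0, 5, 0, 0, 0]
print(conv1d(ecg, [-1, 1]))   # nonzero only around the spike
```

A real 1D CNN stacks many such learned filters with nonlinearities and pooling, so the network discovers its own morphology detectors from the ECG recordings.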
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215615 (2021) https://doi.org/10.1117/12.2626522
Sleep is an essential physiological activity. Previous work uses not only traditional biological methods but also computer technology as an aid. Machine learning can process and analyze known data and then predict unknown data scientifically, and the high accuracy of its predictions has encouraged its use in medical research. This paper preprocesses and classifies EEG data sets. We apply Support Vector Machine (SVM), Random Forest, and Multi-Layer Perceptron (MLP) classifiers to the processed data sets and report their respective results and accuracies. We find that, because of the small amount of data, the deep learning method is not a good choice.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215616 (2021) https://doi.org/10.1117/12.2626466
Aiming at the problem that low-illuminance images are hard to observe clearly under complex exposure conditions, this paper transforms the image into a brightness space and uses a perceptual excitation model to design a brightness mapping function that adjusts the dynamic range of image pixels. A bias power function is also introduced, and the gradient image is obtained through the Poisson equation. An adaptive gradient attenuation method suppresses brightness steps in the gradient domain and effectively controls severe color fluctuation at exposure edges, strengthening the image's detail representation and the harmony of its overall color. Experimental results show that this method effectively enhances low-illuminance images, avoids interference from high-exposure sources on the global image channels, retains fine detail, keeps colors naturally coordinated, and yields a good overall visual effect.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215617 (2021) https://doi.org/10.1117/12.2626954
With the rapid development of digital imaging technology, color images are commonly captured through a Bayer filter array, and how to restore full-color images from Bayer-array data has become a key problem. This paper proposes a novel interpolation algorithm that uses a dynamic-weighting mechanism to weight the interpolation direction vectors. Experiments show that the improved algorithm significantly improves color image restoration.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215618 (2021) https://doi.org/10.1117/12.2626516
Cloud detection is important for the application of space-borne video remote sensing. Video data from China's Jilin-1 satellite are processed with transfer learning and an improved Unet combined with a fully connected conditional random field. Because fast-moving cloud targets and satellite platform jitter interfere with video satellite remote sensing, the Unet's network depth struggles to extract the contextual characteristics of cloud targets effectively, and segmentation and cloud detection perform poorly. To address the missing cloud-target features, this paper uses the VGG16 pretraining model as the backbone network of the context path and refines the segmentation results with a fully connected conditional random field (dense CRF) to improve cloud-boundary pixel localization. The results show that the proposed algorithm effectively improves segmentation accuracy, with accuracy and intersection-over-union reaching 92.6% and 90.9%. The proposed network generalizes well and has high practical application value.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 1215619 (2021) https://doi.org/10.1117/12.2626433
Air pollution, an issue that requires worldwide attention, has been a long-lasting problem, especially in northern Asia. PM2.5 concentrations exceeding the standard value are a major threat to people's mental and physical health. As environmental conservation continues, prediction models deserve persistent attention so that air quality can be forecast with greater accuracy, citizens can better plan their schedules, and the impact of air pollution can be alleviated. This paper takes Xuhui District, Shanghai as the prediction area, collects year-round data at hourly resolution spanning gaseous pollutants and meteorological factors, and assigns these input features different weights during training according to how strongly each factor drives air-quality fluctuations. The research also integrates a polynomial regression model and Support Vector Machine models, two quite different methods, so that each compensates for the other's weaknesses in predicting PM2.5 concentration. When the two models are combined and their outputs are given different weights, the resulting PM2.5 prediction is closer to the true value.
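The weighted combination of the two models can be sketched as follows; the synthetic stand-in data, toy features, and fixed blending weight `w` are all assumptions, since the paper tunes its weights and uses real monitoring data:

```python
# Hedged sketch: blend a polynomial regression and an SVM regressor with a
# fixed weight to predict a PM2.5-like target. All data here is synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 4))   # stand-ins for pollutant/weather factors
y = 50 * X[:, 0] ** 2 + 20 * X[:, 1] + rng.normal(0, 2, 300)  # toy PM2.5

poly = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10)).fit(X, y)

w = 0.5  # blending weight; the paper tunes this, here it is fixed
blended = w * poly.predict(X) + (1 - w) * svr.predict(X)
print("blended MAE:", np.mean(np.abs(blended - y)))
```

In practice the weight would be chosen on held-out data so the blend beats both base models.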
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561A (2021) https://doi.org/10.1117/12.2626420
This paper discusses the multi-objective optimization problem of how demand-side resources can respond flexibly to power grid dispatching in a market environment. Demand-side resources dominated by large industrial users, especially flexible loads such as energy storage, adjustable loads and interruptible loads, have great scheduling potential and strong willingness to respond. Based on an analysis of how flexible loads participate in market aggregation, and aiming to minimize both new-energy curtailment and users' electricity costs while respecting the power constraints of unit output and electrical equipment, a particle swarm optimization algorithm is used to solve and verify an example. The optimization goal of peak shaving and valley filling is realized to a certain extent, though room for improvement remains.
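A minimal particle swarm optimization loop of the kind the paper applies can be sketched as below; the quadratic toy cost stands in for the actual dispatch objective and constraints, which are not reproduced here:

```python
# Hedged sketch of particle swarm optimization (PSO) on a toy cost function.
# The sphere function below is an assumption, not the paper's objective.
import numpy as np

rng = np.random.default_rng(0)

def pso(cost, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # per-particle best
    pbest_val = np.apply_along_axis(cost, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()         # swarm-wide best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(cost, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso(lambda z: np.sum(z ** 2), dim=4)
print(best_x, best_val)  # positions approach the origin, cost approaches zero
```

A real dispatch problem would add the unit and equipment power constraints, e.g. by penalizing violations inside `cost`.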
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561B (2021) https://doi.org/10.1117/12.2626408
Reservoir pressure is an important index reflecting the size and distribution of a reservoir's oil displacement capacity. It is the fundamental condition for determining reservoir parameters and various adjustment measures, and the basis for correctly evaluating the potential of reservoirs, formulating reasonable development policies and preventing casing damage; it plays an important role throughout reservoir management. Normal well test data can be fully exploited in oilfield development. This paper studies and analyzes abnormal well-pressure phenomena in well test data, identifies the causes of abnormal tests, and provides a reference for similar abnormal wells.
Xuan Zhang, WangQun Chen, FuQiang Lin, XinYi Chen, Bo Liu
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561C (2021) https://doi.org/10.1117/12.2626465
Link prediction aims to predict whether two nodes in a network are likely to be connected, which is widely used to deal with complex networks, such as biological network analysis and social network recommendation. Most work uses network structure data and node attribute information to predict links. However, they lose too much of the network structure characteristics in network processing and do not distinguish the different importance of neighbor nodes. To address these problems, we propose a feature fusion graph attention network for link prediction (FFGAT), which trains in batches by extracting associated subgraphs. In the extracted associated subgraph, we use the double-radius node labeling method to mark the structure label for all nodes, which is used to enhance the network structure representation ability of the model. In feature fusion, we introduce a multi-head graph attention network to aggregate the node attributes and network structure attributes of multi-order neighbors. The classification predictor uses the generated node embedding to predict the link between node pairs. Experiments are performed on seven commonly used link prediction datasets. Compared with the existing baselines, our FFGAT achieves the state-of-the-art performance.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561D (2021) https://doi.org/10.1117/12.2626523
With the rapid development of global e-commerce, the logistics industry is steadily expanding. However, high distribution costs, unreasonable vehicle scheduling and low customer satisfaction restrict the sustainable development of the logistics and transportation industry to a certain extent. To rationalize logistics transportation, reduce transportation costs and improve the economic benefits of logistics enterprises, this paper studies and analyzes vehicle routing optimization models whose objective is the lowest total transportation cost. After reviewing a large body of literature, the author found that the ant colony algorithm offers powerful search for optimal solutions and a simple principle, while the genetic algorithm is computationally efficient and combines easily with other algorithms. Finally, two trends in VRP research are presented: first, vehicle scheduling should be subject to differentiated constraints; second, the optimal route should avoid congestion.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561E (2021) https://doi.org/10.1117/12.2626441
Nowadays, China's high-precision inertial navigation systems have made great progress compared with the past, but they still suffer from defects such as insufficient accuracy, insufficient reliability and short service life, and a gap with developed countries remains. Scholars should therefore actively explore development paths for high-precision inertial navigation. Experts find that high-precision inertial navigation has many shortcomings; the main problem is that the massive test data generated over the life cycle of high-precision navigation systems has not been fully mined and used, which seriously restricts improvements in accuracy, reliability and service life. In the next stage of development, it is therefore necessary to handle, systematically, accurately, efficiently and reasonably, the massive data produced in the design, production, testing and use of high-precision inertial navigation. Big data technology can meet the requirements of analyzing and mining such data: high-precision inertial navigation can use the key technologies of big data analysis and management to build the basic framework of a big data analysis platform, providing a direction for optimizing the design, reliability and service life of the system. Starting from a description of big data technology, this paper explores the problems in applying big data to high-precision inertial navigation test technology, analyzes the objectives and requirements of high-precision inertial navigation systems, and puts forward ways and methods of applying big data to high-precision inertial navigation, so as to provide a clear direction for its future development.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561F (2021) https://doi.org/10.1117/12.2626535
Air quality is affected by different geographic conditions and human activities, and it depends on an atmospheric environment that is a complex, high-dimensional, large system shaped by multiple feature factors with intricate relationships among them, so it has extremely significant nonlinear characteristics. In this paper, we apply BP neural networks from machine learning to air quality prediction, chiefly a BP neural network model improved with a hybrid PSO-GA algorithm. The model handles nonlinear problems well, tolerates noise strongly, and its prediction results are broadly applicable.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561G (2021) https://doi.org/10.1117/12.2626416
Land cover types provide primary data for various applications and are important input parameters for ecosystem service models; however, accurate acquisition of land cover types remains a hot and difficult research topic. Although remote sensing technology can make up for the shortcomings of traditional methods of extracting land cover types and is now widely used in land cover classification studies, inaccurate land cover acquisition is still a significant challenge. In this study, based on airborne LiDAR data and Sentinel-2 data, a series of parameters was extracted and the random forest algorithm was used to explore the classification results of LiDAR and Sentinel-2 data separately, and the two data sources were then applied synergistically to exploit the complementary advantages of multi-source data and thereby improve the classification accuracy of land cover types in urban areas. Among the LiDAR single-type parameter models, the intensity parameter model has the highest classification accuracy, with an overall accuracy of 80.06% and Kappa = 0.7370; among the Sentinel-2 single-type parameter models, the texture parameter model has the highest accuracy, with an overall accuracy of 90.34% and Kappa = 0.8742; among the dual-type parameter combination models, the combination of LiDAR intensity and Sentinel-2 parameters has the highest classification accuracy. The results show that (1) for airborne LiDAR data, the intensity parameter classifies better than the height parameter; (2) for Sentinel-2 data, texture information gives the best accuracy, followed by band information, and the red-edge vegetation index outperforms the traditional vegetation index; and (3) the synergistic application of LiDAR and Sentinel-2 data improves the classification accuracy of land cover types, with synergistic results better than those from either single data source. Future research should therefore combine multiple sources of remote sensing data to achieve complementarity and improve the classification accuracy of land cover types.
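The single-source versus synergistic comparison can be illustrated schematically; the synthetic features below merely stand in for the LiDAR and Sentinel-2 parameters, and the column split is purely hypothetical:

```python
# Hedged sketch: a random forest on two stacked feature sources, mimicking
# the "synergistic" multi-source classification against one source alone.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Pretend the first 5 columns are LiDAR-derived and the rest Sentinel-2.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=4, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

single = RandomForestClassifier(random_state=1).fit(X_tr[:, :5], y_tr)
fused = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
print("single-source accuracy:", single.score(X_te[:, :5], y_te))
print("fused accuracy:", fused.score(X_te, y_te))
```

With real data the feature stack would hold the extracted intensity, height, band, and texture parameters, and accuracy/Kappa would be reported per model.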
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561H (2021) https://doi.org/10.1117/12.2626630
In hydrological calculation, the Thiessen polygon method is commonly used to compute basin-average rainfall. However, the method takes no account of the rainfall distribution within the basin, ignores how representative each rainfall station is, and does not consider characteristic factors such as station elevation and location. Especially when the basin area is small and contains only isolated rainfall stations, the Thiessen polygon method produces large errors. This paper mainly uses GIS, together with different methods, to analyze the rainfall distribution characteristics of the basin.
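For reference, the Thiessen polygon estimate criticized above is simply an area-weighted mean of station rainfall; the polygon areas and readings below are hypothetical:

```python
# Thiessen polygon basin-average rainfall: an area-weighted mean.
# Station areas (km^2) and rainfall (mm) are hypothetical illustration values.
areas = [12.0, 8.0, 20.0]   # Thiessen polygon area assigned to each station
rain = [30.0, 45.0, 25.0]   # rainfall observed at each station
basin_avg = sum(a * r for a, r in zip(areas, rain)) / sum(areas)
print(basin_avg)  # → 30.5
```

The weights depend only on polygon geometry, which is exactly why station elevation and representativeness drop out of the estimate.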
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561I (2021) https://doi.org/10.1117/12.2626415
Ingesting Ruditapes philippinarum contaminated by heavy metals is a serious health hazard to humans, so detecting heavy-metal-contaminated R. philippinarum is important and necessary. In this study, hyperspectral imaging and a multilayer perceptron (MLP) were used to rapidly identify R. philippinarum contaminated by heavy metals. Hyperspectral images of healthy and heavy-metal-contaminated samples were collected and fed to an MLP model for rapid detection. The experimental results showed that the MLP algorithm outperformed other classical algorithms on several performance indices in distinguishing healthy from contaminated samples.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561J (2021) https://doi.org/10.1117/12.2626414
National fragility is an important indicator of a country's sustainable development. This paper uses principal component analysis and multiple linear regression to build a model assessing the impact of climate change on national vulnerability. First, the principal components of 12 indicators are analyzed with SPSS software to obtain the main indicators affecting China's vulnerability. Second, a national vulnerability index is defined from the mean and standard deviation of the main indicators, yielding a multivariate linear model based on principal component analysis, that is, the national vulnerability evaluation model. Finally, Albania is chosen as the research object to predict the impact of climate change on the country's vulnerability. The results show that climate change can affect national vulnerability both directly and indirectly and increases it.
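The PCA-then-regression pipeline can be sketched in scikit-learn as follows; the random stand-in data with a low-rank structure is an assumption, not the paper's 12 indicators or its SPSS workflow:

```python
# Hedged sketch: reduce correlated indicators with PCA, then fit a multiple
# linear regression on the components to score a "vulnerability" target.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
latent = rng.normal(size=(100, 3))                    # hidden driving factors
loadings = rng.normal(size=(3, 12))                   # map factors -> indicators
indicators = latent @ loadings + rng.normal(0, 0.05, size=(100, 12))
vulnerability = latent @ rng.normal(size=3) + rng.normal(0, 0.1, size=100)

pca = PCA(n_components=3)                             # keep the main components
components = pca.fit_transform(indicators)
model = LinearRegression().fit(components, vulnerability)
print("explained variance:", pca.explained_variance_ratio_.sum())
print("R^2 on components:", model.score(components, vulnerability))
```

Because the 12 synthetic indicators share 3 latent drivers, a few components retain nearly all the variance, which is the premise of the paper's dimensionality reduction.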
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561K (2021) https://doi.org/10.1117/12.2626448
Simulating chlorophyll-a concentration is an important method for monitoring lake eutrophication. In recent years, many advances have been made in chlorophyll-a concentration inversion models and water quality numerical models, but both kinds of model are of limited use in lake waters with small data samples. In this paper, building a non-mechanistic model on the relationship between chlorophyll-a concentration and multiple water quality indices is shown to be feasible: linear regression (LR), extreme learning machine (ELM), and genetic-algorithm-optimized ELM (GA-ELM) models were built to simulate chlorophyll-a concentration. The simulation results show that the ELM model outperforms the LR model, and the GA-ELM model works best. The genetic algorithm (GA) effectively improves the ELM: compared with the plain ELM model, the R2 of the GA-ELM model increased by 0.1570, RMSE decreased by 7.12 μg/L, and MAPE decreased by 0.7117. The study provides a simple and effective method for simulating chlorophyll-a concentration that can be applied to practical eutrophication monitoring in Donghu Lake.
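A basic ELM, a random hidden layer plus a closed-form least-squares output fit, can be sketched as below; the GA weight optimization and the real water-quality data are omitted, and the toy target function is an assumption for illustration:

```python
# Hedged sketch of an extreme learning machine (ELM): random, untrained
# hidden-layer weights followed by a least-squares output fit.
import numpy as np

rng = np.random.default_rng(7)

def elm_fit(X, y, hidden=100):
    W = rng.normal(size=(X.shape[1], hidden))        # random input weights
    b = rng.normal(size=hidden)                      # random biases
    H = np.tanh(X @ W + b)                           # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # closed-form output layer
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy stand-in: predict a "chlorophyll-a"-like value from 3 water indices.
X = rng.uniform(0, 1, size=(200, 3))
y = 10 * X[:, 0] + 5 * np.sin(3 * X[:, 1]) + X[:, 2]
W, b, beta = elm_fit(X, y)
pred = elm_predict(X, W, b, beta)
print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

The GA variant would search over the random input weights `W` and biases `b` instead of leaving them fixed, which is where the paper's improvement comes from.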
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561L (2021) https://doi.org/10.1117/12.2626461
This paper introduces a method of leg strength analysis for wind turbine units. First, the method of establishing FEM models is discussed, and then methods for calculating environmental loads are introduced. Taking an 800 t wind turbine unit as an example, several conditions are checked, including the jacking, preload, operating and survival conditions; the jacking system capacity and the spudcan bearing capacity are also checked. The results show that the leg strength and spudcan bearing capacity satisfy class rules, but the jacking capacity is less than the demand. The methods and ideas provided in this paper offer guidance to wind turbine unit designers.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561M (2021) https://doi.org/10.1117/12.2626810
With the approach of the big data era, risks to information and data security seriously affect people's lives and even public security. To mitigate these issues, an integrative security protection system covering the full life cycle of big data is proposed, in which the SM cryptographic algorithms protect data at every phase of the life cycle: collection, transmission, storage, exchange and sharing, and destruction. Relying on the security of SM2 (a public-key cryptographic algorithm based on elliptic curves) and SM4 (a block cipher algorithm), the protection mechanism secures identity access control and the encryption of data during transmission, storage, exchange and application.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561N (2021) https://doi.org/10.1117/12.2626835
This paper establishes a computer art design system with extension intelligence and introduces the ideas and methods of Extenics into the art design system. From the perspective of artificial intelligence in art design, it discusses how to generate new artistic cognition from existing cognition, new designs from existing designs, and new learning from learning. The research methods cover the extension of art, of design, of design ability and of learning. The results show that the ideas and research methods of extension-intelligent art design not only develop the concepts and thinking methods of "Extenics" but also provide new ideas and an innovative research perspective for intelligent computer art design.
Proceedings Volume International Conference on Algorithms, High Performance Computing, and Artificial Intelligence (AHPCAI 2021), 121561O (2021) https://doi.org/10.1117/12.2626757
Estimating a 6D object pose is an important and long-standing computer vision task. Although existing deep-learning-based approaches have achieved inspiring results, there is still much room for improvement. In this paper, we introduce an attention mechanism into an object pose estimation method, enabling the network to extract more distinguishing features on the objects and thereby obtain a more accurate 6D pose. We verify the proposed method on the LINEMOD dataset, and the results show that our method achieves state-of-the-art performance.