In credit card, shopping, and insurance systems, a small amount of fraud may be hidden in massive volumes of transaction data, so fraud detection is widely deployed in these systems. Extreme Gradient Boosting (XGBoost) has performed outstandingly in data-mining competitions, and many scholars apply it to classification and prediction tasks across diverse fields. XGBoost's performance depends heavily on its hyperparameters, and their large number makes good choices difficult. In this paper, we use an improved Lévy sailfish optimizer (LSFO) to solve this hyperparameter selection problem. Building on the sailfish optimizer, a relatively new swarm intelligence algorithm, LSFO adopts a spiral updating rule to refine the sailfish's search process and adds Lévy flights to improve its performance. To validate the improvement, LSFO is compared with several common optimization algorithms, including PSO, HHO, and WOA. A fraud dataset is then used to evaluate LSFO-optimized XGBoost (LSFO-XGBoost) against Decision Tree (DT), Random Forest (RF), SFO-XGBoost, and PSO-XGBoost. Because LSFO-XGBoost hyperparameter optimization requires long training times, the distributed Spark framework is applied to parallelize training. The experimental results show that the proposed LSFO effectively solves the XGBoost hyperparameter problem, and that the distributed LSFO-XGBoost algorithm improves both time consumption and performance.
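To make the idea concrete, the following is a minimal sketch of a Lévy-flight-enhanced population search over two XGBoost hyperparameters. The Lévy step uses Mantegna's algorithm; the simplified "move toward the elite sailfish" update and the `toy_loss` stand-in for an XGBoost validation loss are illustrative assumptions, not the paper's exact SFO attack/spiral equations.

```python
import math
import random

def levy_step(beta=1.5):
    """One heavy-tailed Levy-flight step via Mantegna's algorithm (beta in (1, 2])."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def lsfo_like_search(loss, bounds, n_agents=10, n_iter=100, seed=0):
    """Toy Levy-enhanced population search over hyperparameters.

    `loss` stands in for an XGBoost cross-validation error; the update rule
    (contract toward the best agent, perturbed by a Levy step) is a
    simplified illustration of the LSFO idea.
    """
    random.seed(seed)
    agents = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    best = list(min(agents, key=loss))
    for _ in range(n_iter):
        for a in agents:
            for d, (lo, hi) in enumerate(bounds):
                # Move toward the elite agent, with a Levy-flight perturbation.
                a[d] += 0.1 * (best[d] - a[d]) + 0.01 * levy_step() * (hi - lo)
                a[d] = min(max(a[d], lo), hi)  # keep inside the search box
        cand = min(agents, key=loss)
        if loss(cand) < loss(best):
            best = list(cand)
    return best

# Hypothetical stand-in for an XGBoost CV loss over (learning_rate, max_depth):
toy_loss = lambda p: (p[0] - 0.1) ** 2 + (p[1] - 6.0) ** 2
best = lsfo_like_search(toy_loss, bounds=[(0.01, 0.5), (2, 12)])
```

In a real pipeline, `toy_loss` would be replaced by a function that trains XGBoost with the candidate hyperparameters and returns a held-out error, and each evaluation could be dispatched as a Spark task to parallelize the expensive training step.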
Compared with traditional knowledge-graph-enhanced recommendation methods, this paper introduces a multi-task learning module that alternately trains the knowledge graph embedding task and the recommendation task, alleviating the data sparsity and cold-start problems of traditional recommendation methods. Specifically, in the multi-task learning module, item features and contextual content features are passed through an interactive attention network, yielding finer-grained features after feature interaction. A gating mechanism then processes the item features and the context-fused entity features, filtering out unimportant features, retaining the important latent features, and capturing implicit higher-order feature interactions more effectively. The multi-task learning objective is jointly optimized. The validity of our model was verified on three publicly available datasets.
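The gating step described above can be sketched as an element-wise learned gate that mixes item features with knowledge-graph entity features. The weight and bias values below are illustrative placeholders, not trained parameters, and the function name is hypothetical:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(item_vec, entity_vec, w_item, w_entity, bias):
    """Element-wise gate over item and KG-entity features.

    gate_i  = sigmoid(w_item_i * item_i + w_entity_i * entity_i + b_i)
    fused_i = gate_i * item_i + (1 - gate_i) * entity_i

    Because each gate lies in (0, 1), every fused dimension is a convex
    combination of the two inputs: dimensions the gate drives toward 0 or 1
    are effectively filtered in favor of one feature source.
    """
    fused = []
    for i_v, e_v, wi, we, b in zip(item_vec, entity_vec, w_item, w_entity, bias):
        g = sigmoid(wi * i_v + we * e_v + b)
        fused.append(g * i_v + (1 - g) * e_v)
    return fused

# Illustrative 3-dimensional feature vectors:
item = [0.9, 0.1, 0.4]
entity = [0.2, 0.8, 0.5]
fused = gated_fusion(item, entity, w_item=[1.0] * 3, w_entity=[1.0] * 3, bias=[0.0] * 3)
```

In the actual model the gate parameters would be learned jointly with the recommendation and knowledge-graph objectives, so the gate itself decides which dimensions carry important potential features.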