Paper
9 January 2024 A music generation model based on Bi-LSTM
Yong Bai
Proceedings Volume 12969, International Conference on Algorithm, Imaging Processing, and Machine Vision (AIPMV 2023); 129691T (2024) https://doi.org/10.1117/12.3014368
Event: International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023), 2023, Qingdao, China
Abstract
A music generation model based on a unidirectional LSTM does not take future information into account when generating music: it learns only the dependence of the current moment on past information, producing music with poor structural stability and subpar quality. To address this issue, we develop a music generation model based on a bidirectional LSTM (Bi-LSTM). During training, this model captures musical information from both past and future time steps, yielding a probability distribution over musical elements that more closely approximates real-world music. This, in turn, improves the structural stability and quality of the generated compositions. Finally, we conducted validation experiments on the proposed approach, and the results demonstrate its effectiveness.
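The abstract names the architecture (a bidirectional LSTM predicting musical elements) but no framework or hyperparameters. The following is a minimal sketch of such a model in PyTorch; the vocabulary size, embedding and hidden dimensions, and the note-token representation are all illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class BiLSTMMusicModel(nn.Module):
    """Hypothetical Bi-LSTM music model: note-token embedding ->
    bidirectional LSTM -> linear projection to a note vocabulary."""

    def __init__(self, vocab_size=128, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # bidirectional=True lets each time step see past AND future context,
        # the property the paper attributes to the Bi-LSTM over a plain LSTM
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # forward and backward hidden states are concatenated -> 2 * hidden_dim
        self.proj = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer note ids
        out, _ = self.lstm(self.embed(tokens))
        return self.proj(out)  # (batch, seq_len, vocab_size) logits

model = BiLSTMMusicModel()
notes = torch.randint(0, 128, (2, 16))  # toy batch of 2 note sequences
logits = model(notes)
print(tuple(logits.shape))
```

Trained with cross-entropy against the true next note at each position, the output logits would define the per-step probability distribution over musical elements that the abstract refers to.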
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Yong Bai "A music generation model based on Bi-LSTM", Proc. SPIE 12969, International Conference on Algorithm, Imaging Processing, and Machine Vision (AIPMV 2023), 129691T (9 January 2024); https://doi.org/10.1117/12.3014368