Summary: This study explores the application of deep learning models to multi-track music generation, with the aim of improving the efficiency and quality of music production. Given the limited capability of traditional methods in extracting and representing audio features, a multi-track music generation model is proposed that combines Bidirectional Encoder Representations from Transformers (BERT) with a Transformer network. The model first uses BERT to encode and represent music data, capturing its semantic and emotional information. The encoded music features are then fed into the Transformer network, which learns the temporal relationships and structural patterns among music sequences and generates new multi-track compositions. Evaluation shows that, compared with other algorithms, the proposed model achieves an accuracy of 95.98% in music generation prediction and improves precision by 4.77%; its advantages are most pronounced in predicting the pitch of music tracks. The proposed model thus performs well in both accuracy and pitch prediction, offering a valuable experimental reference for research and practice in multi-track music generation.
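The abstract describes a two-stage pipeline: a bidirectional encoder (BERT) produces contextual representations of the input music, and a Transformer then models temporal structure to emit tokens for several tracks. The paper's actual architecture and hyperparameters are not given here, so the following PyTorch sketch is only illustrative: a small `nn.TransformerEncoder` stands in for BERT, and all layer sizes, the track count, and the class name are assumptions.

```python
import torch
import torch.nn as nn

class MultiTrackMusicGenerator(nn.Module):
    """Illustrative sketch of a BERT-style encoder + Transformer decoder
    for multi-track generation. Sizes and names are assumptions, not the
    values used in the study."""

    def __init__(self, vocab_size=128, n_tracks=4, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Bidirectional encoder (BERT stand-in): contextualizes the input
        # sequence, loosely mirroring the paper's semantic encoding step.
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Transformer decoder: learns temporal relationships over the
        # target sequence while attending to the encoded memory.
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        # One output head per track predicts that track's next token
        # (e.g. a pitch/event index in a vocabulary of size vocab_size).
        self.heads = nn.ModuleList(
            nn.Linear(d_model, vocab_size) for _ in range(n_tracks)
        )

    def forward(self, src_tokens, tgt_tokens):
        memory = self.encoder(self.embed(src_tokens))          # (B, S, d_model)
        hidden = self.decoder(self.embed(tgt_tokens), memory)  # (B, T, d_model)
        # Stack per-track logits: (n_tracks, B, T, vocab_size)
        return torch.stack([head(hidden) for head in self.heads])

model = MultiTrackMusicGenerator()
src = torch.randint(0, 128, (2, 16))  # batch of 2 encoded source sequences
tgt = torch.randint(0, 128, (2, 8))   # partial target sequences to continue
logits = model(src, tgt)
print(logits.shape)  # torch.Size([4, 2, 8, 128])
```

In a real system the token vocabulary would come from a symbolic music encoding (e.g. MIDI-derived events), and generation would proceed autoregressively by sampling from each track's logits at the final time step.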