Automatic music emotion classification using artificial neural network based on vocal and instrumental sound timbres
Detecting emotion features in a song remains a challenge in several areas of research, especially music emotion classification (MEC). To classify a given song under a certain mood or emotion, the machine learning algorithm must be intelligent enough to learn the data features as...
Published in: | Frontiers in Artificial Intelligence and Applications |
---|---|
Main Author: | Mokhsin M.B. |
Format: | Conference paper |
Language: | English |
Published: | IOS Press BV, 2014 |
Online Access: | https://www.scopus.com/inward/record.uri?eid=2-s2.0-84948822822&doi=10.3233%2f978-1-61499-434-3-3&partnerID=40&md5=211b62ff8b9926e7fefd4a90983d3571 |
id |
2-s2.0-84948822822 |
spelling |
2-s2.0-84948822822 Mokhsin M.B.; Rosli N.B.; Wan Adnan W.A.; Abdul Manaf N. Automatic music emotion classification using artificial neural network based on vocal and instrumental sound timbres 2014 Frontiers in Artificial Intelligence and Applications 265 10.3233/978-1-61499-434-3-3 https://www.scopus.com/inward/record.uri?eid=2-s2.0-84948822822&doi=10.3233%2f978-1-61499-434-3-3&partnerID=40&md5=211b62ff8b9926e7fefd4a90983d3571 Detecting emotion features in a song remains a challenge in several areas of research, especially music emotion classification (MEC). To classify a given song under a certain mood or emotion, the machine learning algorithm must be intelligent enough to learn the data features and match them to the correct emotion. Until now, there have been only a few studies on MEC that exploit timbre features from the vocal part of a song together with its instrumental part. Most existing work in MEC is done by looking at audio, lyrics, social tags, or a combination of two or more of these classes. The question is whether exploiting timbre features from both vocal and instrumental sounds helps produce positive results in MEC. Thus, this research presents work on detecting emotion features in Malay popular music using an artificial neural network, by extracting timbre features from both vocal and instrumental sound clips. The findings of this research will collectively improve MEC based on the manipulation of vocal and instrumental sound timbre features, and will also contribute to the literature of music information retrieval, affective computing and psychology. © 2014 The authors and IOS Press. All rights reserved. IOS Press BV 0922-6389 English Conference paper |
author |
Mokhsin M.B.; Rosli N.B.; Wan Adnan W.A.; Abdul Manaf N. |
spellingShingle |
Mokhsin M.B.; Rosli N.B.; Wan Adnan W.A.; Abdul Manaf N. Automatic music emotion classification using artificial neural network based on vocal and instrumental sound timbres |
author_facet |
Mokhsin M.B.; Rosli N.B.; Wan Adnan W.A.; Abdul Manaf N. |
author_sort |
Mokhsin M.B.; Rosli N.B.; Wan Adnan W.A.; Abdul Manaf N. |
title |
Automatic music emotion classification using artificial neural network based on vocal and instrumental sound timbres |
title_short |
Automatic music emotion classification using artificial neural network based on vocal and instrumental sound timbres |
title_full |
Automatic music emotion classification using artificial neural network based on vocal and instrumental sound timbres |
title_fullStr |
Automatic music emotion classification using artificial neural network based on vocal and instrumental sound timbres |
title_full_unstemmed |
Automatic music emotion classification using artificial neural network based on vocal and instrumental sound timbres |
title_sort |
Automatic music emotion classification using artificial neural network based on vocal and instrumental sound timbres |
publishDate |
2014 |
container_title |
Frontiers in Artificial Intelligence and Applications |
container_volume |
265 |
container_issue |
|
doi_str_mv |
10.3233/978-1-61499-434-3-3 |
url |
https://www.scopus.com/inward/record.uri?eid=2-s2.0-84948822822&doi=10.3233%2f978-1-61499-434-3-3&partnerID=40&md5=211b62ff8b9926e7fefd4a90983d3571 |
description |
Detecting emotion features in a song remains a challenge in several areas of research, especially music emotion classification (MEC). To classify a given song under a certain mood or emotion, the machine learning algorithm must be intelligent enough to learn the data features and match them to the correct emotion. Until now, there have been only a few studies on MEC that exploit timbre features from the vocal part of a song together with its instrumental part. Most existing work in MEC is done by looking at audio, lyrics, social tags, or a combination of two or more of these classes. The question is whether exploiting timbre features from both vocal and instrumental sounds helps produce positive results in MEC. Thus, this research presents work on detecting emotion features in Malay popular music using an artificial neural network, by extracting timbre features from both vocal and instrumental sound clips. The findings of this research will collectively improve MEC based on the manipulation of vocal and instrumental sound timbre features, and will also contribute to the literature of music information retrieval, affective computing and psychology. © 2014 The authors and IOS Press. All rights reserved. |
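As an illustration only, not the authors' implementation: the pipeline the abstract describes, extracting a timbre descriptor from the vocal and instrumental streams and feeding the features to a neural network that outputs emotion-class probabilities, could be sketched roughly as below. The spectral-centroid feature, the synthetic stand-in clips, the network shape, and the choice of four emotion classes are all assumptions made for the sketch.

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Spectral centroid: a basic timbre descriptor ('brightness') in Hz."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

def mlp_forward(x, w1, b1, w2, b2):
    """One sigmoid hidden layer, softmax output over emotion classes."""
    h = 1.0 / (1.0 + np.exp(-(x @ w1 + b1)))
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()

sr = 22050
t = np.arange(sr) / sr
vocal = np.sin(2 * np.pi * 440 * t)         # stand-in for a vocal clip
instrumental = np.sin(2 * np.pi * 880 * t)  # stand-in for an instrumental clip

# One timbre feature per stream, concatenated into the input vector
# (normalized by the sample rate to keep values small).
x = np.array([spectral_centroid(vocal, sr),
              spectral_centroid(instrumental, sr)]) / sr

# Untrained random weights: the sketch shows the pipeline shape,
# not a fitted classifier. Four emotion classes are assumed.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 4)), np.zeros(4)

probs = mlp_forward(x, w1, b1, w2, b2)
print(probs)  # distribution over the four assumed emotion classes
```

In a real system the weights would be trained on labeled clips, and the single centroid would be replaced by a richer timbre feature set (e.g. MFCCs, spectral rolloff) computed separately for the vocal and instrumental streams before concatenation.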
publisher |
IOS Press BV |
issn |
0922-6389 |
language |
English |
format |
Conference paper |
accesstype |
|
record_format |
scopus |
collection |
Scopus |
_version_ |
1809677609982754816 |