Summary: | Detecting emotion features in a song remains a challenge in various areas of research, especially music emotion classification (MEC). To classify a song by mood or emotion, machine learning algorithms must be intelligent enough to learn the data features and match them to the correct emotion. To date, only a few studies in MEC have exploited timbre features from the vocal part of a song together with the instrumental part. Most existing work in MEC examines audio, lyrics, social tags, or a combination of two or more of these classes. The question is whether exploiting timbre features from both vocal and instrumental sounds helps to produce positive results in MEC. Thus, this research presents work on detecting emotion features in Malay popular music using an artificial neural network, extracting timbre features from both vocal and instrumental sound clips. The findings of this research will collectively improve MEC through the manipulation of vocal and instrumental timbre features, and contribute to the literature of music information retrieval, affective computing, and psychology. © 2014 The authors and IOS Press. All rights reserved.
|