Browse the posts in this blog by category.

Our platform aims to foster interdisciplinary discussion of music emotion annotation and analysis. We want to make our tool available to the research community and gather feedback on how to improve it: the selection of music, usability, interface design, and so on. Our algorithms can only be as good as our agreement, and we need to work collectively to understand it.

Several studies suggest that the main reason people engage with music is its emotional effect. This makes the idea of computational algorithms that can predict the emotions in music particularly intriguing and provocative. These algorithms extract emotionally relevant acoustic features from the audio signal and correlate them with the emotions the music may convey, express, or induce.
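As a rough illustration of the pipeline described above, the sketch below extracts two simple acoustic features (RMS energy and spectral centroid) and fits a linear model mapping them to an arousal score. The features, the synthetic "clips", and the labels are all illustrative assumptions, not the features or models any particular MER system uses.

```python
# Hedged sketch of a music emotion recognition (MER) pipeline:
# acoustic features -> linear model -> emotion dimension (here, arousal).
# All feature choices, signals, and labels are illustrative assumptions.
import numpy as np

def rms_energy(signal):
    # Loudness proxy: root-mean-square amplitude.
    return float(np.sqrt(np.mean(signal ** 2)))

def spectral_centroid(signal, sr):
    # Brightness proxy: magnitude-weighted mean frequency of the spectrum.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

def extract_features(signal, sr):
    return np.array([rms_energy(signal), spectral_centroid(signal, sr)])

# Toy "training set": a loud, bright tone labeled high arousal and a
# quiet, dark tone labeled low arousal (hypothetical labels).
sr = 22050
t = np.linspace(0, 1, sr, endpoint=False)
bright = 0.9 * np.sin(2 * np.pi * 2000 * t)   # loud, high-frequency tone
dark = 0.1 * np.sin(2 * np.pi * 200 * t)      # quiet, low-frequency tone
X = np.stack([extract_features(bright, sr), extract_features(dark, sr)])
y = np.array([1.0, -1.0])  # arousal labels (illustrative)

# Fit y ~ X w + b by least squares.
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_arousal(signal, sr):
    f = extract_features(signal, sr)
    return float(np.append(f, 1.0) @ w)
```

Real systems use far richer features (timbre, rhythm, harmony) and trained regressors or classifiers, but the shape of the computation is the same: signal in, feature vector, predicted emotion value out.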

The need to incorporate context-based information into the Music Information Retrieval field, and particularly into Music Emotion Recognition (MER), has become critical. In the case of music and emotion, the strong relationship between speech and music can be considered context, since our linguistic and cultural backgrounds produce fundamental differences in how we perceive sound. This theory is known as the vocal similarity hypothesis.

Insomnia and sadness...

With the growth of digital media and new demands for classification and indexing, Automatic Instrument Recognition has become an increasingly important task in Music Information Retrieval (MIR).