Post-conference note: If you attended a tutorial, we would be happy to hear your comments; please fill in this survey. Thanks!
Tutorials take place on Monday 8th October 2012. The fee for attending one tutorial is 30€ if you register by July 23rd (40€ after that date). The fee for attending two tutorials, one in the morning and one in the afternoon, is 60€ if you register by July 23rd (80€ after that date). This fee is not included in the conference fee.
The morning tutorials are:
- Tutorial 1: "Leveraging Repetition to Parse the Auditory Scene" by Josh McDermott, Bryan Pardo, and Zafar Rafii (slides)
- Tutorial 2: "Music Affect Recognition: The State-of-the-art and Lessons Learned" by Xiao Hu and Yi-Hsuan (Eric) Yang (slides, blog post from an attendee)
The afternoon tutorials are:
by Josh McDermott, Bryan Pardo, and Zafar Rafii
Repetition is a fundamental element in generating and perceiving structure in music and audio in general. We propose a tutorial that begins by outlining the psychological basis for the application of repetition to audio source separation and identification. This will be followed by an overview of a new class of practical algorithms to perform repetition-based source separation. We will conclude by linking the work in repetition-based separation to recent work in robust principal component analysis.
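The core idea of repetition-based separation can be sketched in a few lines (a minimal illustration on a magnitude spectrogram, not the presenters' actual algorithm; `repet_mask` and the fixed repeating period are assumptions made here for clarity): fold the spectrogram at the repeating period and take the median across repetitions, so the repeating background survives while non-repeating foreground is suppressed.

```python
import numpy as np

def repet_mask(V, period):
    """Estimate a repeating-background soft mask from a magnitude
    spectrogram V (freq x frames), given a repeating period in frames.
    Simplified sketch of the repetition-based separation idea."""
    n_freq, n_frames = V.shape
    n_seg = n_frames // period
    usable = n_seg * period
    # Fold the spectrogram into (freq, segments, period): each segment
    # is one repetition of the assumed repeating structure.
    folded = V[:, :usable].reshape(n_freq, n_seg, period)
    # Median across segments keeps what repeats, discards what doesn't.
    model = np.median(folded, axis=1)            # (freq, period)
    model = np.tile(model, n_seg)                # back to full length
    model = np.minimum(model, V[:, :usable])     # background <= mixture
    mask = model / (V[:, :usable] + 1e-12)       # soft mask in [0, 1]
    return mask
```

Applying the mask to the mixture spectrogram yields the repeating background; one minus the mask yields the non-repeating foreground (e.g. a singing voice over a looped accompaniment).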
by Xiao Hu and Yi-Hsuan (Eric) Yang
The affective aspect of music (popularly known as emotion or mood) has gained fast-growing attention in the Music Information Retrieval (MIR) community, and recent years have witnessed an explosive growth of studies on music affect recognition. This tutorial gives ISMIR participants an opportunity to learn about a range of topics closely involved in affective indexing of music, and to discuss how findings and methods can (or cannot) be borrowed from and applied to other multimedia information types such as speech (audio), images (visual), and movies (audio-visual). Topics in this tutorial include:
- the most influential psychological models of human emotion;
- musical, personal, and situational factors of music listening that influence the perception and description of music affect;
- building emotion taxonomies from online music metadata and social media;
- best practices for constructing ground-truth datasets;
- approaches to and tools for automatic affect classification and regression;
- benchmarking and evaluation;
- a sample of deployed prototype systems;
- issues and challenges in affect analysis;
- the common ground of affect in music, images, and movies.
All the tools and systems covered in this tutorial are open source or freeware, and the datasets are available in transformed formats (due to the copyright of the audio and lyrics). The tutorial will combine lectures, group discussions, demonstrations of sample systems and technical results with illustrative musical examples, and spontaneous interaction between the presenters and the audience.
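As a toy illustration of the affect-regression topic mentioned above (a sketch on synthetic data; the feature dimensionality, the linear mapping, and the `ridge_fit` helper are invented here for illustration and are not from the tutorial), dimensional approaches map per-song audio features to continuous valence-arousal coordinates, for instance with ridge regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each song is an 8-dimensional feature vector
# (e.g. timbre/tempo statistics); labels are valence-arousal pairs.
n_songs, n_feats = 200, 8
X = rng.normal(size=(n_songs, n_feats))
true_W = rng.normal(size=(n_feats, 2))               # unknown mapping
Y = X @ true_W + 0.1 * rng.normal(size=(n_songs, 2))  # noisy labels

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

W = ridge_fit(X, Y)
rmse = np.sqrt(np.mean((X @ W - Y) ** 2))
```

In practice the features would come from audio analysis and the labels from human annotations, which is exactly where the tutorial's ground-truth and evaluation topics come in.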
by Mark D Plumbley, Simon Dixon, and Chris Cannam
The need to develop and reuse software to process data is almost universal in music informatics research. Many methods, including most of those published at ISMIR, are developed in tandem with software implementations, and some of them are too complex or too fundamentally software-based to be reproduced readily from a published paper alone. For this reason, it is helpful for sustainable research to have software and data published along with papers. In practice, however, non-publication of code and data is still the norm, and research software is commonly lost in the years following publication of the associated methods.
During this tutorial we will discuss common barriers to publication of software and data, and will present a practical hands-on session in which attendees will explore tools and methods to help them overcome these barriers. The tutorial will rapidly cover the use of version control software, code hosting facilities, aspects of testing and provenance, and software licensing for publication. Worked examples will be drawn from the music and audio fields, and hands-on help will be provided by experienced researcher-developers from the Centre for Digital Music, Luís Figueira and Steve Welburn. This tutorial will be of immediate practical interest to researchers within the music informatics community, and will also be highly relevant to research supervisors and research group leaders with an interest in policy and guidance.
by François Pachet
Jazz is a lively genre that has long been a favorite of computer music research. This tutorial aims to explain the basics of jazz, show why it is interesting, and give ISMIR attendees insights that can help them design better jazz-focused MIR systems. I will show that jazz is a game based, to a first approximation, on well-defined rules. I will describe these rules in a non-technical way, understandable by non-specialists. I will also cover some more advanced topics, such as the use of side-slips as a reasoned mechanism to play "outside the rules". The tutorial uses many video and audio examples throughout, and has already been presented twice with great success.
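The "jazz as a rule-based game" idea can be made concrete with one of its best-known rules (a minimal sketch using standard functional harmony, not material from the tutorial; the `two_five_one` function is an illustrative name): the ii-V-I cadence, which derives three chords from any major key by fixed interval offsets.

```python
# Twelve pitch classes, flats chosen for readability.
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def two_five_one(key):
    """Return the ii-V-I progression in a given major key.

    The supertonic (2 semitones up) takes a minor seventh chord, the
    dominant (7 semitones up) a dominant seventh, and the tonic a
    major seventh -- the standard jazz cadence."""
    root = NOTES.index(key)
    ii = NOTES[(root + 2) % 12] + "m7"
    v = NOTES[(root + 7) % 12] + "7"
    i = key + "maj7"
    return [ii, v, i]
```

A side-slip, in this picture, is a deliberate transposition of such a pattern a semitone away from the expected key before resolving back, i.e. a rule-governed way of breaking the rules.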