Digital Musicology at Oxford Summer School
Jennifer Ward (RISM Central Office)
Monday, September 2, 2019
I’ve been wanting to attend the Digital Humanities at Oxford Summer School (DHOxSS) for some time now, thanks to glowing reviews from librarian friends. I was glad when my summer plans finally aligned with this year’s Summer School, and directly after the IAML Congress in Kraków I hopped over to Oxford. I was fortunate enough to receive a bursary to attend the Digital Musicology strand, which was convened by Kevin Page. A report summarizing my week has been published on the DHOxSS website. In this space, I’d like to go into more detail about the tools and techniques we used.
The week was broadly divided into three modules focused on digital audio, digital notation, and digital descriptions. Class sessions mixed lectures by a variety of scholars with hands-on time to practice the new tools.
Digital audio processing turns sound files into numbers: measurable features that can be stored, queried, and compared in a database. Along the way we dug deep into the field of music information retrieval (see ISMIR and MIREX), n-grams, and the physics of audio waves. We used Sonic Visualiser and Vamp plugins to extract features from sound recordings; the computer can then search those features for patterns, such as “In nomine” melodies in a corpus of Renaissance lute music. Songle and Chordify are other projects that recognize audio patterns; the latter lets you play along with YouTube videos by displaying their chords. The Baudelaire Song Project has visualization tools that let you browse 1,700 settings of the poet’s texts by decade, theme, genre, and more.
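If you would rather script this kind of feature extraction than work in the Sonic Visualiser GUI, here is a minimal sketch (my own illustration, not a workshop exercise) using the vamp host package for Python together with librosa. It assumes the NNLS Chroma Vamp plugin is installed on your system, and the audio file name is a placeholder:

```python
# Minimal sketch: run a Vamp plugin from Python instead of the
# Sonic Visualiser GUI. Requires the "vamp" and "librosa" packages
# and the NNLS Chroma Vamp plugin installed on the system.
import librosa
import vamp

# "lute_piece.wav" is a hypothetical file name
data, rate = librosa.load("lute_piece.wav", sr=None)

# Extract a chromagram (pitch-class energy over time)
result = vamp.collect(data, rate, "nnls-chroma:nnls-chroma")
step, chroma = result["matrix"]
print(chroma.shape)  # (n_frames, 12): one pitch-class profile per frame
```

A feature matrix like this is exactly the kind of “numbers in a database” that pattern-search tools work over.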
Computers can also analyze notation, if they can be taught to read it first. Frauke Jürgensen uses the Humdrum toolkit and its **kern representation to encode and process mensural notation; the computer can then count vast numbers of intervals, cadences, and other features that would take a human an inordinate amount of time to tally by hand. We used the Music Encoding Initiative (MEI) standard, the Atom text editor, the Verovio viewer, and the music21 library (in Python/Jupyter) to encode musical details in a way that allows analysis to be performed.
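As a taste of what that counting looks like in practice, here is a short, self-contained music21 example (my own illustration, not the workshop's exact exercise) that tallies the melodic intervals in the top voice of a Bach chorale from music21's built-in corpus:

```python
# Count melodic intervals in the top voice of a chorale,
# using a score bundled with music21's corpus.
from collections import Counter
from music21 import corpus, interval

score = corpus.parse('bach/bwv66.6')
notes = list(score.parts[0].flatten().notes)  # top voice, notes in order

melodic = Counter()
for a, b in zip(notes, notes[1:]):
    melodic[interval.Interval(noteStart=a, noteEnd=b).name] += 1

print(melodic.most_common(5))  # e.g. mostly seconds in a chorale melody
```

Scale the same loop up to a whole corpus and you have the kind of interval statistics that would take weeks to compile by hand.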
The last theme of the workshop focused on linking knowledge: not merely throwing information on the web, but enriching discovery and analysis by connecting with other projects and datasets. This brings us into the world of the semantic web, the linked open data cloud (RISM is here), the Resource Description Framework (RDF), SPARQL queries, and other acronyms you may have encountered if you are familiar with Big Data concepts. Wikidata's query interface demonstrates the power of linked data in an approachable way: click on Examples and try the map of composers by birthplace.
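To give a flavor of SPARQL, here is a hedged sketch (not from the course materials) that asks Wikidata's public endpoint for composers and their birthplaces from Python. The property and item IDs (P106 occupation, Q36834 composer, P19 place of birth) are correct to my knowledge, but verify them on wikidata.org before building on this:

```python
# Query the Wikidata SPARQL endpoint for composers and birthplaces.
import requests

query = """
SELECT ?composer ?composerLabel ?birthplaceLabel WHERE {
  ?composer wdt:P106 wd:Q36834;    # occupation: composer
            wdt:P19 ?birthplace.   # place of birth
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

r = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "dhoxss-demo/0.1"},  # the endpoint asks clients to identify themselves
)
for row in r.json()["results"]["bindings"]:
    print(row["composerLabel"]["value"], "-", row["birthplaceLabel"]["value"])
```

Add a coordinates property and you have the raw data behind the composers-by-birthplace map in Wikidata's Examples menu.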
What brought all these concepts together at the end of the week was the F-TEMPO project by Tim Crawford, which shows how genuine musicological research can come out of all these digital tools. It offers full-text search of early music across the ca. 320 digitized editions in Early Music Online, over 40,000 pages, using optical music recognition (OMR) to find similar patterns across the corpus. Running behind the scenes are Aruspix (for the OMR) and MEI. In the demo, you can take a page from an anthology and search for similar music, thereby finding the same piece in other anthologies, even when a different (or anonymous) composer is named or the piece appears in a different arrangement. You can also try uploading your own images to search!
The fascinating thread running through the DHOxSS week was using digital tools to ask new and different questions. Thanks to the amount of digitized data out there, whether audio, notational, or bibliographical, there are data pools (including RISM's) large enough to open up new areas of inquiry.