Experimental technology and digital pedagogy
2016-03-31
Digital pedagogy – the use of digital tools in the humanities classroom – has gotten a lot of attention lately, especially from those of us whose work overlaps with the digital humanities. Under this broad umbrella fall a wide variety of teaching strategies, from the use of Prezi and PowerPoint to the development of assignments related to Twitter, Buzzfeed, WordPress, Scalar, or Omeka. But all of these are mature, relatively user-friendly approaches to digital publication. What happens when you try to bring a developing tool – one still under active development – into the classroom?
This semester, as part of a collaborative experiment in digital pedagogy, I worked with Tanya Clement (assistant professor in the iSchool at UT Austin) to bring a sound-analysis tool called ARLO into a graduate seminar. ARLO, which is part of the HiPSTAS project, is a tool designed to analyze large collections of audio recordings such as anthropological recordings from the Smithsonian, collections of poetry readings, or even recordings of ambient environmental sounds. Rather than identifying words in a recording, ARLO works by analyzing and recognizing patterns in sound quality. It has been used, for example, to analyze laughter, chanting, and monotony.
We brought in Dr. Clement to demo her tool for the “History of Modern Latin America through Digital Archives” graduate seminar taught by Dr. Virginia Burnett. This course is an experimental effort to integrate digital archives, historical research, and digital scholarship into a single seminar. In its first iteration (taught last spring), the course focused on conducting research using the Guatemala Police Archives. In this second phase of course design, we are working to expand its archival scope while also encouraging students to use digital tools and platforms for their final projects. To do this, we designed a workshop based on the Radio Venceremos collection of digital audio recordings of guerrilla radio from the civil war in El Salvador. In this blog post, I will briefly describe how we used ARLO in the classroom, what challenges we faced, and what we learned.
Using ARLO in the classroom
We integrated ARLO into the course in four stages. As a preliminary exercise, students were asked to listen to sound recordings from Radio Venceremos, identify interesting sounds, and annotate them in a spreadsheet. After they completed this work, Dr. Clement gave a 1.5-hour presentation on ARLO, its history, and its basic functions, then worked with the group to identify how we might use ARLO productively on the Radio Venceremos collection. The result was a list of sound categories (such as birds, gunfire, specific word sequences, and music) that ARLO might be able to recognize.
Students left the workshop with a set of interesting sound categories. As a follow-up assignment, they were asked to listen again to their audio file using the ARLO interface and to “tag” every instance of the sounds of interest. These tags would then be used by ARLO to find more clips from the collection with a similar sound. After the files had been tagged, we ran both supervised and unsupervised learning on the Radio Venceremos collection. Through unsupervised learning, ARLO takes bits of sound and clusters similar ones together to form categories. These categories can potentially reveal unrecognized patterns in the collection. Through supervised learning, ARLO takes the tags that students produced and attempts to find similar sound clips across the collection, ideally recognizing, for example, every time a bird called or a gun was fired.
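The two modes described above can be sketched in miniature. The following is a hypothetical Python illustration, not ARLO's actual code: the feature vectors, category names (“bird,” “gunfire”), and the simple k-means and nearest-prototype methods are all stand-ins for ARLO's real spectral features and learning algorithms.

```python
import numpy as np

# Toy stand-ins for spectral feature vectors extracted from audio windows.
# In ARLO these would come from spectrograms of actual recordings; here we
# fabricate two well-separated synthetic "sound qualities" for illustration.
rng = np.random.default_rng(0)
bird_like = rng.normal(loc=[5.0, 1.0], scale=0.3, size=(20, 2))
gunfire_like = rng.normal(loc=[1.0, 5.0], scale=0.3, size=(20, 2))
windows = np.vstack([bird_like, gunfire_like])

# Unsupervised step: a naive 2-means clustering groups similar-sounding
# windows without any labels, analogous to ARLO's cluster discovery.
centroids = windows[[0, -1]].copy()
for _ in range(10):
    # Assign each window to its nearest centroid, then recompute centroids.
    dists = ((windows[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = np.argmin(dists, axis=1)
    centroids = np.array([windows[labels == k].mean(axis=0) for k in (0, 1)])

# Supervised step: student tags (here, one tagged window per category) act
# as training examples; new clips are matched to the nearest tagged example.
tagged = {"bird": bird_like[0], "gunfire": gunfire_like[0]}

def classify(window):
    """Return the tag whose example is closest to this window."""
    return min(tagged, key=lambda name: np.linalg.norm(window - tagged[name]))
```

In this sketch the unsupervised pass recovers the two groups on its own, while the supervised pass only finds categories that students have already tagged – the same trade-off we saw between ARLO's two modes.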
Challenges of teaching with developing tools
David Bliss, a research assistant with the Human Rights Documentation Initiative, faced the monumental (and technically complicated) task of ingesting the Radio Venceremos files into ARLO. The fact that the ARLO interface is still in Beta compounded this challenge, as did distribution restrictions on the Radio Venceremos files, which can be streamed but not downloaded. The interface's Beta status also meant that we had to be in constant communication with Dr. Clement and her team of developers as we went through the process of ingesting, tagging, and analyzing these files. We were fortunate that every team member was generous with their (paid) labor, but this is not a sustainable model for classroom instruction. What kind of up-front labor can we reasonably expect in teaching with developing tools?
ARLO's Beta interface presents a steep learning curve for instructors and students alike. Even with careful instructions, the simple tagging assignment proved challenging as students attempted to manipulate an unfamiliar and not-fully-documented interface. At the same time, a full understanding of ARLO's functionality would require some familiarity with both sound theory and machine learning. What kinds of skills are prioritized when teaching with developing tools?
As a result of the steep learning curve, many of our results were imperfect at best (as described below). This leads to a third question: What kind of outcomes can we expect from teaching with developing tools?
Results of the Implementation
Results from our small-scale analysis are illustrated in two videos, one relating to unsupervised learning and the other to supervised learning.
Video describing unsupervised learning of the Radio Venceremos collection, by ARLO developer Abhinav Malhotra
Video describing supervised learning of the Radio Venceremos collection, by LLILAS Benson research assistant Hannah Alpert-Abrams.
Bringing a developing tool into the classroom offered graduate students new opportunities to see into the black box of development and design. This project drew attention to the iterative nature of digital scholarship and revealed some of the logistical challenges of conducting quantitative humanistic research. It also introduced the kind of sophisticated research that can be done using complex digital tools, and showcased innovative research being conducted on our campus.
Yet at this point, the questions of labor, learning, and outcomes remain unresolved. How do you manage labor when working with developing tools? And what kinds of skills are prioritized when bringing these tools into the classroom? It is certainly easier and less labor-intensive to teach with fully developed products like Twitter or Google N-Grams, but the rewards are different.
Would you bring developing tools into your classroom? How would you address these questions?