Doodle a drum or a saxophone and conjure a multi-instrumental composition. Look into a webcam, speak, and watch your mouth move across the screen – the doorway to a laughter-inducing chain reaction.
This is what visitors to the MIT Lewis Music Library experience when they interact with two new digital installations, “Doodle Tunes” and “Sounds from the Mouth,” created by 2022-23 Center for Art, Science and Technology (CAST) visiting artist Andreas Refsgaard in collaboration with music technology and digital media librarian Caleb Hall. The residency was initiated by Avery Boddy, head of the Lewis Music Library, who recognized Refsgaard’s flair for revealing the playfulness of emerging technologies. The intricacies of coding and machine learning may seem daunting to newcomers, but Refsgaard’s practice as a creative programmer, interaction designer, and educator seeks to open the field to everyone. Featuring workshops, artist talks, class visits, and an exhibition, the residency was infused with his distinctive sense of humor – a blend of lively quirkiness and easygoing levity.

Video: Machine learning and the arts with MIT visiting artist Andreas Refsgaard
Learning through laughter
Based in Copenhagen, Refsgaard is an inventive figure in machine learning. “I’m interested in the ways we can express ourselves through code,” he explains. “I love making unconventional connections between inputs and outputs, where the computer acts as a translator – a gadget might let you play music with your eyes, or a love poem might be born out of a picture of a burrito.” Refsgaard’s brand of innovation is not about solving grand problems or launching world-changing startups. Instead, he aims simply to get the ball rolling, providing accessible open-source templates that prompt new creative ideas and applications.
Programmed by Refsgaard and featuring a custom set of sounds created by Hall, “Doodle Tunes” and “Sounds from the Mouth” demonstrate how original compositions can emerge from a combination of spontaneous human gestures and algorithmically generated output. In “Doodle Tunes,” a machine learning algorithm is trained on a dataset of drawings of different instruments: piano, drums, bass guitar, and saxophone. When a user draws one of these instruments on a touch screen, a corresponding sound is generated; the more instruments you add, the more layered the composition. “Sounds from the Mouth” works with face tracking and snapshots: when a participant faces the webcam and opens their mouth, an image of their mouth is captured and set loose on screen, triggering piano notes as it moves. To try the projects out for yourself, scroll to the end of this article.
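To make the “Doodle Tunes” idea concrete, here is a minimal Python sketch of the general pipeline – not Refsgaard’s actual code – in which a small convolutional network classifies a doodle into one of the four instrument classes and the result cues an audio loop. The network architecture, the 64x64 input size, the weights file, and the loop filenames are illustrative assumptions.

```python
# Illustrative sketch only: a tiny classifier mapping a doodle to an instrument loop.
import torch
import torch.nn as nn
import torch.nn.functional as F

INSTRUMENTS = ["piano", "drums", "bass guitar", "saxophone"]

class DoodleNet(nn.Module):
    """Tiny CNN for 64x64 grayscale doodles with four instrument classes."""
    def __init__(self, num_classes=len(INSTRUMENTS)):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 64x64 -> 32x32
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 32x32 -> 16x16
        return self.fc(x.flatten(1))

def classify_doodle(model, doodle):
    """Return (instrument, confidence) for a (1, 1, 64, 64) tensor with values in [0, 1]."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(doodle), dim=1)[0]
    idx = int(probs.argmax())
    return INSTRUMENTS[idx], float(probs[idx])

if __name__ == "__main__":
    model = DoodleNet()
    # A real installation would load trained weights here, e.g.:
    # model.load_state_dict(torch.load("doodle_classifier.pt"))  # hypothetical file
    doodle = torch.rand(1, 1, 64, 64)  # stand-in for a touchscreen drawing
    instrument, confidence = classify_doodle(model, doodle)
    # The predicted class would cue the matching audio loop (filenames are hypothetical).
    print(f"Detected {instrument} ({confidence:.0%}) -> start {instrument.replace(' ', '_')}_loop.wav")
```

In the installation itself, each recognized instrument simply cues one of Hall’s pre-recorded sounds, so the music grows more layered with every new doodle.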
Unlimited libraries
Saxophone squeaks and digital drum beats aren’t the only sounds emanating from the areas where the projects are installed. “My office is nearby,” Hall says, “so when I suddenly hear laughter, I know exactly what’s up.” This new sonic dimension fits the ethos of the Lewis Music Library as a whole – conceived as a campus hub for sonic experimentation, the library was never intended to be completely silent. Refsgaard’s residency reflects a new emphasis on forward-looking programming led by Boddy, as the library’s strategy shifts toward digital collections and music technology.
“In addition to serving as a space for quiet study and access to physical resources, we want the library to be a place where users gather, collaborate, and explore together,” Boddy says. “This residency was very successful in that regard. Through the workshops, we were able to connect people from across the MIT community and their unique disciplines. We had people from the Sloan School of Management, the Schwarzman College of Computing, and Music and Theater Arts all working together, embracing the chaos, and creating tools that sometimes work… and sometimes don’t.”
Error and serendipity
One of the signature features of Refsgaard’s work is the integration of error. The occasional glitches are part of the charm, and they also temper the hype around AI; an algorithm is only as good as its dataset, and that dataset is shaped by human biases and omissions. During a public artist talk, “Machine Learning and the Arts,” audience members were introduced to Refsgaard’s offbeat body of work, including projects such as Booksby.ai (an online bookstore of science fiction novels generated by artificial intelligence), Is It Funny? (an attempt to distinguish between “funny” and “boring” images), and Eye Conductor (an interface for playing music through eye movements and facial gestures). Glitches in the library installations were frankly acknowledged (it’s true that “Doodle Tunes” sometimes mistakes a doodle of a saxophone for a squirrel), and Refsgaard encouraged audience members to suggest possible improvements.
This open-minded attitude set the tone for the “Art, Algorithms, and Artificial Intelligence” and “Machine Learning for Interaction Designers” workshops, which were designed to suit newcomers as well as curious experts. Refsgaard’s visits to music technology classes explored the ways human creativity can be amplified through machine learning, and how to navigate the sliding scale between artistic intent and unexpected results. “As I see it, success is when participants engage with the material and come up with new ideas,” he says. “The first step in learning is understanding what is being taught – and the next step is applying that understanding in ways the teacher wouldn’t have expected.”
Uncertainty and opportunity
Refsgaard’s work embodies some of the core values and questions central to the evolution of the MIT Libraries – issues of digitization, computation, and open access. By choosing to give free access to his playful demos, he relinquishes ownership of his ideas; a machine learning model may serve as a learning tool for a student, or it may be taken up by a company. For Refsgaard, play is a way of grappling with the ethical implications of emerging technologies, and Hall found himself wrestling with these questions while creating the sounds for the two installations. “If you write the audio samples, but someone else arranges them into compositions, who owns the music? Or does the AI own the music? It’s a very interesting time to be working in music technology; we’re entering uncharted territory.”
For Refsgaard, uncertainty is the secret ingredient in his algorithmic craft. “I like to make things where I’m surprised by the end result,” he says. “I’m looking for that sweet spot between something familiar and something unexpected.” As he explains, too much surprise simply amounts to noise, but there is something amusing in the possibility that an algorithm might mistake a saxophone for a squirrel. The task of the creative programmer is to continually tune the relationship between human and machine capabilities – to find and follow the music.
“Doodle Tunes” and “Sounds from the Mouth” are on display at the MIT Lewis Music Library (14E-109) through December 20th. Click the links to interact with the projects online.