CCRMA’s Music and the Brain Symposium showcases new advances in music engagement research

July 24, 2017, 2:11 a.m.

The sounds of drums blended with discussion about music information research at the scenic Knoll during this year’s Music and the Brain Symposium. The event’s theme of “engagement” research honored findings by Stanford’s Music Engagement Research Initiative (MERI) and was evident in everything from the formal events to the instruments available for attendees to play even before the symposium officially began.

Covering everything from current research to present and future applications, the symposium brought together people from the often-separate fields of music and engineering for a conversation about new ways to engage audiences at their intersection. The symposium is held every year by the Center for Computer Research in Music and Acoustics (CCRMA).

The theme of “engagement” took on different meanings throughout the symposium, but to speaker Jacek Dmochowski, an assistant professor of biomedical engineering at the City College of New York, viewers’ engagement with visual stimuli such as movie clips offers glimpses into their emotional experiences. Dmochowski discussed his research and how he hoped his studies could allow researchers to interpret people’s emotions non-verbally, possibly creating avenues for “listening to” people who are unable to communicate with the outside world.

“The holy grail of this kind of research is to allow people that aren’t able to speak or communicate–people in vegetative states, locked-in syndrome–to be able to communicate with the outside world,” Dmochowski said. “If somebody that is unable to communicate because of paralysis, for example, may still be able to control their thoughts. If we could read out essentially their intent, what they’re imagining, that would be obviously a very meaningful thing to accomplish.”

Experts at MERI are pursuing similar avenues of research. Speaker Blair Kaneshiro BA ’05, MA ’10, MS ’11, Ph.D. ’16, now a postdoctoral scholar at MERI, discussed how in recent years her team has been able to show how changes to musical elements, such as rearranging the music and changing its resonance, can impact a listener’s level of engagement.

Although not yet published, Kaneshiro said that the data seem to suggest that people’s engagement increases with music that is unpredictable, at least the first time they listen to it. Working on this research has changed how she thinks about music, too.

“I start wondering how [the music I listen to] would perform in an experiment, and how it might be useful or how it could be manipulated,” she said.

Kaneshiro, who was also an event organizer, shared that she was extremely excited to bring together all of the speakers through the event.

“I knew that if they could just get together in the same room, some really interesting conversations would happen,” she said. “Then maybe some new research ideas could come out of it. So I’m hoping that we’ll see some new collaborations.”

Industry experts were also represented: two guest speakers, Douglas Eck and Sageev Oore, came from Google’s Magenta project, which is developing machines that compose music. According to the speakers, the group has made progress in making these machine-made compositions sound more organic. Oore and Eck, both musicians (Eck is a programmer as well), stressed that they weren’t aiming to replace musicians with their product but rather to provide an additional tool for musicians to use when creating music, bringing technology and music together. They hope to make these machines more accessible as the digital world expands.

“As computation moves from the server room to the desktop to the laptop to in your pocket, by analogy, we take machine learning and move it closer to people’s lives and ask musicians to actually create with this stuff,” Eck said.

Psychotherapist Renée Burgard ’73 and her friend K. Gopinath ’88, who is now with the Indian Institute of Science, came in search of answers to their own personal questions about music in their lives.

Burgard’s favorite music is an Indian musical genre called Rasa, which she says gives her a feeling like no other Western music does, and she came hoping to figure out why. On the other hand, Gopinath is interested in how his own experience with music blends with research. He is interested in whether the “deep sense of feeling” he has when listening to Indian music has a mathematical structure.

The possible applications of engagement research were also on display in the poster session, where Srishti Birla and Karanver Singh, who previously worked with CCRMA as an intern, showcased their project providing dance therapy for kids with autism.

Both dancers, Birla and Singh wanted to share their passion for dance with the community and realized that many autistic kids never had an avenue to explore dance that was geared towards their needs.

“We had students who had social issues and anxiety issues, and with the help of dance and this therapy and this class, parents actually reported that it made a difference outside of the classroom,” Singh said. “One student actually uses our dance routine as a tool to calm himself down from his anxiety now.”

Birla said that the best part of the project was seeing the kids’ joyful reactions.

“It was really nice to see all the excitement in the children,” she said. “For one of the children, her mom literally was like ‘Please start another session because she really really wants to come back,’ and it made us feel so nice that all of our time and all of our hard work went into something that they really really liked.”

From research on people’s engagement with media to applications of that research as tools for communication in many forms, the symposium brought together music research and experimentation, exploring how engagement measurements can inform the future of music.


Contact ZaZu Lippert at zazulippert ‘at’ gmail.com.
