Research Project Spotlight: Sonic Production, Intelligence, Research, and Applications Lab

Miles Thorogood working with students on app development

Creative tasks such as graphic design, video production, game design, and music-making have grown to be the leading professional use of computers after communication. Recent initiatives in artificial intelligence (A.I.) and machine learning are driving a renaissance in how creative tasks are carried out by amateurs and professionals alike.

With this in mind, Dr. Miles Thorogood is working to establish a new lab, the Sonic Production, Intelligence, Research, and Applications Lab (SPIRAL), supported by the Canada Foundation for Innovation (CFI).

CFI provides infrastructure funding to create research centres and labs on campus, which can include renovating or building space, purchasing equipment and software, and covering operational costs to get spaces up and running.

SPIRAL will be located in the new Innovation Annex Building as part of the cutting-edge research infrastructure being developed at UBCO. With this funding, the space will be renovated and equipment purchased, including a 900-square-foot performance and immersive experience space and a dedicated sound control room boasting a 36-channel audio dispersion system, a 10-foot high-definition video projection screen, and VR headsets for media production and perception-focused research.

It is widely recognized that augmenting and simulating human creativity is the next frontier in A.I., explains Thorogood. “I am interested in exploring sound design to develop state-of-the-art models and algorithms for new computational tools in the video game, animation, and virtual reality industries.”

Dr. Thorogood is an assistant professor teaching media studies, with a background in audio engineering, computer science, and music information retrieval.

“The goal with this new space is to research and develop new computational models to simulate cognitive tasks of the creative process in sound design,” he says.

Projects in SPIRAL will focus on the development of the next generation of A.I. computer-assisted tools for sound design production in the growing field of video games and virtual reality.

“We will work to align sound design with current trends in creative A.I. by developing new state-of-the-art sound analysis and generation algorithms and innovative production tools,” he says.

The impact of Dr. Thorogood’s research has the potential to transform industry. Instead of searching through hundreds of hours of recordings to retrieve small sections of audio that fulfill a creative brief, a sound designer will simply set the parameters of the salient criteria they are seeking and the machine will return alternatives in mere seconds. Producers of VR and augmented reality content can enter descriptions of an environment and the machine will generate appropriate immersive sound. Video game developers will label objects in an environment, and the system will autonomously generate a sound scene that also responds to the player and the narrative.