In the vibrant landscape of music production, transforming an unmixed track into a professionally polished masterpiece is a significant challenge. This is especially true for emerging creators, who often face a steep learning curve and the need for expensive equipment. The art and science of mix engineering are crucial in this journey, yet remain elusive for many. Our initiative, a collaborative endeavour between RoEx and Queen Mary University of London, aims to democratise this process through a pioneering Style Transfer tool for Music Production (ProStyle).
At its essence, ProStyle is designed to encapsulate the expertise of a professional mix engineer, transferring it onto an unmixed track. This innovative venture navigates through the complexity of discerning various musical instruments and audio effects, aiming to automate a significant and challenging aspect of music production.
The cornerstone of our methodology is a symbiotic collaboration with professional mix engineers, who will license their mixing expertise to train the machine learning algorithms powering ProStyle. This approach is significantly enriched by the world-class expertise of our partners at Queen Mary University of London, known for their innovative research in audio engineering, audio signal processing, and machine learning. This ethical engagement not only ensures the authenticity and quality of the style transfer but also opens new avenues for professionals to monetise their skills in a rapidly changing workplace.
Our project transcends mere technical automation; it's a harmonious blend of human creativity and cutting-edge AI, aimed at propelling the music industry into a new era of innovation, inclusivity, and ethical collaboration. The ripple effects of this innovation are manifold: democratising music production, providing a time-efficient and affordable solution for high-quality mixing, and setting a precedent for ethical AI applications in the music industry.
Through this initiative, RoEx, along with Queen Mary University of London, reaffirms a steadfast commitment to revolutionising music creation. We envision a future where the fusion of AI with human expertise not only enhances the creative process but also cultivates a collaborative and ethical ecosystem in the ever-evolving digital soundscape of music production.
49,362
2023-07-01 to 2023-12-31
Collaborative R&D
There is high demand for well-produced recordings of live musical performances. Such recordings can be made available to attendees immediately after the event, streamed to a worldwide audience, or distributed as new commercial releases. However, live music production typically involves many different sources, each with different characteristics. Each needs to be heard simultaneously, yet contribute to a clean, balanced blend of sounds. Achieving this is labour-intensive and requires a professional sound engineer.
RoEx has protected technology, based on years of university research, that automates much of the audio and music production process. The content of, and relationships between, all input channels are analysed to determine the best way in which they can be combined. The technology exploits best practices in audio engineering and advanced knowledge of human sound perception to establish constraints that optimise the final mixed output. However, it has so far only been applied to post-production, which limits its market potential.
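To make the general idea concrete, the sketch below shows a deliberately simplified version of constraint-driven level balancing: each channel's content is analysed (here, just its RMS level), and per-channel gains are chosen so every source sits near a shared target level before summing. This is only an illustrative toy, not RoEx's actual method; the function names, the RMS-matching rule, and the `target_rms` and `max_gain` parameters are all assumptions for the example, whereas the real system uses perceptual models and more sophisticated optimisation.

```python
import math

def balance_gains(tracks, target_rms=0.1, max_gain=4.0):
    """Toy channel analysis: choose a gain per channel so its RMS level
    approaches a shared target, clamped to a plausible range.
    (Illustrative only -- the real system optimises against
    perceptually motivated constraints, not raw RMS.)"""
    gains = []
    for samples in tracks:
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        gain = target_rms / rms if rms > 0 else 1.0
        gains.append(min(gain, max_gain))
    return gains

def mix(tracks, gains):
    """Sum the gain-scaled channels into a single mix bus."""
    length = len(tracks[0])
    return [sum(g * t[i] for g, t in zip(gains, tracks))
            for i in range(length)]

# A loud and a quiet channel end up contributing equally to the mix.
tracks = [[0.2, 0.2, 0.2, 0.2], [0.05, 0.05, 0.05, 0.05]]
gains = balance_gains(tracks)
output = mix(tracks, gains)
```

Even this crude rule captures the core workflow: analyse each source, derive control values from constraints, then render the combined output.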
The Automatic Live Music Mixing System (ALMMS) project aims to revolutionise live music engineering by delivering an intelligent system that automatically mixes live, multitrack music with minimal interaction from the sound engineer.