While many manufacturers and technology companies enthusiastically point to their AI-based products as proof of their cutting-edge status, explorations of computer-aided music creation have been underway since at least the 1950s.
It should be made clear from the outset that the term “AI”, when applied to music creation, encompasses a few quite different things. There is the development of so-called “intelligent” algorithms that are now commonly found in mixing or audio editing plugins.
Many of these actually rely on presets distilled from human experience to apply frequency correction or normalization. Elsewhere there are more authentic virtual brains, such as those driving Sonible’s stable of smart AI plug-ins, or LANDR’s “Synapse” mastering platform. These analyze the specifics of your audio and determine the best course of action, based on their particular area of expertise.
Play it again, RAM
Beyond music production, there is the arguably separate universe of composition, where AI has made a major dent. There has been noticeable growth in the number of music-generating services capable of turning out strikingly convincing music without breaking a sweat. It’s all based on an abundance of theoretical know-how, coupled with the ability to tap into vast archives of pre-existing music.
A prime example is Aiva, a subscription music engine that relies on deep learning algorithms to improve its ability to create stunning music, and can be adapted to a huge range of genres and styles. Aiva’s virtual brain is loosely modelled on how a human brain works – a neural network that recalls past experiences and past problem-solving to continually refine its results. This is often known as “reinforcement” learning, in which the AI works its way through massive amounts of data (a classical music archive, in Aiva’s case).
This brain is able to recognize patterns and commonalities, such as chord structure, melody construction, and typical arrangement choices. Armed with this knowledge, it is then able to create a similarly structured variant. In 2022, these AI-generated tracks are increasingly indistinguishable from music composed by human beings.
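To make the idea of pattern-learning concrete, here is a deliberately tiny sketch of my own – not Aiva’s actual architecture, which uses deep neural networks – that learns chord-to-chord transitions from a toy “archive” of progressions and samples a new, similarly structured one:

```python
import random

# Toy "training corpus": a few chord progressions standing in for a music archive.
corpus = [
    ["C", "G", "Am", "F", "C"],
    ["C", "F", "G", "C"],
    ["Am", "F", "C", "G", "Am"],
]

# Learn transition counts: which chord tends to follow which.
transitions = {}
for progression in corpus:
    for current, nxt in zip(progression, progression[1:]):
        transitions.setdefault(current, []).append(nxt)

def generate(start="C", length=8, seed=None):
    """Sample a new progression by walking the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(transitions[out[-1]]))
    return out

print(generate(seed=1))
```

Every chord in the output follows another chord it has actually followed in the corpus, which is the miniature version of “recognizing commonalities and creating a similarly structured variant”.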
The idea of a machine capable of generating music dates back decades, with a major first step being the programming of the Illiac Suite (String Quartet No.4) by university professors Lejaren Hiller and Leonard Issacson. This initial step would eventually lead to David Cope’s “Experiments in Musical Intelligence” (EMI) program in 1997. This was intended to initially analyze the composer’s original scores and initiate new variations based on them to aid his productivity.
Before long, the EMI software concept was able to stunningly reproduce the intricacies of classical composers. “When I started working with Bach and other composers, I did it for one reason: to refine and help me understand what the style was. No other reason,” Cope explained. Despite its singular original purpose, the new paradigm of data-driven virtual music creation had been established.
While the rise of generative music platforms capable of easily creating out-of-the-box music has been controversial, particularly among soundtrack composers, the simultaneous emergence of machine learning tools that work alongside the composer and producer has been more warmly received. Tools such as Orb Plugins’ Producer Suite 3 seek to make composition easier by building new melodies, chord sequences, keys, arpeggios, and synth sounds that can dramatically enhance a project.
Rather than handing over every task to the AI, the user is asked to set the parameters within which the digital brain works. The AI will usually reshape its output based on the kinds of arrangements you concoct, serving up new diversions which, in the case of Orb, are meant to enhance the expression and cohesion of the resulting composition.
While music theory is one aspect of track building that AI can undoubtedly simplify, creating beats is also within its purview. One such plugin is Audiomodern’s Playbeat 3, a fantastic beat-making toolkit, complete with a deeply tweakable step sequencer. Playbeat’s intelligent algorithms work simultaneously to develop or totally remix your rhythm designs at incredible speed. This type of virtual suggestion-maker represents AI at perhaps its most constructively human, although the idea of Playbeat’s ever-watchful learning algorithms tracking your every move might be a bit unsettling for old-school sci-fi aficionados.
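The “develop or remix your rhythm” behaviour of such beat tools can be sketched in miniature. This is my own toy illustration, not Playbeat’s actual algorithm: it takes a 16-step kick pattern and toggles a few randomly chosen steps to propose a variation:

```python
import random

def remix(pattern, mutations=3, seed=None):
    """Toggle a few random steps of a step-sequencer pattern to suggest a variation."""
    rng = random.Random(seed)
    new = list(pattern)
    for step in rng.sample(range(len(new)), mutations):
        new[step] = 1 - new[step]  # flip hit <-> rest
    return new

# A basic four-on-the-floor kick pattern (1 = hit, 0 = rest).
kick = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
print(remix(kick, seed=7))
```

The original groove survives in most steps while a handful change, which is the essence of a suggestion engine: recognizably yours, but nudged somewhere new.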
What are the chances?
One of the main benefits of AI in composition is its ability to conjure up new ideas beyond the ruts we all find ourselves in over the years. The imperative to shake up standard creative paths – exemplified by the chord-sequence suggestions of software like Amadeus Code and the complex arrangement-shaping of Orb Composer – long predates the advent of AI.
In the early 1950s, experimental composer John Cage created his extraordinary Music of Changes by consulting the ancient divination text, the I Ching, and applying its complex symbol system to musical elements like tempo, pitch, dynamics and more.
Fast forward to the 1970s, and ambient titan Brian Eno used the decidedly more analogue “Oblique Strategies” cards, each containing a written instruction that he or other musicians could interpret in interesting ways. The intention was to take them out of their comfort zone.
Examples included “Honor your mistake as a hidden intention” and “Use an old idea”. Cage and Eno’s forays into outsourcing decision-making laid the groundwork for how artificial intelligence would come to be trusted to guide contemporary music makers. Although not computerized, these approaches were precursors to many of the principles behind today’s AI composition tools.
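These chance operations translate naturally into code. As a hedged illustration – the parameter mappings below are invented for the example and are not Cage’s actual charts – one can cast a random I Ching-style hexagram number and map it onto musical parameters, or draw a random instruction card:

```python
import random

rng = random.Random(42)

# Cast a hexagram (1-64) and map it onto musical parameters.
# These mapping tables are invented for illustration, not Cage's actual charts.
hexagram = rng.randint(1, 64)
pitches = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
pitch = pitches[hexagram % 12]
tempo = 40 + (hexagram % 6) * 24           # 40-160 bpm in coarse steps
dynamic = ["pp", "p", "mf", "f", "ff"][hexagram % 5]
print(hexagram, pitch, tempo, dynamic)

# Draw an instruction card, Oblique Strategies-style
# (the two examples quoted above).
cards = ["Honor your mistake as a hidden intention", "Use an old idea"]
print(rng.choice(cards))
```

The composer still decides what the result means – exactly the division of labour that today’s AI-assisted tools preserve.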
At MusicRadar, we are determined to focus on the good that AI and intelligent algorithms can bring to the table, but many still balk at the idea of a future in which a multitude of mechanical minds carve out and retrace our creative footsteps. While music-generating platforms draw considerable scorn from some, others direct their ire at the automation that intelligent algorithms bring.
Mastering platforms come in for particular criticism. As continued refinement makes algorithmic platforms such as LANDR and eMastered ever smarter, the companies themselves are offering an olive branch to traditional mastering engineers. As Colin McLoughlin of eMastered said in an interview with The Verge, “For the absolute best mastering, a traditional mastering engineer will always be the…”
Ode to code
As these myriad AI algorithms continue to learn and increasingly infiltrate every facet of music creation, it’s likely that machine-assisted workflows will become ever more normalized over the next few decades. While there are legitimate conversations to be had about the impact of generative music platforms on professional songwriters, AI-assisted algorithms can undoubtedly be a major benefit for all.
Over the coming days and weeks on MusicRadar, we’ll dive deep into some of the best AI-powered software, platforms, and apps currently available, and illustrate how and why the spread of AI in the world of music tech has become such a powerful way to save time and spark ideas.
6 innovations in music AI
1. ILLIAC I (1956)
The unveiling of Lejaren Hiller’s Illiac Suite was a foundational moment in the development of AI. The computer that composed the piece – the ILLIAC I – was the very first “supercomputer” used by an academic institution, in 1956. It composed the suite by working through an array of algorithmic choices provided by Hiller. Although baffling to many at the time, the computer’s work is now considered remarkably innovative.
2. Verbasizer by David Bowie (1995)
An artist always ahead of his time, David Bowie decided to modernize his “cut-up” lyric-writing technique in the 1990s. By 1995, many of Bowie’s lyrics were being generated in tandem with this simple lyric-mixing software. While not “smart” per se, Bowie’s openness to creative collaboration with technology while writing offered a tantalizing glimpse into the future.
3. EMI (1997)
David Cope’s Experiments in Musical Intelligence program was originally designed to offer new creative avenues when inspiration was lacking, but what Cope stumbled upon was a way to emulate the style of any artist whose work was fed into the software. By cleverly recognizing patterns, the software was able to generate new compositions that reflected that style. Computer-generated music had been born.
4. Continuator (2001)
First designed in 2001, the Continuator was machine learning software that collected data on how musicians played and learned to intelligently extend their performances, improvising in real time using that data as a model. Its inventor, François Pachet, would go on to become a major figure in music research at Spotify.
5. Zynaptiq Adaptiverb (2016)
Among the first major applications of AI in music-mixing plugins, Zynaptiq’s Adaptiverb inspects the audio source and, using its Bionic Sustain resynthesizer, creates a perfect, bespoke reverb tail from hundreds of neural-network oscillators. This set a new precedent by highlighting the practical benefits of machine learning in production.
6. iZotope Assistive Audio Technology (2016)
A company that has embraced the time-saving potential of AI like few others, iZotope relies on its helpful assistive audio technology as the secret sauce behind such flagships as its audio-repair suite RX and mastering platform Ozone. By intelligently analyzing audio and applying quick fixes, iZotope has likely saved professionals years of valuable studio time, collectively.