Four fairly simple but intelligent tools for writing music for your movie masterpiece
AI-powered creativity now extends to music-writing software that anyone can use
In the good old days, you could spend a day filming a home video with your family, score it with cuts from your favorite songs, and upload it to the Internet for everyone to see without any problems. But once robotic copyright-enforcement tools began scanning new uploads, the public gradually grew more concerned about proper song licensing, and now you can't use just any music you like in a publicly available video.
What is an amateur, at best a hobbyist musician, to do? I played in a marching band in college, so I understand rhythm, phrasing, and tempo, but my ability to create music stalled somewhere around the high school level.
Fortunately, those of us who make films and podcasts and struggle with music now have dedicated robots. Several good AI projects have appeared recently - perhaps the most notable is Sony's Flow Machines, whose debut album came out in January - and these tools are slowly but surely making their way out of research labs and professional studios and into public hands.
So when I recently found myself alone for the weekend, with only my dog and an evening appetite for company, I decided it was time to return to making short films. My documentary about Ernie the Shih Tzu is scored entirely with smart music-writing tools that anyone can access right now.
Chrome Song Maker

Song Maker, released March 1, is the newest of these melody-writing tools. Technically it is not an AI composer; it is a tool that simplifies composition to the point where anyone can do it, whether or not you know about things like pitch or rhythm. It greets users with a grid: measures along the x axis, pitch along the y axis, and filling in any square produces a sound. The choice of instruments is limited to five tonal voices (piano, synthesizer, strings, marimba, woodwind) and four percussion kits (electronic, xylophone, drums, and conga). You can enter notes with the mouse, with a MIDI keyboard, by singing into the microphone, or even with the computer keyboard.
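The grid described above is easy to picture as a data structure: a set of filled cells, where the x coordinate is the time step and the y coordinate is the pitch row. This is a minimal sketch of that idea, assuming an illustrative C-major pitch row and an 8-step loop; the names and representation are my own, not Song Maker's actual internals.

```python
# Toy model of a Song Maker-style grid: time steps along x, pitches
# along y, and a filled cell means "play this note at this step".
# PITCHES and the cell set below are illustrative assumptions.
PITCHES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4"]  # y axis, bottom to top
STEPS = 8                                             # x axis: beats in the loop

# Filled cells as (step, pitch-row) pairs, like clicking squares in the grid.
filled = {(0, 0), (2, 2), (4, 4), (6, 2)}

def step_notes(filled, step):
    """Return the pitch names sounding at a given time step."""
    return [PITCHES[row] for (s, row) in sorted(filled) if s == step]

# A sequencer would walk the steps in a loop and trigger each cell's note.
for step in range(STEPS):
    print(step, step_notes(filled, step))
```

A real sequencer would simply iterate this loop in time, which is why the tool can stay so simple: the whole composition is just a set of filled squares.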
Song Maker is part of Chrome Music Lab, Google's browser-based project for learning the basics of music. The Music Lab already hosts half a dozen experiments covering topics from oscillators and rhythm to arpeggios. And although they are aimed roughly at the elementary-school level, Song Maker has enough settings for any beginning composer to work effectively. You can loop up to 16 measures, switch to an exotic time signature like 5/4 or 12/8, split beats into triplets or sixteenths, choose the key and starting note, and span three octaves.
For my task, though, Song Maker's limit of one tonal instrument and one percussion instrument was a problem. I needed something juicier, something in spy-thriller territory, for the moment the dog's work is revealed; ideally I wanted to layer high-octave strings over my basic beat. But Song Maker won't let you overlay tracks to build a more serious composition, even if you match the number of measures and the tempo. For that you need a simple audio editor and a way to record the sound coming out of the browser. Song Maker's limitations cut both ways: it's simple enough that it's hard to make anything unlistenable, but compositions hit a ceiling of complexity.
Google Magenta tools

Google wouldn't be Google if it didn't have several projects aimed at reinventing how music is made. Compared with Chrome Music Lab, Magenta takes a more direct approach.

The project debuted in 2016 at Moogfest (an increasingly popular conference for music-technology enthusiasts). Magenta aims to harness AI and machine learning to give everyone the ability to create music.
"The goal of Magenta is not just to create new music-generating algorithms, but to 'close the creative loop,'" the team wrote in the announcement of the NSynth instrument. "We want to equip creators with tools built with machine learning that inspire future research directions. Instead of replacing human creativity with AI, we want to give our tools a deeper understanding, so that they are more intuitive and inspiring."
The NSynth instrument, which lets you cross two instruments to get a new sound, is just beyond my abilities. The same goes for the other tools available to anyone who wants to dig deeper into Magenta's work - the team releases all of its tools and models freely on GitHub. For my purposes, I picked two working tools from the project: Infinite Drum and AI Duet.
Infinite Drum is essentially a drum machine built from everyday sounds organized by machine learning. "The computer was given no descriptions or tags - only the sounds," the GitHub description reads. "Using t-SNE, the computer places similar sounds closer together. You can use the map to explore neighborhoods of similar sounds and even build a beat with the sequencer."
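The t-SNE step described in that quote is a standard dimensionality-reduction technique, and it can be sketched in a few lines. This is not Infinite Drum's actual pipeline: the random feature matrix below stands in for per-sound audio features (which a real pipeline would extract from recordings), and the parameters are illustrative.

```python
# Hypothetical sketch of organizing sounds on a 2-D map with t-SNE,
# the technique Infinite Drum's description mentions.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in for per-sound feature vectors (e.g. averaged spectral features);
# a real pipeline would compute these from the audio files themselves.
features = rng.normal(size=(200, 20))  # 200 sounds, 20 features each

# t-SNE embeds the high-dimensional features into 2-D so that similar
# sounds land close together on the map the user explores.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
print(coords.shape)  # one (x, y) map position per sound
```

Each row of `coords` becomes a point on the interactive map, so browsing a neighborhood of the map means browsing acoustically similar sounds.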
From the user's perspective, the tool is extremely simple. You scroll through a list of sound categories and pick four sounds to combine. You can choose a random mix or change the tempo, but mostly you just hit "play" and get a looping beat.
AI Duet, by contrast, helps you craft a melody. You play some notes, whatever your level of piano skill, and the instrument, using machine learning, plays back a response that tries to continue your melody. The AI was trained on a large set of songs so it can answer your input in a matching key and rhythm. As the tool's developer, Yotam Mann, puts it, "It's fun even to just mash keys randomly. The neural network tries to produce something coherent from any input."
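The call-and-response loop behind AI Duet can be illustrated with a much simpler stand-in. The real tool uses a neural network trained on many songs; this sketch substitutes a first-order Markov chain over MIDI pitches, purely to show the "learn transitions, then continue the user's phrase" idea. All names and training phrases here are hypothetical.

```python
# Toy call-and-response melody generator: a Markov-chain stand-in for
# AI Duet's neural network, shown only to illustrate the interaction loop.
import random
from collections import defaultdict

def train_transitions(melodies):
    """Count pitch-to-pitch transitions across training melodies."""
    table = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)
    return table

def respond(table, user_notes, length=8, seed=0):
    """Continue the user's phrase by sampling learned transitions."""
    rng = random.Random(seed)
    note = user_notes[-1]
    reply = []
    for _ in range(length):
        nxt = table.get(note)
        note = rng.choice(nxt) if nxt else note  # fall back to repeating
        reply.append(note)
    return reply

# "Train" on two C-major phrases (MIDI note numbers), then answer a phrase.
table = train_transitions([[60, 62, 64, 65, 67], [67, 65, 64, 62, 60]])
print(respond(table, [60, 62], length=4))
```

The real model responds in kind to rhythm as well as pitch, but the shape of the interaction is the same: your phrase goes in, a learned continuation comes back.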
Like Chrome Music Lab, these are narrowly focused tools rather than full creative platforms, so to assemble the final product I still needed a separate audio-editing program.
Amper AI

If the tools above seem too complicated, don't worry. While Google pushes Magenta and Sony puts AI into the hands of real musicians, there's a project that lets everyone else make music: Amper AI.
It isn't the first publicly accessible AI composition project (the UK-based makers of Jukedeck, for example, have been at it since 2015), but it offers the best combination of simplicity, settings, and final quality. Last year Amper made headlines when YouTuber Taryn Southern released a single virtually indistinguishable from the songs that reach the Top 40. Southern is about to release an entire AI-recorded album called "I Am AI," with different tracks assembled with the help of tools including Amper, IBM Watson, Aiva, and Google Magenta.
"Our goal is to make music as good as John Williams's, and make it sound like it was recorded at Abbey Road Studios and produced by Quincy Jones," Amper CEO Drew Silverstein told me in December. "We hold ourselves to that musical standard. Of course, we haven't reached it yet; there's much more to do on the musical side before Amper is indistinguishable from music created by people, and at that point it will be an invaluable tool for creative people."
The company was founded by musicians, not programmers, so Silverstein and his colleagues see their AI as a tool rather than a replacement (which is why they collaborate with Southern instead of simply cranking out bot-made singles). They trained their AI on the work of real composers, much as AlphaGo trained on games from real matches. The result is a tool that, even in a simplified beta, offers plenty of settings and produces polished, usable melodies.
Amper currently offers two interfaces: Simple and Pro. The first lets you pick from predefined styles and set the length of the piece. The second starts with similar style parameters but then lets you dig in and change seemingly everything: the number of instruments, the types of instruments, the key, the timing, and so on.
At the 30-second mark of my video you can hear what the simple version of Amper did with the "modern folk" style I chose. Around 2:00 there's an inspirational track created in Amper Pro. I had imagined brass in the vein of "Gonna Fly Now," but current AI composers can't yet produce realistic-sounding horns. Even so, the result was far from the worst.
Will I be winning an Oscar for best original song with AI tools anytime soon? Probably not. But even at this early stage, these technological toys for lay composers work. They're simple enough that I don't have to spend a whole day on a 30-second melody, and I don't need to know any music theory at all. And they're good enough that I no longer mind that my movie about Ernie can't use music by Vagabon. Real musicians will probably always be hired first, even with AI composers around, but life is getting better for creative people who write music on the weekend.