"Machine Sound": synthesizers based on neural networks

    The developers of Magenta, a research project at Google, have presented the open-source synthesizer NSynth Super. It is based on a machine learning system that blends several preloaded samples (for example, the sound of a guitar and a piano) into a new sound with unique characteristics.

    Below, we take a closer look at NSynth Super and other "algorithmic composers."


    Photo: Ta Da / CC

    More about NSynth Super


    NSynth Super has a touchscreen that shows a square "work surface." The musician selects several instruments whose sound will be used to create a new one and assigns them to the corners of this square.

    During a performance, the musician controls the resulting sound by moving a pointer within the work surface. The output sample is a combination of the original sounds in proportions that depend on how close the cursor is to each corner.
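    The corner-mixing scheme can be illustrated with bilinear interpolation: the closer the cursor is to a corner, the higher that corner's weight. This is a minimal sketch of the idea, not the actual NSynth Super implementation; the function name and corner ordering are our own.

```python
def corner_weights(x, y):
    """Bilinear weights for the four corners of a unit square.

    (x, y) is the cursor position in [0, 1] x [0, 1]. Returns weights
    for the corners (bottom-left, bottom-right, top-left, top-right);
    they always sum to 1, so the mix stays normalized.
    """
    return [
        (1 - x) * (1 - y),  # bottom-left
        x * (1 - y),        # bottom-right
        (1 - x) * y,        # top-left
        x * y,              # top-right
    ]
```

    For example, a cursor sitting exactly in a corner yields weight 1.0 for that corner's instrument and 0 for the other three, while the center of the square mixes all four equally.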

    New samples are synthesized by the NSynth machine learning algorithm. It was trained on about 300,000 instrument sounds using the open-source libraries TensorFlow and openFrameworks. Its work also relies on the WaveNet model.

    To generate new samples, NSynth analyzes 16 characteristics of the incoming sounds. These are then linearly interpolated to create mathematical representations of each audio signal. The representations are decoded back into sounds that combine the acoustic qualities of the inputs fed to the algorithm.
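    The core of this step is linear interpolation between the encoded representations. The following is a simplified sketch of that operation on generic latent vectors, assuming the encoder's output can be treated as a NumPy array; it is not NSynth's actual code.

```python
import numpy as np

def interpolate_latents(z_a, z_b, alpha):
    """Linearly interpolate between two latent codes.

    z_a, z_b: latent vectors produced by an encoder for two sounds
    alpha:    mixing ratio in [0, 1] (0 -> pure A, 1 -> pure B)
    """
    return (1 - alpha) * z_a + alpha * z_b

# Example: the midpoint between two hypothetical 16-dimensional codes
z_guitar = np.zeros(16)
z_piano = np.ones(16)
z_mix = interpolate_latents(z_guitar, z_piano, 0.5)
```

    In the real system, the interpolated code would then be passed to a decoder that reconstructs audio; here the vectors simply stand in for those representations.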

    You can use NSynth Super with any MIDI source: for example, a DAW, a synthesizer, or a sequencer. You can see how NSynth Super works in this video, in which the performer "mixes" the sounds of a sitar, an electric piano, and other instruments:


    NSynth Super is an experimental tool, so it will not be sold as a commercial product. However, its source code and assembly instructions are published on GitHub.

    Who else uses machine learning to create music


    The Magenta project is also working on other machine learning technologies. One of them is the MusicVAE model, which can "mix" melodies. Several web applications have already been built on top of it: Melody Mixer, Beat Blender, and Latent Loops. MusicVAE (and other Magenta models) are collected in the open-source library Magenta.js.

    Other companies are also working on music-generation algorithms. For example, Sony Computer Science Laboratories is developing the Flow Machines project. Its AI system can analyze various musical styles and use this knowledge to create new compositions. One example of its work is the song "Daddy's Car," written in the style of The Beatles.


    Several applications have been created within the Flow Machines project: for example, FlowComposer, which helps musicians write music in a given style, and Reflexive Looper, which fills in missing instrumental parts on its own. The music album Hello World was even recorded and released with the help of Flow Machines.

    Another example is the startup Jukedeck. It is building a tool for generating compositions with a given mood and tempo. The company continues to develop the project and invites developers and musicians to collaborate. Here is an example of a composition created by Jukedeck's machine learning algorithms:


    A similar tool is being built by Amper. The user chooses the mood, style, tempo, and duration of a composition, as well as the instruments it will be "played" on. The app then synthesizes music to match these requirements.

    The company Popgun is also working on AI systems for writing music: it develops algorithms that can compose original pop songs. Research in this area is also being carried out by the streaming giant Spotify. Last year, the company opened a laboratory in Paris that will build tools based on AI systems.

    Will AI replace composers?


    Although a number of companies are developing music-generation algorithms, their representatives emphasize that these tools are designed not to replace musicians and composers, but rather to give them new capabilities.

    In 2017, the American singer Taryn Southern released an album recorded with the help of artificial intelligence systems. Southern used tools from Amper, IBM, Magenta, and AIVA. In her words, the experience was like working with a person who helps you create music.

    At the same time, machine learning algorithms can be useful not only to composers but to other music industry specialists as well. Neural networks outperform humans at classifying objects, and music streaming services can use this to determine the genres of songs.
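    As a toy illustration of genre classification, here is a nearest-centroid classifier over hypothetical per-track features (tempo, spectral centroid, and so on). The features, labels, and function names are all invented for this sketch; real streaming services use far more sophisticated models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature vectors for two made-up "genres" with
# well-separated feature means (8 hypothetical features per track).
X0 = rng.normal(loc=-2.0, size=(100, 8))
X1 = rng.normal(loc=2.0, size=(100, 8))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 100 + [1] * 100)  # 0 = "rock", 1 = "electronic"

def nearest_centroid_predict(X_train, y_train, X_test):
    """Assign each track to the genre whose mean feature vector is closest."""
    labels = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in labels])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return labels[dists.argmin(axis=1)]
```

    With well-separated synthetic clusters, even this simple rule classifies almost every track correctly; the point is only to show the shape of the task, not a production pipeline.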

    Moreover, machine learning algorithms can "separate" vocals from accompaniment, create musical transcriptions, or mix tracks.
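    Many separation systems work by estimating a time-frequency mask: each bin of the mixture's magnitude spectrogram is scaled by the estimated share of vocal energy in it. This is a minimal sketch of that masking step with made-up array names; in a real system the vocal estimate would come from a trained network.

```python
import numpy as np

def apply_soft_mask(mix_mag, vocal_est, eps=1e-8):
    """Apply a soft (ratio) mask to a mixture magnitude spectrogram.

    mix_mag:   magnitude spectrogram of the full mix (freq x time)
    vocal_est: estimated vocal magnitudes from some model (same shape)
    Returns the masked spectrogram, never exceeding the mixture itself.
    """
    mask = np.clip(vocal_est / (mix_mag + eps), 0.0, 1.0)
    return mask * mix_mag
```

    The clipping keeps the mask in [0, 1], so the extracted vocal part can never be louder than the original mixture in any time-frequency bin.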



    By the way, if you like reading about sound in a micro format, check out our Telegram channel:

    Amazing sounds of nature
    How to hear the color of
    Water Songs

    and longer stories in our blog on Yandex.Dzen:

    4 famous people who were interested in music
    11 interesting facts from the history of the Marshall brand
