Complete documentation is available at https://tensorflow.github.io/magenta-js/music.
For the Python TensorFlow implementations, see the main Magenta repo.
Here are a few applications built with @magenta/music:
- Piano Scribe by Monica Dinculescu and Adam Roberts
- Beat Blender by Google Creative Lab
- Melody Mixer by Google Creative Lab
- Latent Loops by Google Pie Shop
- Neural Drum Machine by Tero Parviainen
- Tenori-Off by Monica Dinculescu
We have made an effort to port our most useful models, but please file an issue if you think something is missing, or feel free to submit a Pull Request!
Piano Transcription w/ Onsets and Frames
OnsetsAndFrames implements Magenta's piano transcription model for converting raw audio to MIDI in the browser. While it is somewhat flexible, it works best on solo piano recordings. The algorithm takes about half the duration of the audio to run on most browsers, but due to a WebKit bug, audio resampling makes it significantly slower on Safari.
Demo Application: Piano Scribe
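For example, transcribing a recording in the browser might look like the sketch below (assuming the ES5 bundle is loaded as `mm`; the checkpoint URL follows the hosted-checkpoints pattern and may differ):

```js
// Transcribe an audio file (e.g. a File from an <input type="file">) to a NoteSequence.
const model = new mm.OnsetsAndFrames(
    'https://storage.googleapis.com/magentadata/js/checkpoints/transcription/onsets_frames_uni');

model.initialize()
    .then(() => model.transcribeFromAudioFile(audioFile))  // audioFile: a Blob/File you provide
    .then((ns) => console.log(`Transcribed ${ns.notes.length} notes.`));
```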
MusicRNN implements Magenta's LSTM-based language models, including MelodyRNN, DrumsRNN, ImprovRNN, and PerformanceRNN.
Demo Application: Neural Drum Machine
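Continuing a melody might look like this sketch (the checkpoint URL follows the hosted-checkpoints pattern; `quantizedMelody` is a quantized `NoteSequence` you provide):

```js
// Continue a quantized melody for 20 steps at temperature 1.0.
const musicRnn = new mm.MusicRNN(
    'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn');

musicRnn.initialize()
    .then(() => musicRnn.continueSequence(quantizedMelody, 20, 1.0))
    .then((continuation) => console.log(continuation));
```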
MusicVAE implements several configurations of Magenta's variational autoencoder, MusicVAE, including melody and drum "loop" models, 4- and 16-bar "trio" models, chord-conditioned multi-track models, and drum performance "humanizations" with [GrooVAE](https://g.co/magenta/groovae).
Demo Application: Endless Trios
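As a sketch of the sampling and interpolation API (the checkpoint URL follows the hosted-checkpoints pattern; `melodyA` and `melodyB` are `NoteSequence`s you provide):

```js
const mvae = new mm.MusicVAE(
    'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_4bar_small_q2');

async function run() {
  await mvae.initialize();
  // Draw two random 4-bar melodies from the latent space.
  const samples = await mvae.sample(2);
  // Morph between two of your own melodies in 5 steps.
  const interps = await mvae.interpolate([melodyA, melodyB], 5);
  console.log(samples.length, interps.length);
}
run();
```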
Piano Genie is a VQ-VAE model that maps 8-button input to a full 88-key piano in real time.
Demo Application: Piano Genie
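A sketch of the real-time API (here `checkpointURL` stands for a Piano Genie checkpoint from the hosted index):

```js
const genie = new mm.PianoGenie(checkpointURL);  // checkpointURL: an assumed placeholder

genie.initialize().then(() => {
  // Map a press of button 3 (of buttons 0-7) to one of the 88 piano keys.
  const keyIndex = genie.next(3, /* temperature= */ 0.25);
  console.log('Play piano key', keyIndex);
  genie.resetState();  // clear the RNN state between performances
});
```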
There are several ways to get @magenta/music in your project, either in the browser or in Node:
This bundle contains all the models and all the core library helpers in a single file. It is the simplest way to use Magenta.js.
To use this bundle, add the following code to an HTML file (a sketch along these lines; the CDN and checkpoint URLs follow the published patterns but may need adjusting):
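```html
<html>
<head>
  <!-- Load @magenta/music -->
  <script src="https://cdn.jsdelivr.net/npm/@magenta/music@^1.0.0"></script>
  <script>
    // Instantiate the model from a hosted 4-bar trio checkpoint.
    const model = new mm.MusicVAE(
        'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/trio_4bar');
    const player = new mm.Player();
    const initialized = model.initialize();

    function play() {
      player.resumeContext();  // audio must start from a user gesture
      initialized
          .then(() => model.sample(1))                   // sample one 4-bar trio...
          .then((samples) => player.start(samples[0]));  // ...and play it
    }
  </script>
</head>
<body>
  <button onclick="play()"><h1>Play Trio</h1></button>
</body>
</html>
```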
Open that HTML file in your browser (or click here for a hosted version) and the code will run. Click the "Play Trio" button to hear 4-bar trios that are randomly generated by MusicVAE.
It's also easy to add the ability to download MIDI for generated outputs, which is demonstrated in this example.
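One way to do this is a sketch like the following, using the core library's `sequenceProtoToMidi` helper to serialize a generated `NoteSequence` (the anchor-click download pattern is standard browser code):

```js
// Serialize a NoteSequence to MIDI bytes and trigger a browser download.
function downloadMidi(sequence, filename = 'sample.mid') {
  const midiBytes = mm.sequenceProtoToMidi(sequence);
  const blob = new Blob([midiBytes], {type: 'audio/midi'});
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = filename;
  link.click();
}
```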
See our demos for example usage.
Using a smaller ES6 bundle for just the code you need
We have also split all the models and the core library into smaller ES6 bundles (not ESModules, unfortunately 😢), so that you can use a model independently of the rest of the library. These bundles don't package their dependencies (such as Tone.js and TensorFlow.js), since there would be a risk of downloading multiple copies on the same page. Here is an example (a sketch; the exact CDN script URLs are illustrative):
```html
...
<!-- You need to bring your own Tone.js for the player, and tfjs for the model -->
<script src="https://cdn.jsdelivr.net/npm/tone"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<!-- Core library, since we're going to use a player -->
<script src="https://cdn.jsdelivr.net/npm/@magenta/music@^1.0.0/es6/core.js"></script>
<!-- Model we want to use -->
<script src="https://cdn.jsdelivr.net/npm/@magenta/music@^1.0.0/es6/music_vae.js"></script>
...
```
The Node-specific bundles (which don't transpile the CommonJS modules) are under `@magenta/music/node`. For example:
```ts
const mvae = require('@magenta/music/node/music_vae');
const core = require('@magenta/music/node/core');

// These hacks below are needed because the library uses performance and fetch which
// exist in browsers but not in node. We are working on simplifying this!
const globalAny: any = global;
globalAny.performance = Date;
globalAny.fetch = require('node-fetch');

// Your code:
const model = new mvae.MusicVAE('/path/to/checkpoint');
const player = new core.Player();
model.initialize()
    .then(() => model.sample(1))             // e.g. sample a sequence...
    .then((samples) => player.start(samples[0]));  // ...and play it
```
- `yarn install` to install dependencies.
- `yarn test` to run tests.
- `yarn build` to produce the different bundled versions.
- `yarn run-demos` to build and serve the demos, with live reload.
(Note: the default behavior is to build/watch all demos. Specific demos can be built by passing a comma-separated list of demo names: `yarn run-demos --demos=transcription,visualizer`.)
Since Magenta.js does not support training models, you must use weights from a model trained with the Python-based Magenta models. We also make available our own hosted pre-trained checkpoints.
Several pre-trained MusicRNN and MusicVAE checkpoints are hosted on GCS. The full list is available in this table and can be accessed programmatically via a JSON index at https://goo.gl/magenta/js-checkpoints-json.
More information is available at https://goo.gl/magenta/js-checkpoints.
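For example, a sketch of reading the checkpoint index programmatically (the shape of each entry is not documented here; log the parsed objects to inspect them):

```js
// Fetch and parse the hosted-checkpoint index.
fetch('https://goo.gl/magenta/js-checkpoints-json')
    .then((response) => response.json())
    .then((checkpoints) => {
      // Each entry describes one hosted checkpoint.
      checkpoints.forEach((checkpoint) => console.log(checkpoint));
    });
```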
Your Own Checkpoints
Dumping Your Weights
To use your own checkpoints with one of our models, you must first convert the weights to the appropriate format using the provided checkpoint_converter script.
This tool depends on tfjs-converter, which you must first install using `pip install tensorflowjs`. Once installed, you can execute the script as follows:
```bash
../scripts/checkpoint_converter.py /path/to/model.ckpt /path/to/output_dir
```
There are additional flags available to reduce the size of the output by removing unused (training) variables or using weight quantization. Call `../scripts/checkpoint_converter.py -h` to list the available options.
Specifying the Model Configuration
The model configuration should be placed in a JSON file named `config.json` in the same directory as your checkpoint. This configuration file contains all the information needed (besides the weights) to instantiate and run your model: the model type and data converter specification, plus optional chord encoding, auxiliary inputs, and attention length. An example `config.json` file might look like:
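(A sketch of such a file; the field names follow the data converter spec described above, and the exact values depend on your model.)

```json
{
  "type": "MusicRNN",
  "dataConverter": {
    "type": "MelodyConverter",
    "args": {
      "minPitch": 48,
      "maxPitch": 83
    }
  },
  "chordEncoder": "PitchChordEncoder"
}
```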
This configuration corresponds to a chord-conditioned melody MusicRNN model.
There are several SoundFonts that you can use with the `SoundFontPlayer` for more realistic-sounding instruments:
| Instrument | SoundFont | Source |
| --- | --- | --- |
| Piano | `salamander` | Audio samples from Salamander Grand Piano |
| Multi | `sgm_plus` | Audio samples based on SGM with modifications by John Nebauer |
| Percussion | `jazz_kit` | Audio samples from Jazz Kit (EXS) by Lithalean |
You can explore what each of them sounds like on this demo page.
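A sketch of playing a sequence through one of these SoundFonts (the hosted SoundFont URL pattern is assumed; `sequence` is a `NoteSequence` you provide):

```js
// Create a player backed by the sgm_plus SoundFont and play a NoteSequence.
const player = new mm.SoundFontPlayer(
    'https://storage.googleapis.com/magentadata/js/soundfonts/sgm_plus');
player.start(sequence);  // loads the required samples, then plays
```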