@magenta/music 1.8.0
Complete documentation is available at https://tensorflow.github.io/magenta-js/music.
For the Python TensorFlow implementations, see the main Magenta repo.
Here are a few applications built with @magenta/music:
- Piano Scribe by Monica Dinculescu and Adam Roberts
- Beat Blender by Google Creative Lab
- Melody Mixer by Google Creative Lab
- Latent Loops by Google Pie Shop
- Neural Drum Machine by Tero Parviainen
- Tenori-Off by Monica Dinculescu
We have made an effort to port our most useful models, but please file an issue if you think something is missing, or feel free to submit a Pull Request!
Piano Transcription w/ Onsets and Frames
OnsetsAndFrames implements Magenta's piano transcription model for converting raw audio to MIDI in the browser. While it is somewhat flexible, it works best on solo piano recordings. The algorithm takes half the duration of the audio to run on most browsers, but due to a Webkit bug, audio resampling makes it significantly slower on Safari.
Demo Application: Piano Scribe
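As a sketch of the transcription API (the checkpoint URL and the `audioFile` input are illustrative; see the hosted checkpoints table for real checkpoint paths):

```javascript
import * as mm from '@magenta/music';

// Illustrative checkpoint URL -- consult the hosted checkpoints table.
const model = new mm.OnsetsAndFrames(
    'https://storage.googleapis.com/magentadata/js/checkpoints/transcription/onsets_frames_uni');

// `audioFile` is assumed to be a File/Blob, e.g. from an <input type="file">.
// The result is a NoteSequence of transcribed piano notes.
model.initialize()
    .then(() => model.transcribeFromAudioFile(audioFile))
    .then((noteSequence) => console.log(`${noteSequence.notes.length} notes`));
```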
MusicRNN
MusicRNN implements Magenta's LSTM-based language models, including MelodyRNN, DrumsRNN, ImprovRNN, and PerformanceRNN.
Demo Application: Neural Drum Machine
MusicVAE
MusicVAE implements several configurations of Magenta's variational autoencoder model, MusicVAE, including melody and drum "loop" models, 4- and 16-bar "trio" models, chord-conditioned multi-track models, and drum performance "humanization" with GrooVAE.
Demo Application: Endless Trios
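As a sketch of the MusicVAE API, here is how two random trios could be sampled and morphed through latent space (the checkpoint URL is one of the hosted `trio_4bar` checkpoints; `sample` and `interpolate` both return promises of NoteSequences):

```javascript
import * as mm from '@magenta/music';

// A hosted 4-bar trio checkpoint (see the checkpoints table).
const model = new mm.MusicVAE(
    'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/trio_4bar');

model.initialize().then(async () => {
  // Draw two random 4-bar trios from the latent space...
  const [a, b] = await model.sample(2);
  // ...and interpolate between them in 5 steps through latent space.
  const steps = await model.interpolate([a, b], 5);
  console.log(steps.length);  // 5 NoteSequences, morphing from a to b
});
```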
Piano Genie
Piano Genie is a VQ-VAE model that maps 8-button input to a full 88-key piano in real time.
Demo Application: Piano Genie
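A minimal sketch of the Piano Genie interaction loop (the checkpoint path is a placeholder; each button press is mapped to a piano key conditioned on what has been played so far):

```javascript
import * as mm from '@magenta/music';

// Placeholder path -- substitute a real Piano Genie checkpoint.
const genie = new mm.PianoGenie('/path/to/piano_genie/checkpoint');

genie.initialize().then(() => {
  // Map one of 8 buttons (0-7) to one of 88 piano keys (0-87).
  const keyIndex = genie.next(3);
  console.log(keyIndex);
  genie.resetState();  // Start a new performance.
});
```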
via Script Tag
Add the following code to an HTML file:
```html
<html>
<head>
  <!-- Load @magenta/music -->
  <script src="https://cdn.jsdelivr.net/npm/@magenta/music@^1.0.0"></script>
  <script>
    // Instantiate a MusicVAE model from a hosted checkpoint.
    const model = new mm.MusicVAE(
        'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/trio_4bar');
    const player = new mm.SoundFontPlayer(
        'https://storage.googleapis.com/magentadata/js/soundfonts/sgm_plus');

    // Sample a trio and play it back.
    function play() {
      mm.Player.tone.context.resume();  // Enable audio on a user gesture.
      model.sample(1).then((samples) => player.start(samples[0]));
    }
  </script>
</head>
<body><button onclick="play()">Play Trio</button></body>
</html>
```
Open up that html file in your browser (or click here for a hosted version) and the code will run. Click the "Play Trio" button to hear 4-bar trios that are randomly generated by MusicVAE.
It's also easy to add the ability to download MIDI for generated outputs, which is demonstrated in this example.
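A minimal sketch of MIDI export, assuming a generated NoteSequence is in hand (`mm.sequenceProtoToMidi` serializes a NoteSequence to MIDI bytes; the download helper is illustrative):

```javascript
import * as mm from '@magenta/music';

// Convert a generated NoteSequence to MIDI bytes and offer it as a download.
function downloadMidi(sequence, filename) {
  const midiBytes = mm.sequenceProtoToMidi(sequence);
  const blob = new Blob([midiBytes], {type: 'audio/midi'});
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = filename;
  link.click();
}
```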
First add the dependency to your project with `yarn add @magenta/music`. Then, you can use the library in your own code as in the following example:

```js
import * as mm from '@magenta/music';

// Instantiate a model from a checkpoint directory.
const model = new mm.MusicVAE('/path/to/checkpoint');
const player = new mm.Player();

// Sample a sequence and play it once the model has initialized.
model
    .initialize()
    .then(() => model.sample(1))
    .then((samples) => player.start(samples[0]));
```
See our demos for example usage.
Development
- `yarn install` to install dependencies.
- `yarn test` to run tests.
- `yarn bundle` to produce a bundled version.
- `yarn run-demos` to build and serve the demos, with live reload.

(Note: the default behavior is to build/watch all demos; specific demos can be built by passing a comma-separated list of demo names, e.g. `yarn run-demos --demos=transcription,visualizer`.)
Pre-trained Checkpoints
Since MagentaMusic.js does not support training models, you must use weights from a model trained with the Python-based Magenta models. We also host our own pre-trained checkpoints.
Several pre-trained MusicRNN and MusicVAE checkpoints are hosted on GCS. The full list is available in this table and can be accessed programmatically via a JSON index at https://goo.gl/magenta/js-checkpoints-json.
More information is available at https://goo.gl/magenta/js-checkpoints.
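For example, the index could be fetched and filtered client-side. The field names below (`model`, `id`) are assumptions about the index schema; inspect the actual JSON to confirm them:

```javascript
// Fetch the JSON checkpoint index and list entries for one model type.
// NOTE: the `model` and `id` field names are assumptions about the schema.
fetch('https://goo.gl/magenta/js-checkpoints-json')
    .then((response) => response.json())
    .then((checkpoints) => {
      const musicVaeCheckpoints = checkpoints.filter((c) => c.model === 'MusicVAE');
      musicVaeCheckpoints.forEach((c) => console.log(c.id));
    });
```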
Your Own Checkpoints
Dumping Your Weights
To use your own checkpoints with one of our models, you must first convert the weights to the appropriate format using the provided checkpoint_converter script.
This tool depends on tfjs-converter, which you must first install with
`pip install tensorflowjs`. Once installed, you can execute the script as follows:
../scripts/checkpoint_converter.py /path/to/model.ckpt /path/to/output_dir
There are additional flags available to reduce the size of the output by removing unused (training) variables or using weight quantization. Call
`../scripts/checkpoint_converter.py -h` to list the available options.
Specifying the Model Configuration
The model configuration should be placed in a JSON file named
config.json in the same directory as your checkpoint. This configuration file contains all the information needed (besides the weights) to instantiate and run your model: the model type and data converter specification plus optional chord encoding, auxiliary inputs, and attention length. An example
config.json file might look like:
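As a sketch, such a file could look like the following (the field names follow the MusicRNN and data-converter naming conventions; the exact converter arguments are illustrative):

```json
{
  "type": "MusicRNN",
  "dataConverter": {
    "type": "MelodyConverter",
    "args": {
      "minPitch": 48,
      "maxPitch": 83
    }
  },
  "chordEncoder": "PitchChordEncoder"
}
```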
This configuration corresponds to a chord-conditioned melody MusicRNN model.