This library implements the Web Audio API specification on Node.js.
This is not even alpha yet. Use this library only if you're the adventurous kind.
npm install web-audio-api
Get ready, this is going to blow your mind:
npm install
gulp default
node test/manual-testing/AudioContext-sound-output.js
By default, node-web-audio-api doesn't play back the sound it generates. In fact, an
AudioContext has no default output, and you need to give it a writable node stream to which it can write raw PCM audio. After creating an
AudioContext, set its output stream like this:
audioContext.outStream = writableStream
This is probably the simplest way to play back audio. Install node-speaker with
npm install speaker, then do something like this:
var AudioContext = require('web-audio-api').AudioContext
  , context = new AudioContext()
var Speaker = require('speaker')
context.outStream = new Speaker({
  channels: context.format.numberOfChannels,
  bitDepth: context.format.bitDepth,
  sampleRate: context.sampleRate
})
// Create some audio nodes here to make some noise ...
Linux users can play back sound from node-web-audio-api by piping its output to aplay. For this, simply send the generated sound straight to
stdout like this:
var AudioContext = require('web-audio-api').AudioContext
  , context = new AudioContext()
context.outStream = process.stdout
// Create some audio nodes here to make some noise ...
Then start your script, piping it to aplay like so:
node myScript.js | aplay -f cd
icecast is an open-source streaming server. It works great and is very easy to set up. icecast accepts connections from different source clients which provide the sound to encode and stream. ices is a client for icecast which accepts raw PCM audio on its standard input, and you can send sound from node-web-audio-api to ices (which will forward it to icecast) by simply doing:
var spawn = require('child_process').spawn
  , AudioContext = require('web-audio-api').AudioContext
  , context = new AudioContext()
var ices = spawn('ices', ['ices.xml'])
context.outStream = ices.stdin
A live example is available on Sébastien's website
Gibber is a great audiovisual live coding environment for the browser, made by Charlie Roberts. For audio it uses the Web Audio API, so you can run it on node-web-audio-api. First install Gibber with npm:
npm install gibber.audio.lib
Then you can run the following test to check that everything works:
npm test gibber.audio.lib
Each time you create an AudioNode (like for instance an AudioBufferSourceNode or a GainNode), it inherits from DspObject, which is in charge of two things: scheduling events, and computing the node's audio when its _tick method is called.
Each time you connect an AudioNode using source.connect(destination, output, input), it connects the relevant AudioOutput instances of the source node to the relevant AudioInput instances of the destination node.
To instantiate all of these AudioNode objects, you need an overall AudioContext instance. The latter has a destination property (where the sound will flow out), an instance of AudioDestinationNode, which inherits from AudioNode. The AudioContext instance keeps track of connections to the destination. When a connection happens, it triggers the audio loop, calling _tick infinitely on the destination, which will itself call _tick on its input ... and so forth, going up the whole audio graph.
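This pull model can be illustrated with a stripped-down sketch in plain JavaScript. None of the classes below are the library's actual implementations; they only show the idea that each node's _tick pulls a block of samples from its input, so ticking the destination pulls audio through the whole graph:

```javascript
// Conceptual sketch of the pull-based audio loop (not the library's real code).
function ConstantSource(value) {
  this._tick = function() {
    // A real node would compute a block of samples here.
    return [value, value, value, value]
  }
}

function Gain(input, gain) {
  this._tick = function() {
    // Pull a block from upstream, then scale it.
    return input._tick().map(function(sample) { return sample * gain })
  }
}

function Destination(input) {
  this._tick = function() {
    // Ticking the destination pulls audio through the whole graph.
    return input._tick()
  }
}

var graph = new Destination(new Gain(new ConstantSource(0.5), 2))
var block = graph._tick()  // one block of samples: [1, 1, 1, 1]
```

In the real library, the audio loop simply keeps calling _tick on the destination for as long as something is connected to it.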
Right now everything runs in one process, so if you set a break point in your code, there's going to be a lot of buffer underflows, and you won't be able to debug anything.
One trick is to kill the AudioContext right before the break point. That way the audio loop is stopped, and you can inspect your objects in peace.
Tests are written with mocha. To run them, install mocha with:
npm install -g mocha
And in the root folder run:
mocha
To test the sound output, you need to install node-speaker (in addition to all the other dependencies) and build the library:
npm install
npm install speaker
gulp default
node test/manual-testing/AudioContext-sound-output.js
To test AudioParam against an AudioParam implemented in a browser, open
test/manual-testing/AudioParam-browser-plots.html in that browser.
61 Sébastien Piquemal
16 ouhouhsami
4 John Wnek
2 anprogrammer
1 Andrew Petersen
decodeAudioData : support only for wav
start : has no effect
AudioContext : method
audioports : bug fixes