Run Keras models (trained using the TensorFlow backend) in your browser, with GPU support. Models are created directly from the Keras JSON-format configuration file, using weights serialized directly from the corresponding HDF5 file. Also works in node, but only in CPU mode.
Currently the focus of this library is on forward-pass inference only.
Library version compatibility:
Demos:

- Basic Convnet for MNIST
- Convolutional Variational Autoencoder, trained on MNIST
- 50-layer Residual Network, trained on ImageNet
- Inception v3, trained on ImageNet
- SqueezeNet v1.1, trained on ImageNet
- Bidirectional LSTM for IMDB sentiment classification
See `demos/src/` for the source code of these demos, written in VueJS.
Works for models based on both the `Sequential` and `Model` classes:

```python
model = Sequential()
model.add(...)
...
```

or

```python
...
model = Model(input=..., output=...)
```
Once trained, save the weights and export model architecture config:
```python
model.save_weights('model.hdf5')
with open('model.json', 'w') as f:
    f.write(model.to_json())
```
See the Jupyter notebooks of the demos for details: `demos/notebooks/`. All that's required for ResNet50, for example, is:
```python
from keras.applications import resnet50

model = resnet50.ResNet50(include_top=True, weights='imagenet')
model.save_weights('resnet50.hdf5')
with open('resnet50.json', 'w') as f:
    f.write(model.to_json())
```
```sh
$ python encoder.py /path/to/model.hdf5
```
This will produce 2 files in the same folder as the HDF5 weights: `model_weights.buf` and `model_metadata.json`.

The 3 files required by Keras.js are:

- the model file: `model.json`
- the weights file: `model_weights.buf`
- the weights metadata file: `model_metadata.json`
or in node (4+ required):
```sh
$ npm install keras-js --save
# or
$ yarn add keras-js
```
```js
// namespaced
const KerasJS = require('keras-js')
// or
import KerasJS from 'keras-js'

// not namespaced
const Model = require('keras-js').Model
// or
import { Model } from 'keras-js'
```
On instantiation, data is loaded using XHR (same-domain or CORS required), and layers are initialized as a directed acyclic graph:
```js
// in browser, URLs can be relative or absolute
const model = new KerasJS.Model({
  filepaths: {
    model: 'url/path/to/model.json',
    weights: 'url/path/to/model_weights.buf',
    metadata: 'url/path/to/model_metadata.json'
  },
  gpu: true
})
```

```js
// in node, gpu flag will always be off
// paths can be filesystem paths or absolute URLs
// if filesystem paths, this must be specified:
const model = new KerasJS.Model({
  filepaths: {
    model: 'path/to/model.json',
    weights: 'path/to/model_weights.buf',
    metadata: 'path/to/model_metadata.json'
  },
  filesystem: true
})
```
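The directed-acyclic-graph initialization can be illustrated with a minimal sketch (not the actual Keras.js internals; the layer and field names below are made up): each layer is visited only after all of its inbound layers, i.e. a topological sort.

```js
// Minimal illustrative sketch (not Keras.js internals): order layers so
// that each one comes after all of its inbound layers (topological sort).
function topologicalOrder(layers) {
  const byName = new Map(layers.map(layer => [layer.name, layer]))
  const visited = new Set()
  const order = []
  const visit = name => {
    if (visited.has(name)) return
    visited.add(name)
    for (const dep of byName.get(name).inbound) visit(dep)
    order.push(name)
  }
  for (const layer of layers) visit(layer.name)
  return order
}

// a tiny graph with a skip connection: input -> conv -> merge -> dense,
// where merge also takes input directly
const order = topologicalOrder([
  { name: 'dense', inbound: ['merge'] },
  { name: 'merge', inbound: ['input', 'conv'] },
  { name: 'conv', inbound: ['input'] },
  { name: 'input', inbound: [] }
])
// order: ['input', 'conv', 'merge', 'dense']
```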
`model.ready()` returns a Promise which resolves when these steps are complete. Then, use `model.predict()` to run data through the model, which also returns a Promise.
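Since both `ready()` and `predict()` return Promises, the calls chain naturally. Here is a sketch using a stand-in object with the same Promise-based shape (the stub and its `output_1` key are illustrative, not the Keras.js API):

```js
// Stand-in with the same Promise-based interface; with Keras.js you
// would use the model instance created above instead of this stub.
const model = {
  ready: () => Promise.resolve(),
  predict: inputData => Promise.resolve({ output_1: inputData.input_1 })
}

const prediction = model
  .ready()
  .then(() => {
    // input data is keyed by the model's input layer name(s)
    const inputData = { input_1: new Float32Array([0.1, 0.2, 0.3]) }
    return model.predict(inputData)
  })
  .then(outputData => {
    // output data is keyed by output layer name, e.g. outputData.output_1
    return outputData
  })
  .catch(err => {
    // handle error
  })
```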
Alternatively, we can use async/await:

```js
try {
  await model.ready()
  const inputData = {
    'input_1': data
  }
  const outputData = await model.predict(inputData)
} catch (err) {
  // handle error
}
```
- core: Dense, Activation, Dropout, SpatialDropout1D, SpatialDropout2D, SpatialDropout3D, Flatten, Reshape, Permute, RepeatVector
- convolutional: Conv1D, Conv2D, SeparableConv2D, Conv2DTranspose, Conv3D, Cropping1D, Cropping2D, Cropping3D, UpSampling1D, UpSampling2D, UpSampling3D, ZeroPadding1D, ZeroPadding2D, ZeroPadding3D
- pooling: MaxPooling1D, MaxPooling2D, MaxPooling3D, AveragePooling1D, AveragePooling2D, AveragePooling3D, GlobalMaxPooling1D, GlobalMaxPooling2D, GlobalMaxPooling3D, GlobalAveragePooling1D, GlobalAveragePooling2D, GlobalAveragePooling3D
- recurrent: SimpleRNN, LSTM, GRU
- merge: Add, Multiply, Average, Maximum, Concatenate, Dot
- advanced activations: LeakyReLU, PReLU, ELU, ThresholdedReLU
- noise: GaussianNoise, GaussianDropout
- wrappers: Bidirectional, TimeDistributed
- legacy: Merge, MaxoutDense, Highway
- local: LocallyConnected1D, LocallyConnected2D
WebWorkers and their limitations
Keras.js can be run in a WebWorker separate from the main thread. Because Keras.js performs a lot of synchronous computations, running it in a WebWorker keeps the UI on the main thread from being blocked. However, one of the biggest limitations of WebWorkers is the lack of `<canvas>` (and thus WebGL) access. So the benefits gained by running Keras.js in a separate thread are offset by the necessity of running it in CPU mode only. In other words, Keras.js can run in GPU mode only on the main thread. This will not be the case forever.
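A sketch of that constraint (`canUseGPU` is a made-up helper, not part of the Keras.js API): WebGL requires a `<canvas>`, and `document` is only defined on the main thread of a browser, so both WebWorkers and node end up in CPU mode.

```js
// Hypothetical helper, not part of Keras.js: GPU mode requires WebGL,
// which requires <canvas> -- available only on the browser main thread.
function canUseGPU() {
  if (typeof document === 'undefined') {
    // WebWorker or node: no document, no canvas, no WebGL
    return false
  }
  const canvas = document.createElement('canvas')
  return Boolean(canvas.getContext('webgl') || canvas.getContext('experimental-webgl'))
}
```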
In GPU mode, tensor objects are encoded as WebGL textures prior to computation. The size of these tensors is limited by `gl.getParameter(gl.MAX_TEXTURE_SIZE)`, which differs by hardware and platform. For operations involving tensors that exceed this value along any dimension, the operation falls back to the CPU.
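The fallback rule can be sketched as a simple per-dimension check (`needsCPUFallback` is a made-up name; in the browser the limit would come from `gl.getParameter(gl.MAX_TEXTURE_SIZE)`):

```js
// Illustrative helper (not the Keras.js internals): an op involving a
// tensor whose shape exceeds the texture size limit along any dimension
// must run on the CPU instead of as a WebGL texture.
function needsCPUFallback(shape, maxTextureSize) {
  return shape.some(dim => dim > maxTextureSize)
}

// 16384 is a common MAX_TEXTURE_SIZE on modern desktop GPUs
needsCPUFallback([4096, 4096], 16384) // → false: fits, stays on GPU
needsCPUFallback([1, 65536], 16384)   // → true: falls back to CPU
```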
Firefox on certain platforms (macOS in particular, possibly others) still has hard-coded texture size limits. Even on modern GPUs, this limit may be too low. This is a known issue. While Keras.js will gracefully fall back to the CPU in this case, computational performance will be degraded. One way to get around this is to go to about:config, change `gfx.work-around-driver-bugs` to `false`, and restart the browser. This should increase the max texture size back to normal.
There are extensive tests for each implemented layer. See `notebooks/` for the Jupyter notebooks that generate the data for these tests.
$ npm install
To run all tests, run `npm run server` and go to http://localhost:3000/test/. All tests will run automatically. Open your browser's devtools for additional test data.
For development, run:
$ npm run watch
Editing any file in `src/` will trigger webpack to rebuild.
To create a production UMD webpack build, run:
$ npm run build:browser
Data files for the demos are located in `demos/data/`. Due to their large size, this folder is ignored by git. Clone the keras-js-demos-data repo and copy its contents to `demos/data/`.