linear-algebra
NOTE: If you're serious about doing machine learning in the browser, I recommend using deeplearn.js.
Efficient, high-performance linear algebra library for node.js and browsers.
This is a low-level algebra library which supports basic vector and matrix operations, and has been designed with machine learning algorithms in mind.
Features:
- Simple, expressive, chainable API.
- Array implementation with performance optimizations.
- Enhanced floating point precision if needed.
- Comprehensive unit tests.
- Works in node.js and browsers.
- Small: ~1 KB minified and gzipped.
Installation
CommonJS
Install using npm:
$ npm install linear-algebra
Browser
Include the dist/linear-algebra.js script in your HTML. In the browser the library is exposed via the linearAlgebra() function.
How to use
Since linear algebra calculations tend to be CPU-intensive it is highly recommended that you run them within a separate thread or process. For browsers this means using a web worker. For node.js there are plenty of similar solutions available.
Initialisation
The examples below assume you are running in node.js. The library needs to be initialised once loaded:
```js
var linearAlgebra = require('linear-algebra')(),  // initialise it
    Vector = linearAlgebra.Vector,
    Matrix = linearAlgebra.Matrix;
```
Note that both matrices and vectors are represented by Matrix instances. The Vector object simply contains helpers to create single-row Matrix objects.
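To see why a single row is enough, here is a plain-array sketch (ordinary nested arrays, not this library's code) of a naive dot-product; the same row-by-column path handles a 1-row "vector" and a full matrix alike:

```js
// Naive dot-product over plain nested arrays: a (m x n) times b (n x p).
function dot(a, b) {
  return a.map(function (row) {
    return b[0].map(function (_, j) {
      var sum = 0;
      for (var k = 0; k < row.length; k++) sum += row[k] * b[k][j];
      return sum;
    });
  });
}

var vec = [[1, 2, 3]];                // a "vector" is just a 1-row matrix
var mat = [[1, 0], [0, 1], [1, 1]];   // an ordinary 3x2 matrix

dot(vec, mat);   // [ [4, 5] ] - the result is again a 1-row matrix
```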
In-place methods
Matrix operations which result in a new matrix are implemented as two methods: a default method which returns a new Matrix instance, and an in-place method which causes the original to be overwritten. In some cases the in-place version will give you better performance, and in other cases the default version will.
The in-place version of a method has the same name as the default method plus a _ suffix:
```js
var m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);

// default
var m2 = m.mulEach(5);   // multiply every element by 5
m2 === m;  // false

// in-place
var m2 = m.mulEach_(5);  // notice the _ suffix
m2 === m;  // true
```
Using the in-place version of a method may not always yield a performance improvement. You can run the performance benchmarks to see examples of this.
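For intuition, the two styles look like this over plain nested arrays. This is a sketch of the calling convention only, not this library's internals:

```js
// Default style: allocate and return a brand new matrix.
function mulEach(m, v) {
  return m.map(function (row) {
    return row.map(function (x) { return x * v; });
  });
}

// In-place style: overwrite the existing arrays, return the same object.
function mulEach_(m, v) {
  for (var i = 0; i < m.length; i++) {
    for (var j = 0; j < m[i].length; j++) m[i][j] *= v;
  }
  return m;
}

var m = [[1, 2], [3, 4]];
mulEach(m, 5) === m;    // false - fresh arrays were allocated
mulEach_(m, 5) === m;   // true  - nothing new was allocated
```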
API
```js
var m, m2, m3;   // variables we'll use below

/* Construction */

m = new Matrix([ [1, 2, 3], [4, 5, 6] ]);
console.log( m.rows );  // 2
console.log( m.cols );  // 3
console.log( m.data );  // [ [1, 2, 3], [4, 5, 6] ]

// identity matrix
m = Matrix.identity(3);
console.log( m.data );  // [ [1,0,0], [0,1,0], [0,0,1] ]

// scalar (diagonal) matrix
m = Matrix.scalar(3, 9);
console.log( m.data );  // [ [9,0,0], [0,9,0], [0,0,9] ]

// zeros
m = Matrix.zero(3, 2);
console.log( m.data );  // [ [0, 0], [0, 0], [0, 0] ]

// reshape from array
m = Matrix.reshapeFrom([1, 2, 3, 4, 5, 6], 2, 3);
console.log( m.data );  // [ [1, 2, 3], [4, 5, 6] ]

// vector (a 1-row matrix)
m = Vector.zero(5);
console.log( m.data );  // [ [0, 0, 0, 0, 0] ]

/* Algebra */

// transpose
m = new Matrix([ [1, 2, 3], [4, 5, 6] ]);
m2 = m.trans();
console.log( m2.data );  // [ [1, 4], [2, 5], [3, 6] ]

// dot-product
m = new Matrix([ [1, 2, 3], [4, 5, 6] ]);
m2 = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m3 = m.dot(m2);
console.log( m3.data );  // [ [22, 28], [49, 64] ]

// multiply corresponding elements
m = new Matrix([ [10, 20], [30, 40], [50, 60] ]);
m2 = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m3 = m.mul(m2);
console.log( m3.data );  // [ [10, 40], [90, 160], [250, 360] ]

// divide corresponding elements
m = new Matrix([ [10, 20], [30, 40], [50, 60] ]);
m2 = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m3 = m.div(m2);
console.log( m3.data );  // [ [10, 10], [10, 10], [10, 10] ]

// add corresponding elements
m = new Matrix([ [10, 20], [30, 40], [50, 60] ]);
m2 = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m3 = m.plus(m2);
console.log( m3.data );  // [ [11, 22], [33, 44], [55, 66] ]

// subtract corresponding elements
m = new Matrix([ [10, 20], [30, 40], [50, 60] ]);
m2 = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m3 = m.minus(m2);
console.log( m3.data );  // [ [9, 18], [27, 36], [45, 54] ]

/* Math functions */

// natural log (Math.log)
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m2 = m.log();
console.log( m2.data );  // [ [0.0000, 0.69315], [1.09861, 1.38629], [1.60944, 1.79176] ]

// sigmoid
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m2 = m.sigmoid();
console.log( m2.data );  // [ [0.73106, 0.88080], [0.95257, 0.98201], [0.99331, 0.99753] ]

// add value to each element
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m2 = m.plusEach(5);
console.log( m2.data );  // [ [6, 7], [8, 9], [10, 11] ]

// multiply each element by value
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m2 = m.mulEach(5);
console.log( m2.data );  // [ [5, 10], [15, 20], [25, 30] ]

// any function
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m2 = m.map(function(v) {
  return v - 1;
});
console.log( m2.data );  // [ [0, 1], [2, 3], [4, 5] ]

// any function, with row and column passed in
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m2 = m.eleMap(function(v, row, col) {
  return (0 === row) ? 1 : v * 2 + 1;   // example transform
});
console.log( m2.data );  // [ [1, 1], [7, 9], [11, 13] ]

/* Calculations */

// sum all elements
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
console.log( m.getSum() );  // 21

/* Other methods */

// cloning
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m2 = m.clone();
console.log( m2.data );  // [ [1, 2], [3, 4], [5, 6] ]

// to plain array
m = new Matrix([ [1, 2], [3, 4], [5, 6] ]);
m2 = m.toArray();
console.log( m2 );  // [ [1, 2], [3, 4], [5, 6] ]
```
Higher precision
When adding floating point numbers together, the result is sometimes slightly off due to the limits of binary floating point representation (to see this, try 0.1 + 0.2 in your JS console).
This library allows you to supply a custom adder (e.g. the add module) as an option to the initialisation call.
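One well-known technique such an adder can use is Kahan (compensated) summation. The sketch below illustrates the idea only; it is not the add module's actual implementation:

```js
// Kahan summation: carry the low-order bits that plain addition would
// round away, and feed them back into the next step.
function kahanSum(values) {
  var sum = 0;
  var c = 0;                    // running compensation for lost bits
  for (var i = 0; i < values.length; i++) {
    var y = values[i] - c;
    var t = sum + y;
    c = (t - sum) - y;          // what was rounded off in sum + y
    sum = t;
  }
  return sum;
}

var xs = [];
for (var i = 0; i < 10; i++) xs.push(0.1);

var naive = xs.reduce(function (a, b) { return a + b; }, 0);
naive;          // 0.9999999999999999
kahanSum(xs);   // noticeably closer to 1 than the naive sum
```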
In node.js:
```js
// we pass the 'add' function in as a parameter...
var linAlg = require('linear-algebra')({
      add: require('add')
    }),
    Vector = linAlg.Vector,
    Matrix = linAlg.Matrix;
```
In the browser you will need to load the higher-precision build of the library to be able to do this.
Note: If you use the higher-precision version of the library with a custom adder then expect performance to drop significantly for some matrix operations.
Performance
Performance vs. similar modules:
```
[17:23:14] Running suite vs. other modules [/Users/home/dev/js/linear-algebra/benchmark/vs-other-modules.perf.js]...
[17:23:20] Matrix dot-product - linear-algebra x 288 ops/sec ±1.21%
[17:23:25] Matrix dot-product - sylvester x 56.77 ops/sec ±4.51%
[17:23:25] Fastest test is Matrix dot-product - linear-algebra at 5.1x faster than Matrix dot-product - sylvester
```
To run the performance benchmarks:
$ npm install -g gulp
$ npm install
$ gulp benchmark
As noted above, matrix operations which result in a new matrix come in two forms: a default method which returns a new Matrix instance, and an in-place method which overwrites the original.
The in-place versions exist because, generally speaking, memory allocation and garbage collection are expensive operations that you don't want happening while you're performing lots of calculations: overwriting an existing array is roughly twice as fast as creating a new one. And since resizing an array is also expensive, even when a matrix operation results in a smaller matrix than before, the internal array is kept at the same size:
```js
var m = new Matrix([ [1, 2, 3], [4, 5, 6] ]);
var m2 = new Matrix([ [7], [8], [9] ]);

m.dot_(m2);   // in-place dot-product: 2x3 becomes logically 2x1

console.log( m.data );  // [ [50, 2, 3], [122, 5, 6] ] - old values remain beyond the new bounds
console.log( m.rows );  // 2
console.log( m.cols );  // 1
```
The in-place versions attempt to limit memory allocations as much as possible and therefore ought to be faster. However, this may not be true for all the matrix operations contained in this library.
If you're dealing with large matrices (more than 100 rows and columns) then you're more likely to see a benefit from using the in-place versions of methods:
```
[14:38:35] Running suite Default vs in-place modification [/Users/home/dev/js/linear-algebra/benchmark/default-vs-in-place.perf.js]...
[14:38:41] Matrix dot-product - default x 1,114,666 ops/sec ±0.94%
[14:38:46] Matrix dot-product - in-place x 721,296 ops/sec ±2.95%
[14:38:52] Matrix dot-product - default x 269 ops/sec ±3.75%
[14:38:57] Matrix dot-product - in-place x 283 ops/sec ±0.94%
[14:39:09] Matrix dot-product - default x 1.40 ops/sec ±9.96%
[14:39:20] Matrix dot-product - in-place x 1.45 ops/sec ±4.30%
[14:39:26] Matrix transpose - default x 13,770 ops/sec ±3.00%
[14:39:31] Matrix transpose - in-place x 9,736 ops/sec ±2.44%
[14:39:37] Multiple matrix operations - default x 218 ops/sec ±2.57%
[14:39:42] Multiple matrix operations - in-place x 222 ops/sec ±0.71%
```
Building
To build the code and run the tests:
$ npm install -g gulp
$ npm install
$ gulp
Contributing
Contributions are welcome! Please see CONTRIBUTING.md.
License
MIT - see LICENSE.md