FastestSmallestTextEncoderDecoder


    1.0.22 • Public • Published


    This JavaScript library provides the most performant tiny polyfill for window.TextEncoder, TextEncoder.prototype.encodeInto, and window.TextDecoder for use in the browser, in NodeJS, in RequireJS, in Web Workers, in SharedWorkers, and in ServiceWorkers.

    Quick Start

    Add the following HTML code inside the <head>:

    <script src="" nomodule="" type="text/javascript"></script>

    If no script on the page requires this library before the DOMContentLoaded event fires, then use the much less blocking version below:

    <script defer="" src="" nomodule="" type="text/javascript"></script>

    Alternatively, either use the encoder-only script to polyfill just window.TextEncoder (for converting a String into a Uint8Array) or use the decoder-only script to polyfill just window.TextDecoder (for converting a Uint8Array/ArrayBuffer/TypedArray/global.Buffer into a String).

    The nomodule attribute prevents the script from being needlessly downloaded and executed on browsers which already support TextEncoder and TextDecoder. nomodule does not test for the presence of TextEncoder or TextDecoder, but it is very safe to assume that browsers advanced enough to support modules also support TextEncoder and TextDecoder.
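    The same check can also be performed at runtime. A minimal sketch (the script path below is a placeholder for illustration, not the package's real CDN URL):

```javascript
// Runtime equivalent of the nomodule trick: only load the polyfill when
// TextEncoder/TextDecoder are actually missing from the given scope.
function needsTextCodecPolyfill(scope) {
  return typeof scope.TextEncoder === "undefined" ||
         typeof scope.TextDecoder === "undefined";
}

if (typeof document !== "undefined" && needsTextCodecPolyfill(window)) {
  var script = document.createElement("script");
  script.src = "/path/to/EncoderDecoderTogether.min.js"; // placeholder path
  document.head.appendChild(script);
}
```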


    See the MDN for documentation. For the TextEncoder.prototype.encodeInto polyfill, please use the full package, the encodeInto-only browser build, or npm i fastestsmallesttextencoderdecoder-encodeinto for NodeJS, ES6 modules, RequireJS, AngularJS, or whatever it is that floats your boat. The encodeInto folder of this repository contains the auto-generated encodeInto build of the main project. The npm package is fastestsmallesttextencoderdecoder-encodeinto:

    npm install fastestsmallesttextencoderdecoder-encodeinto
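    Once installed, encodeInto writes UTF-8 bytes into a caller-supplied buffer instead of allocating a new one. A quick sketch of the standard API shape (shown with the native implementation, which the polyfill mirrors):

```javascript
// encodeInto fills an existing Uint8Array and reports progress:
// `read` = UTF-16 code units consumed, `written` = UTF-8 bytes produced.
const target = new Uint8Array(8);
const { read, written } = new TextEncoder().encodeInto("héllo", target);
// "é" occupies two UTF-8 bytes, so 5 code units in -> 6 bytes out.
```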

    RequireJS and NodeJS

    For dropping into either RequireJS or NodeJS, please use the fastestsmallesttextencoderdecoder npm package, this minified file, or the corresponding source code file. To install via npm, run the following command.

    npm install fastestsmallesttextencoderdecoder

    Alternatively, if one does not know how to use the command line, save the script corresponding to one's operating system to the directory where the NodeJS script will run and use the file manager to run the script (on Windows, it's a double-click).

    After installing via npm, one can use require("fastestsmallesttextencoderdecoder"). Alternatively, one can drop the EncoderAndDecoderNodeJS.min.js file into the same directory as their NodeJS script and do require("./EncoderAndDecoderNodeJS.min.js"). Both methods are functionally equivalent.
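    A round-trip sketch: in a project with the package installed, the first line would be the require("fastestsmallesttextencoderdecoder") call shown above; the native classes are used here only because the polyfill deliberately mirrors their API.

```javascript
// In a real project:
//   const { TextEncoder, TextDecoder } = require("fastestsmallesttextencoderdecoder");
const bytes = new TextEncoder().encode("Hello, world!"); // Uint8Array of UTF-8 bytes
const text  = new TextDecoder().decode(bytes);           // back to the original string
```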


    Angular

    Open a terminal in the project's directory, and install fastestsmallesttextencoderdecoder via npm.

    npm install fastestsmallesttextencoderdecoder

    Then, add import 'fastestsmallesttextencoderdecoder'; to the polyfills.ts file.


    Benchmarks

    Don't take my word that FastestSmallestTextEncoderDecoder is the fastest. Instead, check out the benchmarks below. You can run your own benchmarks by cloning this repo and running npm run benchmark, but beware that you need a beefy computer with plenty of free RAM, as the NodeJS garbage collector is disabled via --noconcurrent_sweeping --nouse-idle-notification so that it does not interfere with the timing of the tests (the GC is run manually via global.gc(true) at the conclusion of the tests).

    The tests below were performed on an ASCII file. To ensure consistency, all test results are the mean of the IQR of many, many trials. The checkmark "✔" means that the encoder/decoder implementation gave the correct output, whereas a bold "✗" indicates an incorrect output. This extra check is significant because relying on a faulty encoder/decoder can lead to inconsistent behaviors in code that defaults to using the native implementation where available.
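    The "mean of the IQR" statistic can be sketched as follows (a hypothetical helper for illustration; the repository's actual benchmark code may differ):

```javascript
// Sort the trial timings, keep the middle 50% (the interquartile range),
// and average what remains, discarding outlier runs on either end.
function meanOfIQR(samples) {
  const sorted = samples.slice().sort((a, b) => a - b);
  const q1 = Math.floor(sorted.length / 4);
  const q3 = Math.ceil(sorted.length * 3 / 4);
  const middle = sorted.slice(q1, q3);
  return middle.reduce((sum, x) => sum + x, 0) / middle.length;
}
```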

    Library Decode 32 bytes Decode 32768 bytes Decode 16777216 bytes Encode 32 bytes Encode 32768 bytes Encode 16777216 bytes
    Native 10201 KB/sec ✔ 806451 KB/sec ✔ 907381 KB/sec ✔ 53415 KB/sec ✔ 4661211 KB/sec ✔ 1150916 KB/sec ✔
    FastestSmallestTextEncoderDecoder 18038 KB/sec ✔ 154839 KB/sec ✔ 168984 KB/sec ✔ 21667 KB/sec ✔ 404279 KB/sec ✔ 681429 KB/sec ✔
    fast-text-encoding 17518 KB/sec ✔ 71806 KB/sec ✔ 99017 KB/sec ✔ 22713 KB/sec ✔ 240880 KB/sec ✔ 445137 KB/sec ✔
    text-encoding-shim 10205 KB/sec ✔ 17503 KB/sec ✔ 27971 KB/sec ✔ 14044 KB/sec ✔ 50007 KB/sec ✔ 88687 KB/sec ✔
    TextEncoderLite 12433 KB/sec ✔ 23456 KB/sec ✔ 13929 KB/sec ✔ 24013 KB/sec ✔ 57034 KB/sec ✔ 62119 KB/sec ✔
    TextEncoderTextDecoder.js 4469 KB/sec ✔ 5956 KB/sec ✔ 5626 KB/sec ✔ 13576 KB/sec ✔ 37667 KB/sec ✔ 57916 KB/sec ✔
    text-encoding 3084 KB/sec ✔ 6762 KB/sec ✔ 7925 KB/sec ✔ 8621 KB/sec ✔ 26699 KB/sec ✔ 35755 KB/sec ✔

    Needless to say, FastestSmallestTextEncoderDecoder outperformed every other polyfill out there. In fact, it is so fast that it outperformed the native implementation on a set of 32 ASCII bytes. The tests below were performed on a mixed ASCII-UTF-8 file.

    Library Decode 32 bytes Decode 32768 bytes Decode 16777216 bytes Encode 32 bytes Encode 32768 bytes Encode 16777216 bytes
    Native 24140 KB/sec ✔ 365043 KB/sec ✔ 512133 KB/sec ✔ 54183 KB/sec ✔ 293455 KB/sec ✔ 535203 KB/sec ✔
    FastestSmallestTextEncoderDecoder 13932 KB/sec ✔ 113823 KB/sec ✔ 141706 KB/sec ✔ 20755 KB/sec ✔ 212100 KB/sec ✔ 443344 KB/sec ✔
    fast-text-encoding 10738 KB/sec ✔ 62851 KB/sec ✔ 94031 KB/sec ✔ 15105 KB/sec ✔ 104843 KB/sec ✔ 320778 KB/sec ✔
    TextEncoderLite 6594 KB/sec ✔ 9893 KB/sec ✔ 10470 KB/sec ✔ 17660 KB/sec ✗ 53905 KB/sec ✗ 57862 KB/sec ✗
    text-encoding-shim 10778 KB/sec ✔ 15063 KB/sec ✔ 24373 KB/sec ✔ 27296 KB/sec ✔ 31496 KB/sec ✔ 42497 KB/sec ✔
    TextEncoderTextDecoder.js 5558 KB/sec ✔ 5121 KB/sec ✔ 6580 KB/sec ✔ 14583 KB/sec ✔ 32261 KB/sec ✔ 60183 KB/sec ✔
    text-encoding 3531 KB/sec ✔ 6669 KB/sec ✔ 7983 KB/sec ✔ 7233 KB/sec ✔ 20343 KB/sec ✔ 29136 KB/sec ✔

    FastestSmallestTextEncoderDecoder excels at encoding lots of complex Unicode and runs at 83% the speed of the native implementation. In the next test, let's examine a more real-world example: the 1876 The Russian Synodal Bible.txt. It's a whopping 4.4 MB rat's nest of complex Russian UTF-8, sure to give any encoder/decoder a bad day. Let's see how they perform at their worst.

    Library Decode Russian Bible Encode Russian Bible
    Native 626273 KB/sec ✔ 951538 KB/sec ✔
    FastestSmallestTextEncoderDecoder 228360 KB/sec ✔ 428625 KB/sec ✔
    fast-text-encoding 94666 KB/sec ✔ 289109 KB/sec ✔
    text-encoding-shim 29335 KB/sec ✔ 60508 KB/sec ✔
    TextEncoderLite 14079 KB/sec ✔ 61648 KB/sec ✔
    TextEncoderTextDecoder.js 5989 KB/sec ✔ 54741 KB/sec ✔
    text-encoding 7919 KB/sec ✔ 28043 KB/sec ✔

    Browser Support

    This polyfill will bring support for TextEncoder/TextDecoder to the following browsers.

    Feature Chrome Firefox Opera Edge Internet Explorer Safari Android Samsung Internet Node.js
    Full Polyfill 7.0 4.0 11.6 12.0** 10 5.1 (Desktop) / 4.2 (iOS) 4.0 1.0 3.0
    Partial Polyfill* 1.0** 0.6 7.0 (Desktop) / 9.5** (Mobile) 12.0** 4.0 2.0 1.0** 1.0** 0.10

    Also note that while this polyfill may work in these old browsers, it is very likely that the rest of one's website will not work unless one makes a conscious effort to have their code work in these old browsers.

    * Partial polyfill means that Array (or Buffer in NodeJS) will be used instead of Uint8Array/[typedarray].

    ** Supported since the first public release of the browser
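    Code that must always see a Uint8Array can normalize the partial polyfill's output with something like this hypothetical helper:

```javascript
// On engines covered only by the partial polyfill, encode() may return a
// plain Array (or a Buffer in old NodeJS) rather than a Uint8Array.
// Normalize either shape before handing the result to typed-array code.
function toUint8Array(result) {
  return result instanceof Uint8Array ? result : new Uint8Array(result);
}
```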

    API Documentation

    Please review the MDN documentation for window.TextEncoder and window.TextDecoder for information on how to use TextEncoder and TextDecoder.

    As for NodeJS, calling require("EncoderAndDecoderNodeJS.min.js") yields the following object. Note that this polyfill checks for global.TextEncoder and global.TextDecoder and returns the native implementation if available.

    module.exports = {
        TextEncoder: function TextEncoder(){/*...*/},
        TextDecoder: function TextDecoder(){/*...*/},
        encode: TextEncoder.prototype.encode,
        decode: TextDecoder.prototype.decode
    };
    In NodeJS, one does not ever have to use new just to get the encoder/decoder (although one still can do so if they want to). All of the code snippets below function identically (aside from unused local variables introduced into the scope).

        // Variation 1
        const {TextEncoder, TextDecoder} = require("fastestsmallesttextencoderdecoder");
        const encode = (new TextEncoder).encode;
        const decode = (new TextDecoder).decode;
        // Variation 2
        const {encode, decode} = require("fastestsmallesttextencoderdecoder");
        // Variation 3 (a rewording of Variation 2)
        const encodeAndDecodeModule = require("fastestsmallesttextencoderdecoder");
        const encode = encodeAndDecodeModule.encode;
        const decode = encodeAndDecodeModule.decode;

    Or, one can use ES6 module import statements.

        // Variation 1
        import {TextEncoder, TextDecoder} from "fastestsmallesttextencoderdecoder";
        const encode = (new TextEncoder).encode;
        const decode = (new TextDecoder).decode;
        // Variation 2
        import {encode, decode} from "fastestsmallesttextencoderdecoder";
        // Variation 3 (a rewording of Variation 2)
        import * as encodeAndDecodeModule from "fastestsmallesttextencoderdecoder";
        const encode = encodeAndDecodeModule.encode;
        const decode = encodeAndDecodeModule.decode;
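    As noted earlier, the decoder accepts several input types. A quick sketch using the standard API (native here; the polyfill behaves the same):

```javascript
// TextDecoder accepts a Uint8Array, an ArrayBuffer, or any TypedArray view.
const utf8 = new Uint8Array([226, 130, 172]);             // the UTF-8 bytes of "€"
const fromView = new TextDecoder().decode(utf8);          // decode a Uint8Array
const fromBuffer = new TextDecoder().decode(utf8.buffer); // decode an ArrayBuffer
```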


    Visit the GitHub Page to see a demonstration. As seen in the Web Worker hexWorker.js, the GitHub Pages demonstration uses a special encoderAndDecoderForced.src.js version of this library to forcefully install the TextEncoder and TextDecoder even when there is native support. That way, this demonstration should serve to truthfully demonstrate this polyfill.

    npm Project

    This project can be found on npm as fastestsmallesttextencoderdecoder.


    Development

    On Linux, the project can be developed by cloning it with the following command line. The development scripts are designed to be interpreted by Dash; whether they work on macOS is unknown, but they certainly won't work on Windows.

    git clone; cd FastestSmallestTextEncoderDecoder; npm run install-dev

    Note the npm run install-dev step, which downloads closure-compiler.jar into the repository for minifying the files.

    Now that the repository is cloned, edit the files as one sees fit. Do not edit the files in the encodeInto folder. Those are all auto-generated by having Closure Compiler set ENCODEINTO_BUILD to true and removing dead code for compactness. Also, do not run npm run build in the encodeInto folder. That's done automatically when npm run build is run in the topmost folder. Once the files have been edited, run the following in a terminal in the root folder of the repository in order to minify the NodeJS JavaScript files.

    npm run build

    To edit tests, edit test/node.js. These tests are compared against the native implementation to ensure validity. To run tests, do the following.

    npm run test


    Feel free to reach out to me. I am fairly attentive to my GitHub account, but in the unlikely event that issues/pulls start piling up, I of course welcome others to step in and contribute. I am wide open to input and collaboration from anyone on all of my projects.

