A data structure for fast Unicode character metadata lookup, ported from ICU
When implementing many Unicode algorithms, such as text segmentation, normalization, and bidi processing, fast access to character metadata is crucial to good performance. There are over a million code points in the Unicode standard, many of which produce the same result when looked up, so an array or hash table is not appropriate: those data structures are fast, but would require a lot of memory. The data is generally grouped in ranges, so you could do a binary search, but that is not fast enough for some applications.
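For context, the binary-search baseline mentioned above might look like the sketch below. The range table here is a made-up toy, not real Unicode data, and `rangeLookup` is a hypothetical name:

```javascript
// Hypothetical range table: [start, end, value] triples, sorted by start,
// covering contiguous spans of code points.
const ranges = [
  [0x0000, 0x0040, 1],
  [0x0041, 0x005a, 2],
  [0x005b, 0x10ffff, 3],
];

// Classic binary search over ranges: O(log n) per lookup.
function rangeLookup(codePoint) {
  let lo = 0, hi = ranges.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    const [start, end, value] = ranges[mid];
    if (codePoint < start) hi = mid - 1;
    else if (codePoint > end) lo = mid + 1;
    else return value;
  }
  return 0; // default for gaps in the table
}

rangeLookup(0x41);  // => 2
rangeLookup(0x20);  // => 1
rangeLookup(0x100); // => 3
```

Every lookup costs several comparisons and unpredictable branches, which is what the trie structure below avoids.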
The International Components for Unicode (ICU) project came up with a data structure based on a Trie that provides fast access to Unicode metadata. The range data is precompiled to a serialized and flattened trie, which is then used at runtime to look up the necessary data. According to my own tests, this is generally at least 50% faster than binary search, with not too much additional memory required.
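The core idea behind this structure can be sketched as a multi-stage table: the high bits of the code point select a block via an index array, and the low bits select an entry within that block, so a lookup is just a couple of array reads. The sketch below is a simplified two-stage layout, not ICU's actual format; `SHIFT` and the tables are hypothetical values chosen for illustration:

```javascript
const SHIFT = 5;               // hypothetical: 32 code points per block
const MASK = (1 << SHIFT) - 1;

// Toy data array with two blocks of 32 entries: block 0 (offset 0) is all
// zeros (the default value); block 1 (offset 32) holds 99 at position 7.
const data = new Uint32Array(64);
data[32 + 7] = 99;

// Index array: one entry per block of code points. Every entry points at
// block 0, except the block containing code point 0x47, which points at
// block 1. Blocks with identical contents share one data block, which is
// where the memory savings come from.
const index = new Uint32Array(0x110000 >> SHIFT);
index[0x47 >> SHIFT] = 32;

function lookup(codePoint) {
  // Two array reads: high bits pick the data block, low bits the offset.
  return data[index[codePoint >> SHIFT] + (codePoint & MASK)];
}

lookup(0x47);   // => 99
lookup(0x1234); // => 0 (default)
```

The real ICU trie adds more stages and compression tricks, but the constant-time, branch-free lookup is the same.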
```
npm install unicode-trie
```
Building a Trie
Unicode Tries are generally precompiled from data in the Unicode database
for faster runtime performance. To build a Unicode Trie, use the
UnicodeTrieBuilder class.
```javascript
const UnicodeTrieBuilder = require('unicode-trie/builder');
const fs = require('fs');

// create a trie
let t = new UnicodeTrieBuilder();

// optional parameters for default value, and error value
// if not provided, both are set to 0
t = new UnicodeTrieBuilder(10, 999);

// set individual values and ranges
t.set(0x4567, 99);
t.setRange(0x40, 0xe7, 0x1234);

// you can lookup a value if you like
t.get(0x4567); // => 99

// get a compiled trie (returns a UnicodeTrie object)
const trie = t.freeze();

// write compressed trie to a binary file
fs.writeFileSync('data.trie', trie.toBuffer());
```
Using a precompiled Trie
Once you've built a precompiled trie, you can load it into the
UnicodeTrie class, which is a read-only representation of the
trie. From there, you can look up values.
```javascript
const UnicodeTrie = require('unicode-trie');
const fs = require('fs');

// load serialized trie from binary file
const data = fs.readFileSync('data.trie');
const trie = new UnicodeTrie(data);

// lookup a value
trie.get(0x4567); // => 99
```