Mixed English and Chinese Tokenizer
Tokenizes sentences containing a mix of Chinese and English words.
Optionally, lemmatizes English words. English contractions such as "don't" are always changed to full forms such as "do not".
Supports Cantonese / Taiwanese / Mandarin. Defaults to producing output in Traditional Chinese (see options below).
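The contraction expansion mentioned above could be implemented with a lookup table along these lines. This is illustrative only, not this package's actual code; the table entries and function name are assumptions:

```javascript
// Illustrative sketch: a minimal contraction-expansion table of the kind
// such a tokenizer might use internally (not this package's real table).
const CONTRACTIONS = {
  "don't": ['do', 'not'],
  "can't": ['can', 'not'],
  "I'm": ['I', 'am'],
};

// Replace each contraction with its full form; pass other words through.
function expandContractions(words) {
  return words.flatMap((w) => CONTRACTIONS[w] || [w]);
}
```

For example, `expandContractions(["don't", 'worry'])` yields `['do', 'not', 'worry']`.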
Important
In order for this to work, you need the CEDICT dictionary. Search for "cedict_ts.u8", download the file, and place it in the root folder of your Node.js application.
Example
const MEACT = require('...'); // substitute the package's module path
let m = new MEACT();

(async () => {
  console.log(await m.tokenize('I am here')); // ['I', 'am', 'here']
  m = null; // free up memory occupied by CEDICT
})();
Methods
async tokenize(text)
Returns an array of tokens from the given string. Punctuation is excluded.
let m = new MEACT();
let tokens = await m.tokenize('I am here'); // ['I', 'am', 'here']
async lemmatize(text)
The same as tokenize(), except that English words are converted into lemmas.
For example, 'doing' will be changed to 'do'.
let m = new MEACT();
let tokens = await m.lemmatize('I am here'); // ['I', 'be', 'here']
Options
The constructor takes an optional options object.
options.simplified
- Boolean. Whether to output Simplified Chinese. Default is false, in which case all output is converted to Traditional Chinese, even if the input is in Simplified Chinese.

options.lemmaCache
- Object. An object storing the cache of all English lemmas. Lemmatizing an English word is computationally expensive (50-400 ms). For that reason, you may want to pass in an object that will cache the lemmas. You could store that object globally or save it to disk. The code would look roughly like this:
let lemmaCache = {}; // assume you stored it somewhere and loaded it back
const m = new MEACT({ lemmaCache });
let lemmatized = await m.lemmatize('I am here');
// lemmaCache has been updated with new lemmas; save it to disk. Next time the
// same sentence will be lemmatized much faster.