Mixed English and Chinese Tokenizer
Tokenizes a string that contains both Chinese and English words.
Optionally, lemmatizes English words. English contractions such as "don't" are always expanded to their full forms, such as "do not".
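The contraction expansion described above can be sketched with a small lookup table. This is an illustration only, not the library's actual implementation; the table and function name are assumptions:

```javascript
// Illustrative sketch: a table-driven contraction expander. The real
// tokenizer's internals are not shown in this README.
const CONTRACTIONS = {
  "don't": "do not",
  "won't": "will not",
  "can't": "can not",
  "i'm": "i am",
};

function expandContractions(words) {
  // Replace each known contraction with its full-form words; pass
  // everything else through unchanged.
  return words.flatMap((word) => {
    const full = CONTRACTIONS[word.toLowerCase()];
    return full ? full.split(" ") : [word];
  });
}
```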
Supports Cantonese / Taiwanese / Mandarin. Defaults to producing output in
Traditional Chinese (see options.simplified below).
```javascript
// NOTE: the module path and sample input below are reconstructed; the
// originals were garbled in the source.
const MEACT = require('mixed-english-and-chinese-tokenizer');

(async () => {
  let m = new MEACT();
  let tokens = await m.tokenize('我 am here');
  console.log(tokens);
  m = null; // free up memory occupied by CEDICT
})();
```
tokenize() returns an array of tokens from the string. Punctuation is excluded.
```javascript
// The sample input is assumed from the expected output.
let m = new MEACT();
let tokens = await m.tokenize('I am here.');
// ['I', 'am', 'here']
```
The same as
tokenize(), except that English words are converted into lemmas.
For example, 'doing' will be changed to 'do'.
```javascript
// The method name and sample input below are assumed; the originals were
// garbled in the source.
let m = new MEACT();
let tokens = await m.lemmatize('I am here.');
// ['I', 'be', 'here']
```
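Irregular forms like 'am' → 'be' cannot be produced by suffix stripping alone. A minimal dictionary-plus-suffix sketch, purely for illustration (the table and function name are assumptions, not the library's code):

```javascript
// Minimal illustration of lemmatization: irregular forms come from a lookup
// table, regular inflections from naive suffix stripping.
const IRREGULAR = { am: "be", is: "be", are: "be", was: "be", were: "be" };

function lemmaOf(word) {
  const w = word.toLowerCase();
  if (w in IRREGULAR) return IRREGULAR[w];
  // Crude suffix stripping; real lemmatizers do far more than this.
  return w.replace(/(ing|ed|s)$/, "");
}
```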
The constructor takes an optional options object with the following properties:
options.simplified - Boolean. Whether to output Simplified Chinese. Defaults to
false, in which case all output is converted to Traditional Chinese, even if the input is in Simplified Chinese.
options.lemmaCache - Object. A cache of previously computed English lemmas. Lemmatizing an English word is computationally expensive (50-400 ms), so you may want to pass in an object that caches the lemmas. You can keep that object in memory across calls or persist it to disk. The code would look roughly like this:
```javascript
// The method name and sample input below are assumed; the originals were
// garbled in the source.
let lemmaCache = {}; // assume you stored it somewhere, e.g. loaded from disk
const m = new MEACT({ lemmaCache });
let lemmatized = await m.lemmatize('I am here.');
// lemmaCache has been updated with new lemmas, save it to disk. Next time the
// same sentence will be lemmatized much faster.
```