regexp-stream-tokenizer

This is a simple regular-expression-based tokenizer for streams.

IMPORTANT: If you return null from your token or separator function, the stream will end there.

IMPORTANT: Only supports object mode streams.

 
var tokenizer = require("regexp-stream-tokenizer");

var words = tokenizer(/\w+/g);

// Sink receives tokens: 'The', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog'
words.write('The quick brown fox jumps over the lazy dog');
words.pipe(sink);

// Separators are excluded by default, but can be included
var wordsAndSeparators = tokenizer({ separator: true }, /\w+/g);

// Sink receives tokens: 'The', ' ', 'quick', ' ', 'brown', ' ', 'fox', ' ', 'jumps', ' ', 'over', ...
wordsAndSeparators.write('The quick brown fox jumps over the lazy dog');
wordsAndSeparators.pipe(sink);
 

API

require("regexp-stream-tokenizer")([options,] regexp)

Create a stream.Transform instance with objectMode: true that will tokenize the input stream using the regexp.

var Tx = require("regexp-stream-tokenizer").ctor([options,] regexp)

Create a reusable stream.Transform TYPE that can be called via new Tx or Tx() to create an instance.

Arguments

  • options
    • excludeZBS (boolean): defaults to true.
    • token (boolean|string|function): defaults to true.
    • separator (boolean|string|function): defaults to false.
    • leaveBehind (string|Array): optionally provides pseudo-lookbehind support.
    • all other through2 options are passed through.
  • regexp (RegExp): the regular expression used to tokenize the stream.
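To make the token/separator semantics concrete, here is a synchronous sketch (assumed behaviour inferred from the docs above, not the package's source): matches are emitted as tokens, and with separator: true the text between matches is emitted as well.

```javascript
// Sketch of the assumed token/separator semantics.
function tokenize(input, regexp, options) {
  options = options || {};
  var out = [];
  var last = 0;
  for (var m of input.matchAll(regexp)) {
    // Emit the text between the previous match and this one.
    if (options.separator && m.index > last) out.push(input.slice(last, m.index));
    if (m[0].length > 0) out.push(m[0]); // excludeZBS-style: drop zero-length matches
    last = m.index + m[0].length;
  }
  // Emit any trailing separator text after the last match.
  if (options.separator && last < input.length) out.push(input.slice(last));
  return out;
}

console.log(tokenize("The quick fox", /\w+/g));
// [ 'The', 'quick', 'fox' ]
console.log(tokenize("The quick fox", /\w+/g, { separator: true }));
// [ 'The', ' ', 'quick', ' ', 'fox' ]
```
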

Install

npm i regexp-stream-tokenizer

Version

0.2.2

License

MIT

Collaborators

  • jramsay