# string-tokenizer

Break your string into tokens in node and browser without headache.
## Why

- missing named groups in `RegExp` in JS
- to make code cleaner and easier to follow; parsing is not always an elegant thing
- `RegExp.source` concatenation (`(/foo/)|(/baz/)|..`) is not too flexible
- `JSON` as a token provider out of the box
- to keep `RegExp`s simple
- to provide an easy way to assess whether something should be a token or not, based upon external data
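The `RegExp.source` concatenation pain point above can be seen in plain JS. This is a generic illustration (not this library's code): when patterns are glued together by hand, capture groups are addressed by position, so consuming code breaks whenever an alternative is inserted.

```javascript
// Combine two patterns by concatenating their sources.
var hash = /#(\w+)/;
var word = /(\w+)/;
var combined = new RegExp(hash.source + '|' + word.source, 'g');

// Which capture group holds the match depends on concatenation order:
// group 1 is the hash tag, group 2 the bare word.
var m = combined.exec('#tag');
// m[1] is 'tag'; inserting another alternative before `hash` would
// silently renumber every group index after it.
```

Named patterns sidestep this fragility, which is part of the motivation above.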
## Usage

- in `node` or with any other common-js browser bundler:

```sh
npm install --save string-tokenizer
```

```js
var tokenizer = require('string-tokenizer')
```
- in browser:

```js
var tokenizer = dbtokenizer // as a global
// wrapped by UMD, available via AMD as string-tokenizer
```

```js
var tokens = tokenizer()
  .input('#aa test')
  .resolve()
// result
{ tag: 'aa', input: 'test' }
```
## API

- basic functionality: defining the input
- skipping an unwanted phrase during `walk`: TODO
- defining and using `helpers`: TODO
## Examples

```js
tokenizer()
  .input('Test with #tag and a http://google.com/ #url')
  .resolve()
// result
{
  symbol: [ 'Test', 'with', 'and', 'a' ],
  space: [ ' ', ' ', ' ', ' ', ' ' ],
  tag: [ '#tag', '#url' ],
  url: [ 'http://google.com/ ' ],
  _source: 'Test with #tag and a http://google.com/ #url'
}

// with .resolve(true)
{
  symbol: [
    { value: 'Test', position: 0 },
    { value: 'with', position: 5 },
    { value: 'and', position: 15 },
    { value: 'a', position: 19 }
  ],
  space: [
    { value: ' ', position: 4 },
    { value: ' ', position: 9 },
    { value: ' ', position: 14 },
    { value: ' ', position: 18 },
    { value: ' ', position: 20 }
  ],
  tag: [
    { value: '#tag', position: 10 },
    { value: '#url', position: 40 }
  ],
  url: [
    { value: 'http://google.com/ ', position: 21 }
  ],
  _source: 'Test with #tag and a http://google.com/ #url'
}
```
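The grouped output above (token name mapped to a list of matches, plus `_source`) can be approximated in a few lines of plain JS. This is a hedged sketch of the idea only, not the library's implementation; `sketchTokenize` and its parameters are hypothetical names for illustration.

```javascript
// Minimal sketch: run each named pattern over the input and group
// matches under the token name, mimicking the resolved output shape.
function sketchTokenize(input, tokens) {
  var result = { _source: input };
  Object.keys(tokens).forEach(function (name) {
    var re = new RegExp(tokens[name].source, 'g');
    var match;
    result[name] = [];
    while ((match = re.exec(input)) !== null) {
      // Record each match with its position, like .resolve(true) above.
      result[name].push({ value: match[0], position: match.index });
    }
  });
  return result;
}

var out = sketchTokenize('Test with #tag', {
  tag: /#\w+/,
  space: / /
});
// out.tag holds one match: '#tag' at position 10
```

The real library adds more on top of this (walking, skipping, helpers), but the output shape is the same idea: one bucket per token name.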
Run `npm run example` or open the `example.html` file to play around and get a better overview.