# @yozora/tokenizer-paragraph

@yozora/tokenizer-paragraph produces Paragraph type nodes. See the documentation for details.
## Install

* npm

  ```bash
  npm install --save @yozora/tokenizer-paragraph
  ```

* yarn

  ```bash
  yarn add @yozora/tokenizer-paragraph
  ```
## Usage

@yozora/tokenizer-paragraph has been integrated into @yozora/parser / @yozora/parser-gfm-ex / @yozora/parser-gfm, so you can use `YozoraParser` / `GfmExParser` / `GfmParser` directly.
### Basic Usage

@yozora/tokenizer-paragraph cannot be used alone; it needs to be registered in a YastParser as a plugin before it can be used.
```typescript
import { DefaultParser } from '@yozora/core-parser'
import ParagraphTokenizer from '@yozora/tokenizer-paragraph'
import TextTokenizer from '@yozora/tokenizer-text'

// ParagraphTokenizer is the fallback block tokenizer, so it is registered
// through useFallbackTokenizer rather than useTokenizer.
const parser = new DefaultParser()
  .useFallbackTokenizer(new ParagraphTokenizer())
  .useFallbackTokenizer(new TextTokenizer())

// parse source markdown content
parser.parse(`
aaa
bbb
`)
```
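The node shape below is a hand-written sketch of the paragraph a parse like the one above would produce, assuming yozora follows the mdast-style `{ type, children }` convention; verify the exact fields against your parser version's actual output.

```typescript
// A hedged sketch of the Paragraph AST shape (not actual parser output):
// a paragraph node whose children are inline nodes, here a single text node.
interface TextNode {
  type: 'text'
  value: string
}

interface ParagraphNode {
  type: 'paragraph'
  children: TextNode[]
}

// Roughly what the paragraph inside the parsed root would look like
// for the input 'aaa\nbbb'.
const paragraph: ParagraphNode = {
  type: 'paragraph',
  children: [{ type: 'text', value: 'aaa\nbbb' }],
}

console.log(paragraph.type) // 'paragraph'
```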
### Use within @yozora/parser

```typescript
import YozoraParser from '@yozora/parser'

const parser = new YozoraParser()

// parse source markdown content
parser.parse(`
aaa
bbb
`)
```
### Use within @yozora/parser-gfm

```typescript
import GfmParser from '@yozora/parser-gfm'

const parser = new GfmParser()

// parse source markdown content
parser.parse(`
aaa
bbb
`)
```
### Use within @yozora/parser-gfm-ex

```typescript
import GfmExParser from '@yozora/parser-gfm-ex'

const parser = new GfmExParser()

// parse source markdown content
parser.parse(`
aaa
bbb
`)
```
## Options

| Name       | Type     | Required | Default                         |
|:-----------|:---------|:---------|:--------------------------------|
| `name`     | `string` | `false`  | `"@yozora/tokenizer-paragraph"` |
| `priority` | `number` | `false`  | `TokenizerPriority.FALLBACK`    |
* `name`: The unique name of the tokenizer. It is bound to the tokens the tokenizer generates, and is used to determine which tokenizer should be called for a token in each lifecycle of the matching / parsing phases.

* `priority`: Priority of the tokenizer, which determines the order of processing: tokenizers with a higher priority are executed first. In addition, in the *match-block* stage, a higher-priority tokenizer can interrupt the matching process of a lower-priority tokenizer.
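The priority mechanism can be pictured with a minimal sketch. This is illustrative only, not yozora's actual scheduler: candidate tokenizers are consulted in descending priority order, and the paragraph tokenizer, registered at the lowest (fallback) priority, claims any non-blank line that nothing else matched.

```typescript
// Illustrative only: a minimal priority-ordered dispatch loop, not
// yozora's real implementation. All names here are made up.
interface MiniTokenizer {
  name: string
  priority: number
  match(line: string): boolean
}

const FALLBACK_PRIORITY = -1

const tokenizers: MiniTokenizer[] = [
  { name: 'heading', priority: 10, match: line => line.startsWith('#') },
  // The paragraph tokenizer acts as the fallback: it accepts any
  // non-blank line that no higher-priority tokenizer claimed.
  {
    name: 'paragraph',
    priority: FALLBACK_PRIORITY,
    match: line => line.trim() !== '',
  },
]

// Higher-priority tokenizers are consulted first.
tokenizers.sort((a, b) => b.priority - a.priority)

function dispatch(line: string): string | undefined {
  return tokenizers.find(t => t.match(line))?.name
}

console.log(dispatch('# title'))    // 'heading'
console.log(dispatch('plain text')) // 'paragraph'
```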