# @yozora/tokenizer-image

@yozora/tokenizer-image produces Image type nodes. See the documentation for details.
## Install

- npm

  ```bash
  npm install --save @yozora/tokenizer-image
  ```

- yarn

  ```bash
  yarn add @yozora/tokenizer-image
  ```
## Usage

@yozora/tokenizer-image has been integrated into @yozora/parser / @yozora/parser-gfm-ex / @yozora/parser-gfm, so you can use YozoraParser / GfmExParser / GfmParser directly.
### Basic Usage

@yozora/tokenizer-image cannot be used alone; it needs to be registered into a YastParser as a plugin before it can be used.
```typescript
import { DefaultParser } from '@yozora/core-parser'
import ParagraphTokenizer from '@yozora/tokenizer-paragraph'
import TextTokenizer from '@yozora/tokenizer-text'
import ImageTokenizer from '@yozora/tokenizer-image'

const parser = new DefaultParser()
  .useFallbackTokenizer(new ParagraphTokenizer())
  .useFallbackTokenizer(new TextTokenizer())
  .useTokenizer(new ImageTokenizer())

// parse source markdown content
parser.parse(`
![foo](/url "title")
`)
```
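For orientation, the image syntax above is represented by an Image node. The sketch below shows its approximate shape (field names follow the mdast `Image` type that Yozora nodes are modeled on; `position` info and the surrounding root/paragraph nodes are omitted) — this is an illustration, not exact parser output:

```javascript
// Approximate shape of the Image node produced for `![foo](/url "title")`.
// Field names follow the mdast Image type; position info is omitted.
const imageNode = {
  type: 'image',
  url: '/url',
  title: 'title',
  alt: 'foo',
}

console.log(imageNode.alt) // → 'foo'
```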
### Use within @yozora/parser

```typescript
import YozoraParser from '@yozora/parser'

const parser = new YozoraParser()

// parse source markdown content
parser.parse(`
![foo](/url "title")
`)
```
### Use within @yozora/parser-gfm

```typescript
import GfmParser from '@yozora/parser-gfm'

const parser = new GfmParser()

// parse source markdown content
parser.parse(`
![foo](/url "title")
`)
```
### Use within @yozora/parser-gfm-ex

```typescript
import GfmExParser from '@yozora/parser-gfm-ex'

const parser = new GfmExParser()

// parse source markdown content
parser.parse(`
![foo](/url "title")
`)
```
## Options

| Name       | Type     | Required | Default                     |
|:-----------|:---------|:---------|:----------------------------|
| `name`     | `string` | `false`  | `"@yozora/tokenizer-image"` |
| `priority` | `number` | `false`  | `TokenizerPriority.LINKS`   |
- `name`: The unique name of the tokenizer. It is bound to the tokens the tokenizer produces, and is used to determine which tokenizer should be called in each life cycle of a token during the matching / parsing phases.

- `priority`: Priority of the tokenizer, which determines the order of processing: tokenizers with higher priority are executed first. In addition, in the *match-block* stage, a high-priority tokenizer can interrupt the matching process of a low-priority tokenizer.

  Exception: delimiters of type `full` are always processed before delimiters of other types.
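To illustrate what `priority` means for ordering, here is a minimal, library-independent sketch: tokenizers with a higher `priority` value are tried first. The tokenizer names are real package names, but the numeric priorities below are made up for illustration (the actual values come from the `TokenizerPriority` enum):

```javascript
// Hypothetical registry: the numeric priorities are illustrative only.
const tokenizers = [
  { name: '@yozora/tokenizer-paragraph', priority: 1 },
  { name: '@yozora/tokenizer-image', priority: 10 },
]

// Higher priority runs first, so sort descending to get the processing order.
const processingOrder = [...tokenizers]
  .sort((a, b) => b.priority - a.priority)
  .map(t => t.name)

console.log(processingOrder)
// → [ '@yozora/tokenizer-image', '@yozora/tokenizer-paragraph' ]
```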