
    remark-parse

    8.0.4 • Public • Published



    Parser for unified. Parses Markdown to mdast syntax trees. Used in the remark processor but can be used on its own as well. Can be extended to change how Markdown is parsed.












    Install

    npm install remark-parse


    Use

    var unified = require('unified')
    var createStream = require('unified-stream')
    var markdown = require('remark-parse')
    var remark2rehype = require('remark-rehype')
    var html = require('rehype-stringify')

    var processor = unified()
      .use(markdown, {commonmark: true})
      .use(remark2rehype)
      .use(html)

    process.stdin.pipe(createStream(processor)).pipe(process.stdout)

    See unified for more examples »



    API

    See unified for API docs »

    processor().use(parse[, options])

    Configure the processor to read Markdown as input and process mdast syntax trees.
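For orientation, here is a sketch of the mdast tree that parsing `# Hello` yields. Positional information is omitted for brevity; node shapes follow the mdast specification:

```javascript
// A sketch of the mdast tree for the Markdown input `# Hello`.
// `position` fields are omitted for brevity.
var tree = {
  type: 'root',
  children: [
    {
      type: 'heading',
      depth: 1,
      children: [{type: 'text', value: 'Hello'}]
    }
  ]
}

console.log(tree.children[0].type) // heading
```

Transformers later in the pipeline receive and manipulate trees of this shape.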


    Options can be passed directly, or passed later through processor.data().


    options.gfm

    GFM mode (boolean, default: true).

    hello ~~hi~~ world

    Turns on:

    • Fenced code blocks
    • Autolinking of URLs and email addresses
    • Deletions (strikethrough)
    • Task lists
    • Tables
    options.commonmark

    CommonMark mode (boolean, default: false).

    This is a paragraph
        and this is also part of the preceding paragraph.


    Allows:

    • Empty lines to split block quotes
    • Parentheses (( and )) around link and image titles
    • Any escaped ASCII punctuation character
    • Closing parenthesis ()) as an ordered list marker
    • URL definitions in block quotes


    Disallows:

    • Indented code blocks directly following a paragraph
    • ATX headings (# Hash headings) without spacing after opening hashes or before closing hashes
    • Setext headings (Underline headings\n---) when following a paragraph
    • Newlines in link and image titles
    • White space in link and image URLs in auto-links (links in brackets, < and >)
    • Lazy block quote continuation, lines not preceded by a greater than character (>), for lists, code, and thematic breaks

    options.pedantic

    Pedantic mode (boolean, default: false).

    ⚠️ Pedantic was previously used to mimic old-style Markdown: no tables, no fenced code, and many bugs. It still “works”, but please do not use it; it will be removed in the future.


    options.blocks

    Blocks (Array.<string>, default: list of block HTML elements).


    Defines which HTML elements are seen as block level.


    parse.Parser

    Access to the parser, if you need it.

    Extending the Parser

    Typically, using transformers to manipulate a syntax tree produces the desired output. Sometimes, such as when introducing new syntactic entities with a certain precedence, interfacing with the parser is necessary.

    If the remark-parse plugin is used, it adds a Parser constructor function to the processor. Other plugins can add tokenizers to its prototype to change how Markdown is parsed.

    The below plugin adds a tokenizer for at-mentions.

    module.exports = mentions

    function mentions() {
      var Parser = this.Parser
      var tokenizers = Parser.prototype.inlineTokenizers
      var methods = Parser.prototype.inlineMethods

      // Add an inline tokenizer (defined in the following example).
      tokenizers.mention = tokenizeMention

      // Run it just before `text`.
      methods.splice(methods.indexOf('text'), 0, 'mention')
    }
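The `methods.splice` call can be tried on its own against a copy of the default inline method order (the precedence list documented below); this is a plain-array sketch, not remark API:

```javascript
// A copy of the default inline method order; `splice` inserts
// 'mention' at the position 'text' currently occupies, pushing
// 'text' one slot later.
var methods = [
  'escape', 'autoLink', 'url', 'email', 'html', 'link', 'reference',
  'strong', 'emphasis', 'deletion', 'code', 'break', 'text'
]

methods.splice(methods.indexOf('text'), 0, 'mention')

console.log(methods.slice(-2)) // [ 'mention', 'text' ]
```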


    Parser#blockTokenizers

    Map of names to tokenizers (Object.<Function>). These tokenizers (such as fencedCode, table, and paragraph) eat from the start of a value to a line ending.

    See #blockMethods below for a list of methods that are included by default.


    Parser#blockMethods

    List of blockTokenizers names (Array.<string>). Specifies the order in which block tokenizers run.

    Precedence of default block methods is as follows:

    • blankLine
    • indentedCode
    • fencedCode
    • blockquote
    • atxHeading
    • thematicBreak
    • list
    • setextHeading
    • html
    • definition
    • table
    • paragraph


    Parser#inlineTokenizers

    Map of names to tokenizers (Object.<Function>). These tokenizers (such as url, reference, and emphasis) eat from the start of a value. To increase performance, they depend on locators.

    See #inlineMethods below for a list of methods that are included by default.


    Parser#inlineMethods

    List of inlineTokenizers names (Array.<string>). Specifies the order in which inline tokenizers run.

    Precedence of default inline methods is as follows:

    • escape
    • autoLink
    • url
    • email
    • html
    • link
    • reference
    • strong
    • emphasis
    • deletion
    • code
    • break
    • text

    function tokenizer(eat, value, silent)

    There are two types of tokenizers: block level and inline level. Both are functions, and work the same, but inline tokenizers must have a locator.

    The following example shows an inline tokenizer that is added by the mentions plugin above.

    tokenizeMention.notInLink = true
    tokenizeMention.locator = locateMention

    function tokenizeMention(eat, value, silent) {
      var match = /^@(\w+)/.exec(value)

      if (match) {
        if (silent) {
          return true
        }

        return eat(match[0])({
          type: 'link',
          url: 'https://social-network/' + match[1],
          children: [{type: 'text', value: match[0]}]
        })
      }
    }
    Tokenizers test whether a document starts with a certain syntactic entity. In silent mode, they return whether that test passes. In normal mode, they consume that token, a process which is called “eating”.

    Locators enable inline tokenizers to function faster by providing where the next entity may occur.

    Signatures

    • Node? = tokenizer(eat, value)
    • boolean? = tokenizer(eat, value, silent)

    Parameters

    • eat (Function) — Eat, when applicable, an entity
    • value (string) — Value which may start an entity
    • silent (boolean, optional) — Whether to detect or consume

    Properties

    • locator (Function) — Required for inline tokenizers
    • onlyAtStart (boolean) — Whether nodes can only be found at the beginning of the document
    • notInBlock (boolean) — Whether nodes cannot be in block quotes
    • notInList (boolean) — Whether nodes cannot be in lists
    • notInLink (boolean) — Whether nodes cannot be in links

    Returns

    • boolean?, in silent mode — Whether a node can be found at the start of value
    • Node?, in normal mode — A node, if one can be found at the start of value
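To see the silent/normal distinction concretely, the `tokenizeMention` tokenizer from the earlier example can be exercised outside of remark with a stubbed `eat` (an assumption for illustration; the real `eat`, provided by remark-parse, validates the subvalue and patches positional information onto the node):

```javascript
// Stub `eat`: just hands the node back, unlike the real one.
function eat(subvalue) {
  return function (node) {
    return node
  }
}

// The inline tokenizer from the mentions example above.
function tokenizeMention(eat, value, silent) {
  var match = /^@(\w+)/.exec(value)

  if (match) {
    if (silent) {
      return true
    }

    return eat(match[0])({
      type: 'link',
      url: 'https://social-network/' + match[1],
      children: [{type: 'text', value: match[0]}]
    })
  }
}

var node = tokenizeMention(eat, '@alice says hi', false)
console.log(node.url) // https://social-network/alice
console.log(tokenizeMention(eat, '@alice says hi', true)) // true
console.log(tokenizeMention(eat, 'plain text', true)) // undefined
```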

    tokenizer.locator(value, fromIndex)

    Locators are required for inline tokenizers. Their role is to keep parsing performant.

    The following example shows a locator that is added by the mentions tokenizer above.

    function locateMention(value, fromIndex) {
      return value.indexOf('@', fromIndex)
    }

    Locators enable inline tokenizers to function faster by providing information on where the next entity may occur. Locators may be wrong; it’s OK if there actually isn’t a node to be found at the index they return.

    Parameters

    • value (string) — Value which may contain an entity
    • fromIndex (number) — Position to start searching at

    Returns

    number — Index at which an entity may start, and -1 otherwise.
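Run standalone, the `locateMention` locator from the example above illustrates that a returned index is only a hint; the tokenizer still verifies the match:

```javascript
// The locator from the example above, runnable on its own.
function locateMention(value, fromIndex) {
  return value.indexOf('@', fromIndex)
}

console.log(locateMention('hi @alice', 0)) // 3
console.log(locateMention('hi @alice', 4)) // -1
console.log(locateMention('a@b', 0)) // 1 (a hint only; the tokenizer rejects it)
```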


    eat(subvalue)

    var add = eat('foo')

    Eat subvalue, which is a string at the start of the tokenized value.

    • subvalue (string) - Value to eat


    add(node[, parent])

    var add = eat('foo')
    add({type: 'text', value: 'foo'})

    Add positional information to node and add node to parent.

    • node (Node) - Node to patch position on and to add
    • parent (Parent, optional) - Place to add node to in the syntax tree. Defaults to the currently processed node

    Node — The given node.


    add.test()

    Get the positional information that would be patched on node by add.



    add.reset(node[, parent])

    add, but resets the internal position. Useful for example in lists, where the same content is first eaten for a list, and later for list items.

    • node (Node) - Node to patch position on and insert
    • parent (Node, optional) - Place to add node to in the syntax tree. Defaults to the currently processed node

    Node — The given node.

    Turning off a tokenizer

    In some situations, you may want to turn off a tokenizer to avoid parsing that syntactic feature.

    Preferably, use the remark-disable-tokenizers plugin to turn off tokenizers.

    Alternatively, this can be done by replacing the tokenizer from blockTokenizers (or blockMethods) or inlineTokenizers (or inlineMethods).

    The following example turns off indented code blocks:

    var remarkParse = require('remark-parse')

    remarkParse.Parser.prototype.blockTokenizers.indentedCode = indentedCode

    function indentedCode() {
      return true
    }


    Security

    As Markdown is sometimes used for HTML, and improper use of HTML can open you up to a cross-site scripting (XSS) attack, use of remark can also be unsafe. When going to HTML, use remark in combination with the rehype ecosystem, and use rehype-sanitize to make the tree safe.

    Use of remark plugins could also open you up to other attacks. Carefully assess each plugin and the risks involved in using them.


    Contribute

    See contributing.md in remarkjs/.github for ways to get started. See support.md for ways to get help. Ideas for new plugins and tools can be posted in remarkjs/ideas.

    A curated list of awesome remark resources can be found in awesome remark.

    This project has a code of conduct. By interacting with this repository, organization, or community you agree to abide by its terms.


    License

    MIT © Titus Wormer

