

7.1.0 • Public • Published


📝 alex — Catch insensitive, inconsiderate writing.


Whether your own or someone else’s writing, alex helps you find gender-favouring, polarising, race-related, religion-inconsiderate, or other unequal phrasing in text.

For example, when `We’ve confirmed his identity` is given to alex, it will warn you and suggest using `their` instead of `his`.

Suggestions, feature requests, and issues are more than welcome!

Give alex a spin on the Online demo ».


  • Helps to get better at considerate writing
  • Catches many possible offences
  • Suggests helpful alternatives
  • Reads plain-text, HTML, and markdown as input
  • Stylish


Using npm (with Node.js):

$ npm install alex --global

Using yarn:

$ yarn global add alex


Command Line

Example of how alex looks on screen

Let’s say example.md looks as follows:

The boogeyman wrote all changes to the **master server**. Thus, the slaves
were read-only copies of master. But not to worry, he was a cripple.

Now, run alex on example.md:

$ alex example.md


   1:5-1:14  warning  `boogeyman` may be insensitive, use `boogey` instead                       boogeyman-boogeywoman  retext-equality
  1:42-1:48  warning  `master` / `slaves` may be insensitive, use `primary` / `replica` instead  master-slave           retext-equality
  1:69-1:75  warning  Don’t use “slaves”, it’s profane                                           slaves                 retext-profanities
  2:52-2:54  warning  `he` may be insensitive, use `they`, `it` instead                          he-she                 retext-equality
  2:61-2:68  warning  `cripple` may be insensitive, use `person with a limp` instead             gimp                   retext-equality
⚠ 5 warnings

See `alex --help` for more information.

When no input files are given to alex, it searches for files in the current directory, doc, and docs. If --html is given, it searches for htm and html extensions. Otherwise, it searches for txt, text, md, mkd, mkdn, mkdown, ron, and markdown extensions.



API

$ npm install alex --save

alex is also available as an AMD, CommonJS, and globals module, uncompressed and compressed.

alex(value, config)

alex.markdown(value, config)

Checks the given value as markdown; syntax such as code spans is ignored (see alex.text() below). The two names work the same:
alex('We’ve confirmed his identity.').messages


[ { [1:17-1:20: `his` may be insensitive, when referring to a person, use `their`, `theirs`, `them` instead]
    message: '`his` may be insensitive, when referring to a person, use `their`, `theirs`, `them` instead',
    name: '1:17-1:20',
    reason: '`his` may be insensitive, when referring to a person, use `their`, `theirs`, `them` instead',
    line: 1,
    column: 17,
    location: { start: [Object], end: [Object] },
    source: 'retext-equality',
    ruleId: 'her-him',
    fatal: false } ]
  • value (VFile or string) — Markdown or plain-text
  • config (Object, optional) — See Configuration section below

Returns a VFile. You’ll probably be interested in its messages property, as demonstrated in the example above, as it holds the possible violations.
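Each entry in messages carries the fields shown above. For illustration, here is how a consumer could format one of those objects into a CLI-style line; this is a sketch, not alex’s actual reporter:

```javascript
// One message object, copied from the shape shown above.
const message = {
  name: '1:17-1:20',
  reason: '`his` may be insensitive, when referring to a person, use `their`, `theirs`, `them` instead',
  source: 'retext-equality',
  ruleId: 'her-him',
  fatal: false
}

// `fatal: false` means a warning; `true` would mean an error.
const label = message.fatal ? 'error' : 'warning'

console.log([message.name, label, message.reason, message.ruleId, message.source].join('  '))
```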

alex.html(value, config)

Works just like alex() and alex.text(), but parses the value as HTML and checks the text inside its tags.

alex.html('<p class="black">He walked to class.</p>').messages


[ { [1:18-1:20: `He` may be insensitive, use `They`, `It` instead]
    message: '`He` may be insensitive, use `They`, `It` instead',
    name: '1:18-1:20',
    reason: '`He` may be insensitive, use `They`, `It` instead',
    line: 1,
    column: 18,
    location: { start: [Object], end: [Object] },
    source: 'retext-equality',
    ruleId: 'he-she',
    fatal: false } ]
  • value (VFile or string) — HTML content
  • config (Object, optional) — See Configuration section below

alex.text(value, config)

Works just like alex(), but does not parse the value as markdown (so things like code spans are not ignored).

alex('The `boogeyman`.').messages // => []
alex.text('The `boogeyman`.').messages


[ { [1:6-1:15: `boogeyman` may be insensitive, use `boogey` instead]
    message: '`boogeyman` may be insensitive, use `boogey` instead',
    name: '1:6-1:15',
    reason: '`boogeyman` may be insensitive, use `boogey` instead',
    line: 1,
    column: 6,
    location: Position { start: [Object], end: [Object] },
    source: 'retext-equality',
    ruleId: 'boogeyman-boogeywoman',
    fatal: false } ]
  • value (VFile or string) — Text content
  • config (Object, optional) — See Configuration section below



alex checks for many patterns of the English language and generates warnings for:

  • Gendered work-titles, for example warning about garbageman and suggesting garbage collector instead
  • Gendered proverbs, such as warning about like a man and suggesting bravely instead, or suggesting courteous for ladylike.
  • Blunt phrases, such as warning about cripple and suggesting person with a limp instead
  • Intolerant phrasing, such as warning about using master and slave together, and suggesting primary and replica instead
  • Profanities, the least of which being butt

See retext-equality and retext-profanities for all checked rules.

alex ignores words meant literally, so “he”, He — ..., and the like are not warned about.

Ignoring files

When given directories, the alex CLI searches for files with a markdown or text extension (e.g., $ alex . will find readme.md and foo/bar/baz.txt). To prevent files from being found, add a file named .alexignore in the current working directory or one of its ancestors. The format of these files is similar to .eslintignore (which is in turn similar to .gitignore files).

For example, when working in ~/alpha/bravo/charlie, the ignore file can be in charlie, but also in ~.
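A hypothetical .alexignore, using .gitignore-style patterns (the paths here are just examples):

```
# Don't check vendored or generated files.
vendor/
CHANGELOG.md
```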

The ignore file for this project itself looks as follows:

# `node_modules` is ignored by default.


Sometimes, alex makes mistakes:

A message for this sentence will pop up.


  1:15-1:18  warning  `pop` may be insensitive, use `parent` instead  dad-mom  retext-equality
⚠ 1 warning

alex can silence messages through HTML comments in markdown:

<!--alex ignore dad-mom-->
A message for this sentence will **not** pop up.


readme.md: no issues found

ignore turns off messages for the thing after the comment (in this case, the paragraph). It’s also possible to turn off messages after a comment by using disable, and to turn those messages back on using enable:

<!--alex disable dad-mom-->
A message for this sentence will **not** pop up.
A message for this sentence will also **not** pop up.
Yet another sentence where a message will **not** pop up.
<!--alex enable dad-mom-->
A message for this sentence will pop up.


  9:15-9:18  warning  `pop` may be insensitive, use `parent` instead  dad-mom  retext-equality
⚠ 1 warning

Multiple messages can be controlled in one go:

<!--alex disable he-her his-hers dad-mom-->

...and all messages can be controlled by omitting all rule identifiers:

<!--alex ignore-->


Ignoring messages

alex can silence messages through .alexrc configuration:

{
  "allow": ["boogeyman-boogeywoman"]
}

...or the alex field in package.json:

"alex": {
  "allow": ["butt"]
}

The allow field is expected to be an array of rule identifier strings.
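Conceptually, an allow list suppresses messages by comparing rule identifiers. A sketch of that idea, not alex’s internal implementation:

```javascript
// Two messages in the shape alex reports (ruleId identifies the rule).
const messages = [
  {ruleId: 'boogeyman-boogeywoman', reason: '`boogeyman` may be insensitive'},
  {ruleId: 'he-she', reason: '`he` may be insensitive'}
]
const allow = ['boogeyman-boogeywoman']

// Messages whose ruleId is allowed are dropped; the rest are reported.
const remaining = messages.filter((message) => !allow.includes(message.ruleId))

console.log(remaining.map((message) => message.ruleId)) // [ 'he-she' ]
```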

All allow fields in all package.json and .alexrc files are detected and used when processing.

Next to allow, noBinary can also be passed. Setting it to true counts pairs such as he and she, or garbageman and garbagewoman, as errors, whereas the default (false) treats them as OK.
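Putting both fields together, a complete .alexrc could look like this (the allowed rules here are just examples):

```json
{
  "allow": ["boogeyman-boogeywoman", "butt"],
  "noBinary": true
}
```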

Configuring Profanities

The profanity checker in alex can be configured to define the level of “sureness” to warn for. The underlying library uses cuss, which has a dictionary of words rated between 0 and 2 for how “sure” it is that the word is a profanity. Here is the table from the cuss documentation:

| Rating | Use as a profanity | Use in clean text | Example |
| ------ | ------------------ | ----------------- | ------- |
| 2      | likely             | unlikely          | asshat  |
| 1      | maybe              | maybe             | addict  |
| 0      | unlikely           | likely            | beaver  |

You can define what level of profanity you want alex to warn for in the .alexrc configuration:

{
  "profanitySureness": 1
}

...or the alex field in package.json:

"alex": {
  "profanitySureness": 1
}

The profanitySureness field is a number: the minimum sureness level to warn for. For example, if you set it to 1 then alex will warn for level 1 (maybe) and level 2 (likely) profanities, but not for level 0 (unlikely).
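The threshold amounts to a simple comparison against each word’s sureness rating. A sketch with hypothetical ratings (not cuss’s real data shape):

```javascript
// Hypothetical sureness ratings, mirroring the table above.
const ratings = {asshat: 2, addict: 1, beaver: 0}
const profanitySureness = 1

// Warn for every word rated at or above the configured level.
const flagged = Object.keys(ratings).filter((word) => ratings[word] >= profanitySureness)

console.log(flagged) // [ 'asshat', 'addict' ]
```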


The recommended workflow is to add alex to package.json and to run it with your tests in Travis.

You can opt to ignore warnings through .alexrc files and control comments.

A package.json file with npm scripts, and additionally using AVA for unit tests, could look as follows:

{
  "scripts": {
    "test-api": "ava",
    "test-doc": "alex",
    "test": "npm run test-api && npm run test-doc"
  },
  "devDependencies": {
    "alex": "^1.0.0",
    "ava": "^0.1.0"
  }
}

Alternatively, if you’re using Travis to test, set up something like the following in your .travis.yml:

 - npm test
+- alex --diff

Make sure to still install alex though!

If the --diff flag is used, and Travis is detected, unchanged lines are ignored. Using this workflow, you can merge PRs with warnings, and not be bothered by them afterwards.


Why is this named alex?

It’s a nice androgynous/unisex name, it was free on npm, I like it! 😄

Alex didn’t check “X”!

See contributing.md on how to get “X” checked by alex.


alex is built by people just like you! Check out contributing.md for ways to get started.

This project has a Code of Conduct. By interacting with this repository or community you agree to abide by its terms.


MIT © Titus Wormer

