JSPM

  • Downloads 1986159
  • License Unlicense

🤖 detect bots/crawlers/spiders via the user agent.

Package Exports

  • isbot

Readme

isbot 🤖/👨‍🦰

Detect bots/crawlers/spiders using the user agent string.

Usage

import isbot from 'isbot'

// Node.js HTTP
isbot(request.headers['user-agent'])

// ExpressJS
isbot(req.get('user-agent'))

// Browser
isbot(navigator.userAgent)

// User Agent string
isbot('Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A5376e Safari/8536.25 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)') // true
isbot('Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36') // false
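
As a fuller illustration, here is a minimal runnable sketch of a Node.js HTTP server branching on the return value; the port and response text are arbitrary choices, not part of the isbot API:

import http from 'node:http'
import isbot from 'isbot'

http.createServer((request, response) => {
  // Incoming request headers are exposed as lowercase keys
  const bot = isbot(request.headers['user-agent'] || '')
  response.end(bot ? 'Hello, bot' : 'Hello, human')
}).listen(3000)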

Definitions

  • Bot. An autonomous program that imitates or replaces some aspect of human behaviour, performing repetitive tasks much faster than human users could.
  • Good bot. An automated program that visits websites to collect useful information. Web crawlers, site scrapers, stress testers, preview builders and other such programs are welcome on most websites because they serve a mutually beneficial purpose.
  • Bad bot. A program designed to perform malicious actions that ultimately hurt businesses: credential stuffing, DDoS attacks, spam bots.

Clarifications

What does "isbot" do?

This package aims to identify "good bots": those that voluntarily identify themselves by setting a unique, preferably descriptive, User-Agent request header.
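
For example, a well-behaved crawler might announce itself with a descriptive user agent that isbot then recognises. A minimal sketch; the crawler name and URL below are hypothetical:

import isbot from 'isbot'

// A hypothetical good bot announcing itself in its user agent
const crawlerUA = 'ExampleCrawler/1.0 (+https://example.com/bot.html)'

isbot(crawlerUA) // true (matched by a generic crawler pattern)
isbot('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36') // false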

What doesn't "isbot" do?

It does not try to recognise malicious bots or programs disguising themselves as real users.

Why would I want to identify good bots?

Recognising good bots such as web crawlers is useful for multiple purposes. Although it is not recommended to serve different content to web crawlers like Googlebot, you can still elect to

  • Flag pageviews to consider with business analysis.
  • Prefer to serve cached content and relieve service load.
  • Omit third party solutions' code (tags, pixels) and reduce costs.

    It is not recommended to whitelist requests based on the user agent header alone. Instead, add other methods of identification, such as a reverse DNS lookup (see the sketch below).
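
A hedged sketch of how these suggestions might combine: an Express-style middleware (assumed here, not part of isbot) that flags the pageview and serves cached content to bots, plus a helper that verifies a claimed Googlebot by reverse DNS with a forward-confirming lookup. The cache, route handling, and hostname patterns are illustrative assumptions:

import dns from 'node:dns/promises'
import isbot from 'isbot'

// Illustrative in-memory cache; a real deployment would use a proper store
const cache = new Map()

export function botAware(req, res, next) {
  const bot = isbot(req.get('user-agent') || '')
  res.locals.isBot = bot // flag the pageview for later business analysis

  if (bot && cache.has(req.path)) {
    // Relieve service load by answering bots from cache
    return res.send(cache.get(req.path))
  }
  next()
}

// Verify a claimed Googlebot, since the user agent header alone can be forged
export async function isVerifiedGooglebot(ip) {
  try {
    const [hostname] = await dns.reverse(ip)
    if (!/\.(googlebot|google)\.com$/.test(hostname)) return false
    // Forward-confirm: the hostname must resolve back to the same IP
    const { address } = await dns.lookup(hostname)
    return address === ip
  } catch {
    return false
  }
}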

Data sources

We use external data sources on top of our own lists to stay up to date.

Crawler user agents:

Non-bot user agents:

Missing something? Please open an issue.

Major releases (view changelog)

Version 4

TL;DR

  • Remove additional functions other than "isbot" (extend, exclude, find)
  • Remove official support for node < 12

Version 3

TL;DR

  • Remove testing on node 6 and 8

Version 2

TL;DR

  • Change return value for isbot: true instead of matched string

Version 1

TL;DR

  • No functional change