
# bench-rest - benchmark REST APIs

Node.js client module for easy load testing / benchmarking REST (HTTP/HTTPS) APIs. Using a simple structure/DSL you can create REST flows with setup and teardown, and it returns measured metrics.



## Installation

Requires Node.js >= 0.8

  # If using programmatically
  npm install bench-rest

  # OR possibly with -g option if planning to use from command line
  npm install -g bench-rest

## Programmatic Usage

A simple single GET flow performing 100 iterations with 10 concurrent connections:

  var benchrest = require('bench-rest');
  var flow = {
    main: [{ get: 'http://localhost:8000/' }]  // could be an array of REST operations
  };
  var runOptions = {
    limit: 10,     // concurrent connections
    iterations: 100  // number of iterations to perform
  };
  benchrest(flow, runOptions)
    .on('error', function (err, ctxName) { console.error('Failed in %s with err: ', ctxName, err); })
    .on('end', function (stats, errorCount) {
      console.log('error count: ', errorCount);
      console.log('stats', stats);
    });

See the Detailed Usage section below for more details.

## Command-line usage

  # if installed with -g
  bench-rest --help

  # otherwise use from node_modules
  node_modules/.bin/bench-rest --help

Outputs

  Usage: bench-rest [options] <flow-js-path>

  Options:

    -h, --help                  output usage information
    -V, --version               output the version number
    -n --iterations <integer>   Number of iterations to run, defaults to 1
    -c --concurrency <integer>  Concurrent operations, defaults to 10
    -u --user <username>        User for basic authentication, default no auth
    -p --password <password>    Password for basic authentication

Typical use would be as follows:

  bench-rest -n 1000 -c 50 ./examples/simple.js

which would output something like:

Benchmarking 1000 iteration(s) using up to 50 concurrent connections

Using flow from: /Users/barczewskij/projects/bench-rest/examples/simple.js
 { main: [ { get: 'http://localhost:8000/' } ] }

errors:  0
stats:  { totalElapsed: 894,
  main:
   { meter:
      { mean: 1240.6947890818858,
        count: 1000,
        currentRate: 1240.6947890818858,
        '1MinuteRate': 0,
        '5MinuteRate': 0,
        '15MinuteRate': 0 },
     histogram:
      { min: 4,
        max: 89,
        sum: 41603,
        variance: 242.0954864864864,
        mean: 41.603,
        stddev: 15.55941793533699,
        count: 1000,
        median: 42,
        p75: 50,
        p95: 70.94999999999993,
        p99: 81.99000000000001,
        p999: 88.99900000000002 } } }

It takes one required parameter: the path to a Node.js file which exports a REST flow. For example:

  var flow = {
    main: [{ get: 'http://localhost:8000/' }]  // could be an array of REST operations
  };

  module.exports = flow;

Example flows can be found in the examples directory.

See Detailed Usage for more details on creating more advanced REST flows.

## Goals

  • Easy to create REST (HTTP/HTTPS) flows for benchmarking
  • Generate good concurrency (at least 8K concurrent connections for a single process on Mac OS X)
  • Obtain metrics from the runs with average, total, min, max, histogram, req/s
  • Allow iterations to vary easily using token substitution
  • Run programmatically so it can be used with a CI server
  • Flow can have setup and teardown operations for startup and shutdown as well as for each iteration
  • Automatically handles cookies separately for each iteration
  • Automatically follows redirects for operations
  • Errors will automatically stop an iteration's flow and be tracked
  • Easy use and handling of etags
  • Allows pre/post processing or verification of data

## Detailed Usage

An advanced flow with setup/teardown and multiple steps to benchmark in each iteration:

  var benchrest = require('bench-rest');
  var flow = {
    before: [],      // operations to do before anything
    beforeMain: [],  // operations to do before each iteration
    main: [  // the main flow for each iteration, #{INDEX} is unique iteration counter token
      { put: 'http://localhost:8000/foo_#{INDEX}', json: 'mydata_#{INDEX}' },
      { get: 'http://localhost:8000/foo_#{INDEX}' }
    ],
    afterMain: [{ del: 'http://localhost:8000/foo_#{INDEX}' }],   // operations to do after each iteration
    after: []        // operations to do after everything is done
  };
  var runOptions = {
    limit: 10,     // concurrent connections
    iterations: 100  // number of iterations to perform
  };
  benchrest(flow, runOptions)
    .on('error', function (err, ctxName) { console.error('Failed in %s with err: ', ctxName, err); })
    .on('end', function (stats, errorCount) {
      console.log('error count: ', errorCount);
      console.log('stats', stats);
    });

### Returns EventEmitter

The main function from require('bench-rest') returns a Node.js EventEmitter instance when called with the flow and runOptions. This event emitter emits the following events:

  • error - emitted as an error occurs during a run. It emits parameters err and ctxName matching where the error occurred (main, before, beforeMain, after, afterMain)
  • end - emitted when the benchmark run has finished (successfully or otherwise). It emits parameters stats and errorCount (discussed below).

#### Stats (metrics) and errorCount benchmark results

The stats is a measured data object and the errorCount is a count of the errors encountered. Time is reported in milliseconds. See measured for a complete description of all the properties: https://github.com/felixge/node-measured

The stats.main will be the meter data for the main benchmark flow operations (not including the beforeMain and afterMain operations).

stats.totalElapsed is the elapsed time in milliseconds for the entire run, including all setup and teardown operations.

The output of the above run will look something like:

error count:  0
stats {
  totalElapsed: 151,
  main:
   { meter:
      { mean: 1190.4761904761904,
        count: 100,
        currentRate: 1190.4761904761904,
        '1MinuteRate': 0,
        '5MinuteRate': 0,
        '15MinuteRate': 0 },
     histogram:
      { min: 3,
        max: 66,
        sum: 985,
        variance: 43.502525252525245,
        mean: 9.85,
        stddev: 6.595644415258091,
        count: 100,
        median: 8.5,
        p75: 11,
        p95: 17,
        p99: 65.53999999999976,
        p999: 66 } } }
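These numbers can also be consumed programmatically in the end handler rather than just printed. A minimal sketch (the summarize helper below is not part of bench-rest) over a stats object shaped like the sample output above:

```javascript
// Pull the headline metrics out of a bench-rest stats object
// (shape taken from the sample output above).
function summarize(stats) {
  var h = stats.main.histogram;
  return {
    reqPerSec: stats.main.meter.mean,  // mean requests/sec over the run
    medianMs: h.median,                // median response time in ms
    p95Ms: h.p95,                      // 95th percentile response time
    totalMs: stats.totalElapsed        // whole run incl. setup/teardown
  };
}

// e.g. with a cut-down version of the sample run above
var sample = {
  totalElapsed: 151,
  main: { meter: { mean: 1190.48 }, histogram: { median: 8.5, p95: 17 } }
};
console.log(summarize(sample));
// { reqPerSec: 1190.48, medianMs: 8.5, p95Ms: 17, totalMs: 151 }
```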

### Run options

The runOptions object can have the following properties which govern the benchmark run:

  • limit - required number of concurrent operations to limit at any given time
  • iterations - required number of flow iterations to perform on the main flow (as well as beforeMain and afterMain setup/teardown operations)
  • user - optional user to be used for basic authentication
  • password - optional password to be used for basic authentication
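Putting the four options together (the credentials below are placeholder values for illustration):

```javascript
// All four run options; user/password may be omitted entirely
// when the API needs no authentication.
var runOptions = {
  limit: 50,          // at most 50 concurrent operations at any time
  iterations: 1000,   // run the main flow 1000 times
  user: 'benchuser',  // optional basic auth user (placeholder)
  password: 'secret'  // optional basic auth password (placeholder)
};
```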

### REST Operations in the flow

The REST operations to be performed, either as part of the main flow or for setup and teardown, are configured using the following flow properties.

Each array of operations will be performed in series, one after another, unless an error is hit. The afterMain and after operations will be performed regardless of any errors encountered in the flow.

  var flow = {
    before: [],      // REST operations to perform before anything starts
    beforeMain: [],  // REST operations to perform before each iteration
    main: [],        // REST operations to perform for each iteration
    afterMain: [],   // REST operations to perform after each iteration
    after: []        // REST operations to perform after everything is finished
  };

Each operation specifies an HTTP method (get, post, put, del, head) mapped to its URI, and can also carry properties such as json data, beforeHooks, and afterHooks (described below).
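For illustration, here is a single operation combining several properties used throughout this README (the HTTP method key with its URI, a json body, and hooks):

```javascript
// Illustrative operation combining properties shown elsewhere in this
// README; the URI and data values are placeholders.
var operation = {
  put: 'http://localhost:8000/foo_#{INDEX}', // method key -> URI
  json: { name: 'mydata_#{INDEX}' },         // body sent as JSON (tokens substituted)
  beforeHooks: ['useEtag'],                  // run before the request
  afterHooks: ['saveEtag']                   // run after the response
};
```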

### Token substitution for iteration operations

To make REST flow iterations independent of each other, one often wants unique URLs and unique data. One easy way to achieve this is to include special tokens in the uri, json, or data.

Currently the token(s) replaced in the uri, json, or data are:

  • #{INDEX} - replaced with the zero based counter/index of the iteration

Note: for the json property, the json object is JSON.stringified, tokens are substituted, and then it is JSON.parsed back to an object, so that tokens are substituted anywhere in the structure.
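That round trip can be sketched as follows (an illustration, not bench-rest's internal code):

```javascript
// stringify -> substitute -> parse, so tokens are replaced at any
// depth of the json body, not just in top-level string values.
function substituteTokens(json, index) {
  var str = JSON.stringify(json).replace(/#\{INDEX\}/g, String(index));
  return JSON.parse(str);
}

var body = { name: 'user_#{INDEX}', nested: { id: '#{INDEX}' } };
console.log(substituteTokens(body, 7));
// { name: 'user_7', nested: { id: '7' } }
```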

### Pre/post operation processing

If an array of hooks is specified in an operation as beforeHooks and/or afterHooks, then these synchronous operations will be done before/after the REST operation.

Built-in processing filters can be referred to by name using a string, while custom filters can be provided as a function, e.g.:

  // This causes the HEAD operation to use a previously saved etag if found for this
  // URI, setting the If-None-Match header with it, and then to ignore any failing
  // status code on the response
  { head: 'http://localhost:8000', beforeHooks: ['useEtag'], afterHooks: ['ignoreStatus'] }

The list of current built-in beforeHooks:

  • useEtag - if an etag had been previously saved for this URI with the saveEtag afterHook, then set the appropriate header (for GET/HEAD, If-None-Match; otherwise If-Match). If it was not previously saved or is empty, then no header is set.

The list of current built-in afterHooks:

  • saveEtag - an afterHook which causes an etag to be saved into an object cache specific to this iteration, stored by URI. If the etag was the result of a POST operation and a Location header was provided, then the URI from the Location header will be used.
  • ignoreStatus - an afterHook to use if an operation could return an error code that you want to ignore and always continue anyway. Failing status codes are those greater than or equal to 400. Normal operation is to terminate an iteration if there is a failing status code in any before, beforeMain, or main operation.

To create a custom beforeHook or afterHook, write a synchronous function that accepts an all object and returns the same (possibly modified) object. To exit the flow, an exception can be thrown, which will be caught and emitted.

So a verification function could be written as such:

  var assert = require('assert');

  function verifyData(all) {
    if (all.err) return all; // errored so just return and it will error as normal
    assert.equal(all.response.statusCode, 200);
    assert.equal(all.body, 'foobarbaz');
    return all;
  }
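A custom function hook is then attached to an operation the same way as the built-in string-named hooks, and the two styles can be mixed (verifyData below is a stand-in stub for the function above):

```javascript
// Stand-in for the verification hook defined above.
function verifyData(all) { return all; }

var flow = {
  main: [
    // function hooks and built-in string-named hooks can be mixed
    { get: 'http://localhost:8000/foo_#{INDEX}',
      afterHooks: [verifyData, 'saveEtag'] }
  ]
};
```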

The properties available on the all object are:

  • all.env.index - the zero based counter for this iteration, same as what is used for #{INDEX}
  • all.env.jar - the cookie jar
  • all.env.user - basic auth user if provided
  • all.env.password - basic auth password if provided
  • all.env.etags - object of etags saved by URI
  • all.opIndex - zero based index for the operation in the array of operations, i.e. the first operation in the main flow will have an opIndex of 0
  • all.action.requestOptions - the options that will be used for the request
  • all.err - not empty if an error has occurred
  • all.cb - the cb that will be called when done
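As an illustration of using these properties, here is a hypothetical custom beforeHook that tags each request with its iteration index via a header (the X-Iteration header name is made up for this example, not part of bench-rest):

```javascript
// Hypothetical beforeHook: add the iteration index as a request header
// so the server can correlate requests with iterations.
function addIterationHeader(all) {
  if (all.err) return all; // pass errors through untouched
  var opts = all.action.requestOptions;
  opts.headers = opts.headers || {};
  opts.headers['X-Iteration'] = String(all.env.index);
  return all;
}

// quick check with a stubbed-out all object
var stub = { env: { index: 4 }, action: { requestOptions: {} } };
console.log(addIterationHeader(stub).action.requestOptions.headers);
// { 'X-Iteration': '4' }
```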

## Why create this project?

It is important to understand how well your architecture performs, and how performance is impacted by each change to the system. The best way to know this is to benchmark your system with each major change.

Benchmarking also lets you:

  • understand how your system will act under load
  • see how and whether multiple servers or processes will help you scale
  • determine whether a new feature improved or hurt performance
  • predict the need to add instances or throttle load before your servers reach overload

After trying a variety of load testing clients and modules for benchmarking, I found none that really met all of my goals. Most clients can only benchmark a single operation, not a whole flow, and not one with setup and teardown.

Building your own is certainly an option, but it gets tedious to do all the necessary setup and error handling just to achieve a simple flow, and thus this project was born.

## Tuning OS

Each OS may need some tweaking of the configuration to be able to generate or receive a large number of concurrent connections.

### Mac OS X

Mac OS X can be tweaked using the following parameters. This configuration allowed about 8K concurrent connections for a single process.

  sysctl -a | grep maxfiles  # display maxfiles and maxfilesperproc, defaults 12288 and 10240
  sudo sysctl -w kern.maxfiles=25000
  sudo sysctl -w kern.maxfilesperproc=24500
  sysctl -a | grep somax # display max socket setting, default 128
  sudo sysctl -w kern.ipc.somaxconn=20000  # set
  ulimit -S -n       # display soft max open files, default 256
  ulimit -H -n       # display hard max open files, default unlimited
  ulimit -S -n 20000  # set soft max open files

## Key modules leveraged

## Get involved

If you have input or ideas or would like to get involved, you may:

## License - MIT