undici
An HTTP/1.1 client, written from scratch for Node.js.
Undici means eleven in Italian. 1.1 -> 11 -> Eleven -> Undici. It is also a Stranger Things reference.
Install
npm i undici
Benchmarks
Machine: 2.8GHz AMD EPYC 7402P
Configuration: Node v14.4, HTTP/1.1 without TLS, 100 connections, Linux 5.4.12-1-lts
http - keepalive x 5,521 ops/sec ±3.37% (73 runs sampled)
undici - pipeline x 9,292 ops/sec ±4.28% (79 runs sampled)
undici - request x 11,949 ops/sec ±0.99% (85 runs sampled)
undici - stream x 12,223 ops/sec ±0.76% (85 runs sampled)
The benchmark is a simple hello world example.
API
new undici.Client(url, opts)
A basic HTTP/1.1 client, mapped on top of a single TCP/TLS connection. Keepalive is enabled by default, and it cannot be turned off.
url can be a string or a URL object. It should only include the protocol, hostname, and the port.
Options:
- socketTimeout, the timeout after which a socket will time out, in milliseconds. Monitors time between activity on a connected socket. Use 0 to disable it entirely. Default: 30e3 milliseconds (30s).
- socketPath, an IPC endpoint, either Unix domain socket or Windows named pipe. Default: null.
- requestTimeout, the timeout after which a request will time out, in milliseconds. Monitors time between request being enqueued and receiving a response. Use 0 to disable it entirely. Default: 30e3 milliseconds (30s).
- maxAbortedPayload, the maximum number of bytes read after which an aborted response will close the connection. Closing the connection will error other inflight requests in the pipeline. Default: 1048576 bytes (1MiB).
- pipelining, the amount of concurrent requests to be sent over the single TCP/TLS connection according to RFC7230. Default: 1.
- tls, an options object which in the case of https will be passed to tls.connect. Default: null.
- maxHeaderSize, the maximum length of request headers in bytes. Default: 16384 (16KiB).
- headersTimeout, the amount of time the parser will wait to receive the complete HTTP headers (Node 14 and above only). Default: 30e3 milliseconds (30s).
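A minimal sketch of constructing a client with a few of these options; the values below are illustrative, not recommendations:

const { Client } = require('undici')

// Illustrative option values; adjust to your own workload.
const client = new Client('http://localhost:3000', {
  socketTimeout: 60e3,    // allow 60s of socket inactivity
  requestTimeout: 10e3,   // fail requests that take longer than 10s
  maxHeaderSize: 16384    // 16KiB header limit (the documented default)
})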
client.request(opts, callback(err, data))
Performs an HTTP request.
Options:
- path
- method
- body, it can be a String, a Buffer, a Uint8Array or a stream.Readable.
- headers, an object with header-value pairs.
- signal, either an AbortController or an EventEmitter.
- requestTimeout, the timeout after which a request will time out, in milliseconds. Monitors time between request being enqueued and receiving a response. Use 0 to disable it entirely. Default: 30e3 milliseconds (30s).
- idempotent, whether the requests can be safely retried or not. If false the request won't be sent until all preceding requests in the pipeline have completed. Default: true if method is HEAD or GET.
Headers are represented by an object like this:
{
'content-length': '123',
'content-type': 'text/plain',
connection: 'keep-alive',
host: 'mysite.com',
accept: '*/*'
}
Keys are lowercased. Values are not modified.
If you don't specify a host header, it will be derived from the url of the client instance.
The data parameter in callback is defined as follows:
- statusCode
- headers
- body, a stream.Readable with the body to read. A user must either fully consume or destroy the body unless there is an error, or no further requests will be processed.

headers is an object where all keys have been lowercased.
Returns a promise if no callback is provided.
Example:
const { Client } = require('undici')
const client = new Client(`http://localhost:3000`)
client.request({
path: '/',
method: 'GET'
}, function (err, data) {
if (err) {
// handle this in some way!
return
}
const {
statusCode,
headers,
body
} = data
console.log('response received', statusCode)
console.log('headers', headers)
body.setEncoding('utf8')
body.on('data', console.log)
client.close()
})
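Since request returns a promise when no callback is provided, the same example can be written with async/await (a sketch of the promise form):

const { Client } = require('undici')

async function main () {
  const client = new Client('http://localhost:3000')

  const { statusCode, headers, body } = await client.request({
    path: '/',
    method: 'GET'
  })

  console.log('response received', statusCode)
  console.log('headers', headers)

  body.setEncoding('utf8')
  // The body must be fully consumed (or destroyed), otherwise no
  // further requests will be processed.
  for await (const chunk of body) {
    console.log(chunk)
  }

  await client.close()
}

main().catch(console.error)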
Non-idempotent requests will not be pipelined in order to avoid indirect failures.
Idempotent requests will be automatically retried if they fail due to indirect failure from the request at the head of the pipeline. This does not apply to idempotent requests with a stream request body.
Aborting a request
A request may be aborted using either an AbortController or an EventEmitter. To use AbortController, you will need to npm i abort-controller.
const { AbortController } = require('abort-controller')
const { Client } = require('undici')
const client = new Client('http://localhost:3000')
const abortController = new AbortController()
client.request({
path: '/',
method: 'GET',
signal: abortController.signal
}, function (err, data) {
console.log(err) // RequestAbortedError
client.close()
})
abortController.abort()
Alternatively, any EventEmitter that emits an 'abort' event may be used as an abort controller:
const EventEmitter = require('events')
const { Client } = require('undici')
const client = new Client('http://localhost:3000')
const ee = new EventEmitter()
client.request({
path: '/',
method: 'GET',
signal: ee
}, function (err, data) {
console.log(err) // RequestAbortedError
client.close()
})
ee.emit('abort')
Destroying the request or response body will have the same effect.
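For example, destroying a stream passed as the request body aborts the request much like an abort signal would (a sketch; the exact error surfaced may differ):

const { Client } = require('undici')
const { Readable } = require('stream')

const client = new Client('http://localhost:3000')

// A request body stream that never ends on its own.
const body = new Readable({ read () {} })

client.request({
  path: '/upload',
  method: 'POST',
  body
}, function (err, data) {
  // The request errors, just as if it had been aborted via a signal.
  console.log(err)
  client.close()
})

// Destroying the request body aborts the request.
body.destroy()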
client.stream(opts, factory(data), callback(err))
A faster version of request.

Unlike request, this method expects factory to return a Writable which the response will be written to. This improves performance by avoiding the creation of an intermediate Readable when the user expects to directly pipe the response body to a Writable.
Options:
- ... same as client.request(opts, callback).
- opaque, passed as opaque to factory. Used to avoid creating a closure.

The data parameter in factory is defined as follows:
- statusCode
- headers
- opaque

headers is an object where all keys have been lowercased.
Returns a promise if no callback is provided.
const { Client } = require('undici')
const client = new Client(`http://localhost:3000`)
const fs = require('fs')

// `filename` is an example destination for the response body.
const filename = 'response.out'
client.stream({
path: '/',
method: 'GET',
opaque: filename
}, ({ statusCode, headers, opaque: filename }) => {
console.log('response received', statusCode)
console.log('headers', headers)
return fs.createWriteStream(filename)
}, (err) => {
if (err) {
console.error('failure', err)
} else {
console.log('success')
}
})
opaque makes it possible to avoid creating a closure for the factory method:
function (req, res) {
return client.stream({ ...opts, opaque: res }, proxy)
}
Instead of:
function (req, res) {
  return client.stream(opts, (data) => {
    // Creates closure to capture `res`.
    proxy({ ...data, opaque: res })
  })
}
client.pipeline(opts, handler(data))
For easy use with stream.pipeline.
Options:
- ... same as client.request(opts, callback).
- objectMode, true if the handler will return an object stream.
- opaque, passed as opaque to handler. Used to avoid creating a closure.

The data parameter in handler is defined as follows:
- statusCode
- headers
- opaque
- body, a stream.Readable with the body to read. A user must either fully consume or destroy the body unless there is an error, or no further requests will be processed.
handler should return a Readable from which the result will be read. Usually it should just return the body argument unless some kind of transformation needs to be performed based on e.g. headers or statusCode.

headers is an object where all keys have been lowercased.

The handler should validate the response and save any required state. If there is an error it should be thrown.

Returns a Duplex which writes to the request and reads from the response.
const { Client } = require('undici')
const client = new Client(`http://localhost:3000`)
const fs = require('fs')
const { pipeline } = require('stream')

// Note: isZipped and unzip are placeholder helpers, not part of undici.
pipeline(
fs.createReadStream('source.raw'),
client.pipeline({
path: '/',
method: 'PUT',
}, ({ statusCode, headers, body }) => {
if (statusCode !== 201) {
throw new Error('invalid response')
}
if (isZipped(headers)) {
return pipeline(body, unzip(), () => {})
}
return body
}),
fs.createWriteStream('response.raw'),
(err) => {
if (err) {
console.error('failed')
} else {
console.log('succeeded')
}
}
)
client.close([callback])
Closes the client and gracefully waits for enqueued requests to complete before invoking the callback.
Returns a promise if no callback is provided.
client.destroy([err][, callback])
Destroy the client abruptly with the given err. All the pending and running requests will be asynchronously aborted and error. Waits until the socket is closed before invoking the callback. Since this operation is asynchronously dispatched there might still be some progress on dispatched requests.
Returns a promise if no callback is provided.
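A brief sketch contrasting graceful and abrupt shutdown, using the promise form of both methods:

const { Client } = require('undici')

const client = new Client('http://localhost:3000')

async function shutdown (graceful) {
  if (graceful) {
    // Waits for enqueued requests to complete before resolving.
    await client.close()
  } else {
    // Aborts pending and running requests with the given error.
    await client.destroy(new Error('shutting down'))
  }
}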
client.pipelining
Property to get and set the pipelining factor.
client.pending
Number of queued requests.
client.running
Number of inflight requests.
client.size
Number of pending and running requests.
client.connected
True if the client has an active connection. The client will lazily create a connection when it receives a request and will destroy it if there is no activity for the duration of the timeout value.
client.busy
True if the pipeline is saturated or blocked. Indicates whether dispatching further requests is meaningful.
client.closed
True after client.close()
has been called.
client.destroyed
True after client.destroyed()
has been called or client.close()
has been
called and the client shutdown has completed.
Events
- 'connect', emitted when a socket has been created and connected. The client will connect once client.size > 0.
- 'disconnect', emitted when the socket has disconnected. The first argument of the event is the error which caused the socket to disconnect. The client will reconnect if, or once, client.size > 0.
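A sketch of observing the connection lifecycle through these events:

const { Client } = require('undici')

const client = new Client('http://localhost:3000')

client.on('connect', () => {
  console.log('socket connected')
})

client.on('disconnect', (err) => {
  // `err` is the error which caused the socket to disconnect.
  console.log('socket disconnected', err)
})

// The client connects lazily, once there is at least one request queued.
client.request({ path: '/', method: 'GET' }, (err, data) => {
  if (err) return console.error(err)
  data.body.resume()
  client.close()
})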
new undici.Pool(url, opts)
A pool of Client instances connected to the same upstream target.
Options:
- ... same as Client.
- connections, the number of clients to create. Default: 100.
pool.request(opts, callback)
Calls client.request(opts, callback) on one of the clients.
pool.stream(opts, factory, callback)
Calls client.stream(opts, factory, callback) on one of the clients.
pool.pipeline(opts, handler)
Calls client.pipeline(opts, handler) on one of the clients.
pool.close([callback])
Calls client.close(callback) on all the clients.
pool.destroy([err][, callback])
Calls client.destroy(err, callback) on all the clients.
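A minimal sketch of using a Pool; its methods mirror the Client API described above (the connection count is illustrative):

const { Pool } = require('undici')

// Illustrative connection count.
const pool = new Pool('http://localhost:3000', {
  connections: 10
})

pool.request({
  path: '/',
  method: 'GET'
}, function (err, data) {
  if (err) {
    console.error(err)
    return
  }
  // Consume the body so further requests can be processed.
  data.body.resume()
  pool.close()
})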
undici.errors
Undici exposes a variety of error objects that you can use to enhance your error handling.
You can find all the error objects inside the errors key.
const { errors } = require('undici')
| Error | Error Codes | Description |
|---|---|---|
| InvalidArgumentError | UND_ERR_INVALID_ARG | passed an invalid argument. |
| InvalidReturnValueError | UND_ERR_INVALID_RETURN_VALUE | returned an invalid value. |
| SocketTimeoutError | UND_ERR_SOCKET_TIMEOUT | a socket exceeds the socketTimeout option. |
| RequestTimeoutError | UND_ERR_REQUEST_TIMEOUT | a request exceeds the requestTimeout option. |
| RequestAbortedError | UND_ERR_ABORTED | the request has been aborted by the user |
| ClientDestroyedError | UND_ERR_DESTROYED | trying to use a destroyed client. |
| ClientClosedError | UND_ERR_CLOSED | trying to use a closed client. |
| SocketError | UND_ERR_SOCKET | there is an error with the socket. |
| NotSupportedError | UND_ERR_NOT_SUPPORTED | encountered unsupported functionality. |
| ContentLengthMismatchError | UND_ERR_CONTENT_LENGTH_MISMATCH | body does not match content-length header |
| InformationalError | UND_ERR_INFO | expected error with reason |
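A sketch of checking for a specific error class (and its code) when a request fails:

const { Client, errors } = require('undici')

const client = new Client('http://localhost:3000')

client.request({
  path: '/',
  method: 'GET',
  requestTimeout: 1000 // illustrative, 1s
}, function (err, data) {
  if (err instanceof errors.RequestTimeoutError) {
    // The corresponding code is 'UND_ERR_REQUEST_TIMEOUT'.
    console.error('request timed out', err.code)
    return
  }
  if (err) {
    console.error('request failed', err)
    return
  }
  data.body.resume()
  client.close()
})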
Specification Compliance
This section documents parts of the HTTP/1.1 specification which Undici does not support or does not fully implement.
Informational Responses
Undici does not support 1xx informational responses and will either ignore or error them.
Expect
Undici does not support the Expect request header field. The request body is always immediately sent and the 100 Continue response will be ignored.
Refs: https://tools.ietf.org/html/rfc7231#section-5.1.1
Switching Protocols
Undici does not support the Upgrade request header field. A 101 Switching Protocols response will cause an UND_ERR_NOT_SUPPORTED error.
Refs: https://tools.ietf.org/html/rfc7230#section-6.7
Hints
Undici does not support early hints. A 103 Early Hint response will be ignored.
Refs: https://tools.ietf.org/html/rfc8297
Trailer
Undici does not support the Trailer response header field. Any response trailer headers will be ignored.
Refs: https://tools.ietf.org/html/rfc7230#section-4.4
CONNECT
Undici does not support the HTTP CONNECT method. Dispatching a CONNECT request will cause an UND_ERR_NOT_SUPPORTED error.
Refs: https://tools.ietf.org/html/rfc7231#section-4.3.6
Pipelining
Undici will only use pipelining if configured with a pipelining factor greater than 1.
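For example, pipelining stays disabled with the default factor of 1 and is opted into per client (a minimal sketch):

const { Client } = require('undici')

// pipelining > 1 opts this connection in to HTTP/1.1 pipelining.
const client = new Client('http://localhost:3000', {
  pipelining: 4
})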
Undici always assumes that connections are persistent and will immediately pipeline requests, without checking whether the connection is persistent. Hence, automatic fallback to HTTP/1.0 or HTTP/1.1 without pipelining is not supported.

Undici will immediately pipeline when retrying requests after a failed connection. However, Undici will not retry the first remaining requests in the prior pipeline and will instead error the corresponding callback/promise/stream.
Refs: https://tools.ietf.org/html/rfc2616#section-8.1.2.2
Refs: https://tools.ietf.org/html/rfc7230#section-6.3.2
Collaborators
License
MIT