it-tar is a streaming tar parser (and maybe a generator in the future) and nothing else. It operates purely using async iterables, which means you can easily extract/parse tarballs without ever hitting the file system.

Package Exports

  • it-tar

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to open an issue on the original package (it-tar) requesting support for the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
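Roughly speaking, an override is a partial package.json that JSPM merges over the published one. The sketch below shows what an override adding an exports field might look like; the "./index.js" entry path is an assumption about the package layout, and the exact override submission process is described in the JSPM documentation.

{
  "exports": {
    ".": "./index.js"
  }
}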

Readme

it-tar

it-tar is a streaming tar parser (and maybe a generator in the future) and nothing else. It operates purely using async iterables, which means you can easily extract/parse tarballs without ever hitting the file system. Note that you still need to gunzip your data if you have a .tar.gz.
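
For example, a .tar.gz can be gunzipped inline before it reaches the parser. A minimal sketch assuming Node's built-in fs and zlib modules plus the it-pipe helper used in the Usage section below (the archive path is illustrative):

const fs = require('fs')
const zlib = require('zlib')
const Tar = require('it-tar')
const pipe = require('it-pipe')

await pipe(
  // A Node.js readable stream is an async iterable, so the gunzipped
  // output can feed Tar.extract() directly
  fs.createReadStream('path/to/archive.tar.gz').pipe(zlib.createGunzip()),
  Tar.extract(),
  async source => {
    for await (const entry of source) {
      console.log(entry.header.name)
      for await (const _ of entry.body) {} // drain each body so parsing continues
    }
  }
)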

Install

npm install it-tar

Usage

it-tar currently only extracts tarballs. Please send a PR to add packing!

It implements USTAR with additional support for pax extended headers. It should be compatible with all popular tar distributions out there (GNU tar, bsdtar, etc.).

Packing

TBD

Extracting

To extract a stream, use Tar.extract() and pipe a source iterable to it.

const Tar = require('it-tar')
const pipe = require('it-pipe')

await pipe(
  source, // An async iterable (for example a Node.js readable stream)
  Tar.extract(),
  async source => {
    for await (const entry of source) {
      // entry.header is the tar header (see below)
      // entry.body is the content body (might be an empty async iterable)
      for await (const data of entry.body) {
        // do something with the data
      }
    }
    // all entries read
  }
)

The tar archive is streamed sequentially, meaning you must drain each entry's body as you get it, or else the main extract stream will receive backpressure and stop reading.
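
For example, a sketch that only processes .txt entries still has to drain every other body so the parser can move on (the filename filter is illustrative; source is any async iterable of tar bytes):

const Tar = require('it-tar')
const pipe = require('it-pipe')

await pipe(
  source,
  Tar.extract(),
  async source => {
    for await (const entry of source) {
      if (entry.header.name.endsWith('.txt')) {
        for await (const data of entry.body) {
          // process only the entries you care about
        }
      } else {
        // drain unwanted bodies, otherwise the parser stalls waiting
        // for this entry to be consumed
        for await (const _ of entry.body) {}
      }
    }
  }
)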

Note that the body stream yields BufferList objects, not Buffers.
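
If you need plain Buffers or a string, convert each chunk yourself. A small sketch (the collect helper below is illustrative, not part of it-tar):

// Collect an entry body into a single Buffer.
// Each chunk is a BufferList, and BufferList#slice() returns a Buffer.
async function collect (body) {
  const buffers = []
  for await (const chunk of body) {
    buffers.push(chunk.slice())
  }
  return Buffer.concat(buffers)
}

// e.g. const text = (await collect(entry.body)).toString('utf8')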

Headers

The header object used in entry should contain the following properties. Most of these values can be found by stat'ing a file.

{
  name: 'path/to/this/entry.txt',
  size: 1314,        // entry size. defaults to 0
  mode: 0o644,       // entry mode. defaults to 0o755 for dirs and 0o644 otherwise
  mtime: new Date(), // last modified date for entry. defaults to now.
  type: 'file',      // type of entry. defaults to file. can be:
                     // file | link | symlink | directory | block-device
                     // character-device | fifo | contiguous-file
  linkname: 'path',  // linked file name
  uid: 0,            // uid of entry owner. defaults to 0
  gid: 0,            // gid of entry owner. defaults to 0
  uname: 'maf',      // uname of entry owner. defaults to null
  gname: 'staff',    // gname of entry owner. defaults to null
  devmajor: 0,       // device major version. defaults to 0
  devminor: 0        // device minor version. defaults to 0
}
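
Packing is not implemented yet, but the same fields appear on entry.header during extraction. The sketch below writes regular files from an archive to disk using the name, type and mode fields; the extractTo helper and its output layout are illustrative, not part of it-tar:

const fs = require('fs')
const path = require('path')
const Tar = require('it-tar')
const pipe = require('it-pipe')

async function extractTo (source, outDir) {
  await pipe(
    source,
    Tar.extract(),
    async source => {
      for await (const { header, body } of source) {
        const target = path.join(outDir, header.name)
        if (header.type === 'directory') {
          await fs.promises.mkdir(target, { recursive: true })
          for await (const _ of body) {} // directory bodies are empty, drain to be safe
        } else if (header.type === 'file') {
          await fs.promises.mkdir(path.dirname(target), { recursive: true })
          const chunks = []
          for await (const chunk of body) chunks.push(chunk.slice())
          await fs.promises.writeFile(target, Buffer.concat(chunks), { mode: header.mode })
        } else {
          for await (const _ of body) {} // drain entry types we don't handle
        }
      }
    }
  )
}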
Related

  • it-pipe Utility to "pipe" async iterables together
  • it-reader Read an exact number of bytes from a binary (async) iterable

Contribute

Feel free to dive in! Open an issue or submit PRs.

License

MIT