Package Exports
- ipfs-unixfs-importer
- ipfs-unixfs-importer/src/dir-sharded
- ipfs-unixfs-importer/src/utils/persist
This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (ipfs-unixfs-importer) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
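For illustration, a minimal sketch of the "exports" field such an override could declare in package.json. The file paths below are assumptions inferred from the subpaths listed above, not the package's actual layout:

{
  "name": "ipfs-unixfs-importer",
  "exports": {
    ".": "./src/index.js",
    "./src/dir-sharded": "./src/dir-sharded.js",
    "./src/utils/persist": "./src/utils/persist.js"
  }
}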
Readme
ipfs-unixfs-importer
JavaScript implementation of the layout and chunking mechanisms used by IPFS to handle Files
Install
> npm install ipfs-unixfs-importer

Usage
Example
Let's create a little directory to import:
> cd /tmp
> mkdir foo
> echo 'hello' > foo/bar
> echo 'world' > foo/quux

And write the importing logic:
const importer = require('ipfs-unixfs-importer')
const fs = require('fs')

// Import the paths /tmp/foo/bar and /tmp/foo/quux
const source = [{
  path: '/tmp/foo/bar',
  content: fs.createReadStream('/tmp/foo/bar')
}, {
  path: '/tmp/foo/quux',
  content: fs.createReadStream('/tmp/foo/quux')
}]

// You need to create and pass an ipld-resolver instance
// https://github.com/ipld/js-ipld-resolver
for await (const entry of importer(source, ipld, options)) {
  console.info(entry)
}

When run, metadata about each DAGNode in the created tree is printed, ending with the root:
{
cid: CID, // see https://github.com/multiformats/js-cid
path: 'tmp/foo/bar',
unixfs: UnixFS // see https://github.com/ipfs/js-ipfs-unixfs
}
{
cid: CID, // see https://github.com/multiformats/js-cid
path: 'tmp/foo/quux',
unixfs: UnixFS // see https://github.com/ipfs/js-ipfs-unixfs
}
{
cid: CID, // see https://github.com/multiformats/js-cid
path: 'tmp/foo',
unixfs: UnixFS // see https://github.com/ipfs/js-ipfs-unixfs
}
{
cid: CID, // see https://github.com/multiformats/js-cid
path: 'tmp',
unixfs: UnixFS // see https://github.com/ipfs/js-ipfs-unixfs
}

API
const importer = require('ipfs-unixfs-importer')

const entries = importer(source, ipld [, options])

The importer function returns an async iterator. It takes a source async iterator that yields objects of the form:
{
path: 'a name',
content: (Buffer or iterator emitting Buffers)
}

importer will output file info objects as files get stored in IPFS. When stats on a node are emitted they are guaranteed to have been written.
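content can be a Buffer or an iterator emitting Buffers, so a source does not have to come from the filesystem. A minimal sketch, assuming async iterators of Buffers are accepted as content; the file name and generator below are hypothetical:

const { Buffer } = require('buffer')

// Hypothetical generator yielding a file's bytes in chunks
async function * fileContents () {
  yield Buffer.from('hello ')
  yield Buffer.from('world')
}

const source = [{
  path: 'hello.txt',
  content: fileContents()
}]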
ipld is an instance of the IPLD Resolver or the js-ipfs dag api
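A rough sketch of wiring up an IPLD Resolver instance to pass in. The module names are real, but the exact constructor options and repo setup steps vary between versions, so verify this against the js-ipld-resolver docs:

const Ipld = require('ipld')
const IpfsRepo = require('ipfs-repo')
const IpfsBlockService = require('ipfs-block-service')

// Assumes a repo at this path that has already been initialized
const repo = new IpfsRepo('/tmp/unixfs-importer-repo')
await repo.open()

const blockService = new IpfsBlockService(repo)
const ipld = new Ipld({ blockService })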
The input's file paths and directory structure will be preserved in the created dag-pb nodes.
options is a JavaScript object that may include the following keys (see the example call after this list):
- wrap (boolean, defaults to false): if true, a wrapping node will be created
- shardSplitThreshold (positive integer, defaults to 1000): the number of directory entries above which we decide to use a sharding directory builder (instead of the default flat one)
- chunker (string, defaults to "fixed"): the chunking strategy. Supports: fixed, rabin
- chunkerOptions (object, optional): the options for the chunker. Defaults to an object with the following properties:
  - avgChunkSize (positive integer, defaults to 262144): the average chunk size (rabin chunker only)
  - minChunkSize (positive integer): the minimum chunk size (rabin chunker only)
  - maxChunkSize (positive integer, defaults to 262144): the maximum chunk size
- strategy (string, defaults to "balanced"): the DAG builder strategy name. Supports:
  - flat: flat list of chunks
  - balanced: builds a balanced tree
  - trickle: builds a trickle tree
- maxChildrenPerNode (positive integer, defaults to 174): the maximum children per node for the balanced and trickle DAG builder strategies
- layerRepeat (positive integer, defaults to 4): (only applicable to the trickle DAG builder strategy) the maximum repetition of parent nodes for each layer of the tree
- reduceSingleLeafToSelf (boolean, defaults to true): optimization for when reducing a set of nodes with one node, reduce it to that node
- dirBuilder (object): the options for the directory builder
  - hamt (object): the options for the HAMT sharded directory builder
    - bits (positive integer, defaults to 8): the number of bits at each bucket of the HAMT
- progress (function): a function that will be called with the byte length of chunks as a file is added to ipfs
- onlyHash (boolean, defaults to false): only chunk and hash, do not write to disk
- hashAlg (string): multihash hashing algorithm to use
- cidVersion (integer, defaults to 0): the CID version to use when storing the data (storage keys are based on the CID, including its version)
- rawLeaves (boolean, defaults to false): when a file would span multiple DAGNodes, if this is true the leaf nodes will not be wrapped in UnixFS protobufs and will instead contain the raw file bytes
- leafType (string, defaults to 'file'): what type of UnixFS node leaves should be; can be 'file' or 'raw' (ignored when rawLeaves is true)
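Putting several of these options together, a hedged example call; the option values here are illustrative, not recommendations:

const entries = importer(source, ipld, {
  chunker: 'rabin',
  chunkerOptions: {
    avgChunkSize: 262144,
    maxChunkSize: 524288
  },
  strategy: 'trickle',
  rawLeaves: true,
  cidVersion: 1
})

for await (const entry of entries) {
  console.info(entry.path, entry.cid.toString())
}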
Contribute
Feel free to join in. All welcome. Open an issue!
This repository falls under the IPFS Code of Conduct.
