ssb-hyper-blobs 4.1.0


A scuttlebutt plugin which runs an artefact-server instance, providing hyper/DAT based blob storage.

Example Usage

```js
/* setup */
const caps = require('ssb-caps')
const Config = require('ssb-config/inject')

const stack = require('secret-stack')({ caps })
  .use(require('ssb-hyper-blobs'))

const config = Config('temp', {
  path: '/tmp/ssb-hyper-blobs'
})

const ssb = stack(config)

/* adding a file */
const path = require('path')
const pull = require('pull-stream')
const file = require('pull-file')

pull(
  file(path.resolve(__dirname, './example.png')), // some local file
  ssb.hyperBlobs.add((err, data) => {
    if (err) throw err
    // data = {
    //   driveAddress: 'e8b2557f90b94f559abae254802594034993f9a93da55e5f147e2e870d8f7955',
    //   blobId: '2df0cb34-77e5-40af-99ad-52d85ff67d35',
    //   readKey: '59e9eeb226bd77581d849346b2063eebac518e06e5f3637b75a15e68ce37eaab'
    // }
  })
)

/* reading a file over http (given the data received above) */
const axios = require('axios')

const { driveAddress, blobId, readKey } = data
const fileName = 'example.png' // used to pin the mimeType
const url = `http://localhost:26836/drive/${driveAddress}/blob/${blobId}?readKey=${readKey}&fileName=${fileName}`
// if this was e.g. an image you could use this URL directly in an <img> tag
// NOTE: query.fileName is used to pin the mimeType, which is helpful for some html rendering e.g. <video>

axios.get(url)
  .then(res => {
    console.log(res.data.slice(0, 600), '...')
  })
  .catch(console.error)
```

Behaviour

On plugin init, this module starts two servers:

  1. An http server for serving blobs
  2. A hyper server which handles p2p replication of blobs

When you request a particular blob from the http server, if you already have it locally it will be immediately served. Otherwise, it will be replicated from remote peers (where possible), then served.

When you connect to another scuttlebutt peer, if they also have ssb-hyper-blobs installed, you will make an RPC call on them, registering your hyper driveAddress with them.

Further, all scuttlebutt messages are scanned for references to driveAddresses, and these are registered.

Files are stored in path.join(config.path, 'hyperBlobs')

Pataka mode

A pataka is an always-online peer, with the following differences:

  • it does full replication of drives (in contrast, other peers do sparse replication - only fetch files they want to view)
  • it has no encryption keys for files, so it cannot read anything it has stored / replicated
  • it does not have methods for adding files to its own drive
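Pataka mode is switched on with the hyperBlobs.pataka flag in the config passed at startup. A minimal sketch (only the relevant field shown; everything else falls back to defaults):

```js
// Sketch: the hyperBlobs section of the config passed to secret-stack.
// pataka: true means full, blind replication and no drive of its own.
const config = {
  hyperBlobs: {
    pataka: true
  }
}
```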


Configuration

This module can be configured by modifying the config passed into the secret-stack startup. Here are the default config values if nothing is passed in:

```js
hyperBlobs: {
  port: 26836,
  pataka: false,
  storeOpts: {},
  autoPrune: false
}
```
  • port Number - the port the http server will start on
  • pataka Boolean - whether the ssb instance is running in Pataka mode (see Behaviour above)
  • storeOpts Object - options that can be passed down to the internal artefact-store
  • autoPrune Boolean|Object - enables the prune function to run at a regular interval, removing the oldest files first. It tries to keep the total size of hyperBlobs from remote peers under maxRemoteSize
    • default: false - auto pruning is disabled
    • if true, then default config will be used (see below)
    • if object, you can provide custom config for the following, or leave them empty to use the default values:
      • startDelay: Number (optional) the delay in milliseconds before the first autoPrune run:
        • default: 600000 - 10 minutes
      • intervalTime: Number (optional) the interval in milliseconds to run the autoPrune function
        • default: 3600000 - 1 hour intervals
      • maxRemoteSize: Number (optional) The max total size for hyperBlobs in bytes. When the total size of hyperBlobs exceeds this number, then pruning will take place to prune the excess
        • default: 5 * 1024 * 1024 * 1024 - 5GB
        • note: "remote" because pruning only ever prunes others' blobs, never your own
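As a concrete sketch, custom autoPrune values passed in via config.hyperBlobs might look like this (the numbers are illustrative, not recommendations):

```js
// Illustrative custom autoPrune settings
const config = {
  hyperBlobs: {
    autoPrune: {
      startDelay: 5 * 60 * 1000,        // first run after 5 minutes
      intervalTime: 60 * 60 * 1000,     // then every hour (the default)
      maxRemoteSize: 1024 * 1024 * 1024 // keep remote blobs under ~1GB
    }
  }
}
```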



Runs the callback function cb once the internal ArtefactServer is fully stood up.


Get your driveAddress. This method is available to all peers.

If running in pataka mode (or the server doesn't trust you), this returns null (as patakas do not have their own drive).

ssb.hyperBlobs.add(cb) => sink

Creates a pull-stream sink that works well with e.g. pull-file

  • cb Function - a callback which is run which receives err, data, where data is the info needed to read the blob:
    • driveAddress - the address of the hyper drive
    • blobId
      • an id unique to this blob (within this particular drive)
      • NOT a hash of the content (currently generated by uuid)
    • readKey
      • the encryption key you need if you want to read the contents of the blob

ssb.hyperBlobs.registerDrive(driveAddress, cb)

This method is mainly for use as a remote call by a peer that has connected to you, allowing them to register their drives with you.

ssb.hyperBlobs.prune(opts, cb)

This method prunes files from your store which match the specified constraints.

NOTE: this only removes local copies of files you've replicated from others; files you've added yourself will never be pruned by this function.

  • opts

    • pruneSize: Number (optional) the amount of data to try and prune:

      • default: null, meaning "no limit"
      • if pruning a particular file would take you over the target pruneSize, that file is skipped
    • minSize: Number (optional) limit pruning to files whose size is >= minSize

    • maxSize: Number (optional) limit pruning to files whose size is <= maxSize

      • if omitted, all files above minSize (and satisfying any other constraints) will be pruned
    • minDate: Number|Date (optional) All files that were last accessed between this date and maxDate will be pruned. The default value is 0

    • maxDate: Number|Date (optional) All files that were last accessed between minDate and this date will be pruned. The default value is today's date

    • sort: Function (optional) Sort function to be called on the files before pruning starts

      • after sort, pruning proceeds until pruneSize is reached (or end of list)
      • default: (a, b) => a.atime - b.atime (i.e. sort by access time, oldest first)
      • See the object below for the fields on a file.
  • cb Function - a callback which receives err, data, where data is an array of the files that were pruned, each of the form:

        {
          filename: String, // blobId
          driveAddress: Buffer, // address of the drive the file is on
          // fields similar to those of fs.stat
          dev: Int,
          nlink: Int,
          rdev: Int,
          blksize: Int,
          ino: Int,
          mode: Int,
          uid: Int,
          gid: Int,
          size: Int,
          offset: Int,
          byteOffset: Int,
          blocks: Int,
          atime: Date,
          mtime: Date,
          ctime: Date,
          linkname: String,
          mount: String,
          metadata: Object
        }
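To illustrate the sort option, here is a hypothetical comparator that prunes the largest files first instead of the default oldest-atime-first ordering:

```js
// Hypothetical comparator: largest files first
const bySizeDesc = (a, b) => b.size - a.size

// Illustrative fs.stat-like entries (only the fields used here)
const files = [
  { filename: 'blob-a', size: 1024 },
  { filename: 'blob-b', size: 4096 },
  { filename: 'blob-c', size: 2048 }
]

const order = files.slice().sort(bySizeDesc).map(f => f.filename)
console.log(order) // [ 'blob-b', 'blob-c', 'blob-a' ]
```

Such a comparator would then be passed as opts.sort to ssb.hyperBlobs.prune.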

ssb.hyperBlobs.autoPrune.set(config, cb)

Sets config.hyperBlobs.autoPrune (see above) where config can be either Boolean or { startDelay, intervalTime, maxRemoteSize }.

This function triggers several things:

  • persists this to the {appHome}/config file
  • updates the current state of the autoPrune process
    • stops any running interval
    • if the value is true or { startDelay, intervalTime, maxRemoteSize }, starts up a new autoPrune process


Returns config.hyperBlobs.autoPrune, which is either null (when it is not set) or { startDelay, intervalTime, maxRemoteSize } (when set). See above for the default values of these fields.

GET http://localhost:PORT/drive/:driveAddress/blob/:blobId?readKey=READ_KEY&start=START&end=END&mimeType=MIME&fileName=FILENAME

Path params:

  • driveAddress - address of the drive where the blob is stored (hex)
  • blobId - id of blob within that store


Query params:

  • readKey String (hex)
    • decryption key which allows you to read the content of the blob
    • required (may not be required in future to allow pulling the encrypted blob)
  • start Number (optional)
    • byte offset to begin stream
    • default: 0
  • end Number (optional)
    • byte offset to stop stream
    • default: EOF
  • mimeType String (optional)
    • helps the response declare the correct mimeType
  • fileName String (optional)
    • provide this if you don't have the mimeType; the file extension will be used to try to derive it for the response

PORT is whatever is configured as hyperBlobs.port.
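Assembling such a request with Node's built-in URL API might look like this (the drive/blob/key values are reused from the Example Usage output above; the fileName and byte range are illustrative):

```js
// Build a blob URL against the default port (26836)
const driveAddress = 'e8b2557f90b94f559abae254802594034993f9a93da55e5f147e2e870d8f7955'
const blobId = '2df0cb34-77e5-40af-99ad-52d85ff67d35'

const url = new URL(`/drive/${driveAddress}/blob/${blobId}`, 'http://localhost:26836')
url.searchParams.set('readKey', '59e9eeb226bd77581d849346b2063eebac518e06e5f3637b75a15e68ce37eaab')
url.searchParams.set('fileName', 'example.mp4') // helps pin the mimeType
url.searchParams.set('start', 0)  // stream from the first byte...
url.searchParams.set('end', 1023) // ...to byte 1023 (a 1KB range)

console.log(url.href)
```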


TODO

  • [ ] bound how drives are auto-registered?
    • [ ] only accept RPC registration of drives which are from people you follow / are in a group with
      • this would stop patakas being abused by people connecting who are not "members" of that pataka
    • [ ] only auto-register drives from friends messages?

Install
npm i ssb-hyper-blobs


Maintainers: powersource, arj03, staltz, mixmix, cel, ben-tai, chereseeriepa