Continuous reading from a http(s) url using random offsets and lengths
npm install random-access-http
Peers in a distributed system tend to come and go over a short period of time in many common p2p scenarios, especially when you are giving away a file without incentivizing the swarm to seed the file for a long time. There are also an abundance of free cloud hosts that let you host large files over http.
This module provides you random access to a file hosted over http so that it can be used by a client in a distributed system (such as hypercore or hyperdrive) to acquire parts of the file for itself and the other peers in the swarm.
var randomAccessHTTP = require('random-access-http')
var file = randomAccessHTTP('https://example.com/file.bin')

// Read 10 bytes at an offset of 5
file.read(5, 10, function (err, buffer) {
  // buffer contains the 10 bytes that were read
})
file will use a keepalive agent to reduce the number of HTTP requests needed for the session. When you are done you should call file.close() to destroy the agent.
var file = randomAccessHTTP(url, [options])
Create a new file that reads from the provided url. url can be either an absolute http(s) url, or a relative path if url is set in options.
url: string // Optional. The base url if the first argument is relative
verbose: boolean // Optional. Default: false
timeout: number // Optional. Default: 60000 (ms)
maxRedirects: number // Optional. Default: 10
maxContentLength: number // Optional. Default: 50MB
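A sketch of how the two url forms relate. resolveUrl is a hypothetical helper written for illustration only, not part of the module's API; it just shows how a relative first argument combines with options.url.

```javascript
// Hypothetical helper: resolve the first argument against options.url.
function resolveUrl (urlOrPath, opts) {
  opts = opts || {}
  // Absolute http(s) urls are used as-is...
  if (/^https?:\/\//.test(urlOrPath)) return urlOrPath
  // ...otherwise the argument is treated as a path relative to opts.url.
  return opts.url + urlOrPath
}

console.log(resolveUrl('https://example.com/file.bin'))
// → 'https://example.com/file.bin'
console.log(resolveUrl('/file.bin', { url: 'https://example.com' }))
// → 'https://example.com/file.bin'
```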
file.write(offset, buffer, [callback])
Not implemented! Please let us know if you have opinions on how to implement this. Currently this fails silently with no data being written.
file.read(offset, length, callback)
Read a buffer at a specific offset. The callback is called with the buffer that was read. Currently this will fail if the server returns a byte range different from the one requested. PRs are welcome to make this method flexible enough to handle servers that return fat ranges.
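Since reads are served over HTTP, a read(offset, length) presumably maps to a standard Range request. The sketch below shows that mapping, assuming ordinary inclusive HTTP byte ranges; it is not taken from the module's source.

```javascript
// HTTP byte ranges are inclusive on both ends, so a read of
// `length` bytes starting at `offset` ends at offset + length - 1.
function rangeHeader (offset, length) {
  return 'bytes=' + offset + '-' + (offset + length - 1)
}

console.log(rangeHeader(5, 10)) // → 'bytes=5-14'
```

The off-by-one here is also why a server answering with a "fat" range (more bytes than requested) needs special handling on the client side.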
file.close()
Close outstanding HTTP keepalive agent sockets.
Emitted when the url has been checked to support range requests and the keep-alive agent has been created.
Emitted after the keepalive agent and its associated sockets have been destroyed.