A client for doing FS operations in multiple places in sync.
Current supported targets:
- File system root paths
- Remote unix systems (via ssh)
Planned supported targets:
- Rackspace Cloud Files
- Amazon S3
```javascript
var MultiFS = require("multi-fs")

var client = new MultiFS([
  // FS is the default type
  "/path/to/some/stuff",

  // ssh urls are desugared
  "ssh://user@host:path/in/home",

  // setting special ssh options requires using
  // the full object style, though.
  {
    type: "ssh",
    host: "some-host",
    user: "some-user",
    identity: "/home/.ssh/id_rsa_some_key",
    path: "path/in/home"
  },

  // you can use a variant of the ssh client that does file
  // copies by spawning scp. the options are identical to ssh
  {
    type: "scp",
    host: "some-host",
    user: "some-user",
    identity: "/home/.ssh/id_rsa_some_key",
    path: "path/in/home"
  },
  "scp://user@host:path/in/home"
])

// Paths are not allowed to traverse up past the parent.
```
All methods take `cb` as their last argument.

- `readFile(path, [encoding], cb)`
- `writeFile(path, data, [encoding], cb)`
- `writeFilep(path, data, [encoding], cb)`
Callbacks are called with the following arguments:

- `error`: First error encountered. If no errors are encountered, but the data returned is not consistent, then it will return an `Inconsistent Data` error.
- `result`: The result of the operation. If consistent results are not found, then this will be set to `null`.
- `data`: Errors, results, and extra metadata from all hosts.
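The `(error, result, data)` contract above can be sketched as follows. This is a hypothetical, synchronous stand-in, not the client's actual implementation: `runOnAll` and its `op` parameter are made-up names for illustration.

```javascript
// Hypothetical sketch: run the same operation against every target,
// then report (error, result, data) the way the callbacks describe.
function runOnAll(targets, op, cb) {
  var data = targets.map(op)
  var first = JSON.stringify(data[0])
  var consistent = data.every(function (d) {
    return JSON.stringify(d) === first
  })
  if (consistent) {
    // all targets agree: result is the shared value, error is null
    cb(null, data[0], data)
  } else {
    // targets disagree: error is set, result is null,
    // and data still carries everything each target returned
    cb(new Error("Inconsistent Data"), null, data)
  }
}
```

The key property is that `data` is always populated, even on error, so a caller can inspect which target disagreed.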
For all methods except `readFile`, the operation is performed on all targets, and an `Inconsistent Data` error is raised if they do not return matching results.

For `readFile`, it will call `md5` and compare hashes; then, if the results all match, it will read the actual file from the first client that returned an md5 hash.
Writes with `writeFile` are atomic on all clients. It will write to a temporary file like `foo.txt.TMP.cafef00d` and then rename to `foo.txt` when finished. If the write fails, it makes a best-effort attempt to unlink the temporary file.
Because different systems represent file/directory stats differently, `stat` calls return a simple object with only boolean members as the first argument. The original stat objects from the underlying systems are returned in the `data` argument.
I think it'd be great to have `createReadStream(p, cb)` and `createWriteStream(p, cb)` methods on the client, especially since all the targets (fs, ssh2, etc.) support streams already.
However, especially for readable streams, it's not at all clear how to handle inconsistencies. Right now, `readFile` will raise an `Inconsistent Data` error if two hosts return different stuff. However, with a readable stream, it'd have to check each chunk somehow, and that gets pretty complicated.
Probably that "check multiple streams and make sure they're all producing the same data" thing should be a separate module.
For writable streams, it's a bit easier, since it's just a multiplexing pipe thing, but it hasn't been done at this time.