Transparently read indexed block-gzipped (BGZF) files, such as those created by bgzip, using coordinates from the uncompressed file. Also provides an `unzip` utility function that properly decompresses BGZF chunks in both node and the browser, using pako when running in the browser and native zlib when running in node. The `unzipChunk` function is another utility that, in addition to decompressing, reports the offsets of the BGZF block boundaries in both compressed and decompressed coordinates.
$ npm install --save @gmod/bgzf-filehandle
```js
const {
  BgzfFilehandle,
  unzip,
  unzipChunk,
  unzipChunkSlice,
} = require('@gmod/bgzf-filehandle')

const f = new BgzfFilehandle({ path: 'path/to/my_file.gz' })
// assumes a .gzi index exists at path/to/my_file.gz.gzi. can also
// pass `gziPath` to set it explicitly. Can also pass filehandles
// for the files: `filehandle` and `gziFilehandle`

// supports a subset of the NodeJS v10 filehandle API. currently
// just read() and stat()
const myBuf = Buffer.alloc(500)
await f.read(myBuf, 0, 500, 0)
// now use the data in the buffer

const { size } = await f.stat() // stat gives the size as if the file were uncompressed

// unzip takes a buffer and returns a promise for a new buffer
const chunkDataBuffer = ... // a Buffer containing BGZF-compressed data
const unzippedBuffer = await unzip(chunkDataBuffer)

// unzipChunk takes a buffer and returns a decompressed buffer plus the offsets
// of the block boundaries in the bgzip file in compressed (cpositions) and
// decompressed (dpositions) coordinates
// you can ignore dpositions/cpositions if your code doesn't care about stable feature IDs
const { buffer, dpositions, cpositions } = await unzipChunk(chunkDataBuffer)

// similar to the above unzipChunk, but takes an extra chunk argument and trims
// off (0, chunk.minv.dataPosition) and (chunk.maxv.dataPosition)
// used especially for generating stable feature IDs across chunk boundaries
// normal unzip or unzipChunk can be used if this is not important
const {
  buffer: sliceBuffer,
  dpositions: sliceDPositions,
  cpositions: sliceCPositions,
} = await unzipChunkSlice(chunkDataBuffer, chunk)
```
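As a sketch of how `cpositions`/`dpositions` enable stable feature IDs: a feature's offset within the decompressed buffer can be mapped back to a BGZF-style virtual offset (compressed block start paired with the offset inside that block), which stays the same no matter which chunk the feature was read in. The helper below is hypothetical, not part of this package's API; it assumes `dpositions` is sorted ascending:

```js
// Hypothetical helper (not part of @gmod/bgzf-filehandle): map an offset in
// the decompressed buffer back to a stable virtual offset identifier.
// dpositions[i] is the decompressed start of block i, and cpositions[i] is
// its compressed (on-disk) start, as returned by unzipChunk/unzipChunkSlice.
function virtualOffset(offset, dpositions, cpositions) {
  // find the last block whose decompressed start is <= offset
  let i = 0
  while (i + 1 < dpositions.length && dpositions[i + 1] <= offset) i++
  const withinBlock = offset - dpositions[i]
  // BGZF virtual offsets pack (compressed block start << 16) | within-block offset
  return cpositions[i] * 65536 + withinBlock
}

// example: two blocks with decompressed starts 0 and 100, compressed starts 0 and 40
console.log(virtualOffset(130, [0, 100], [0, 40])) // prints 2621470 (40 * 65536 + 30)
```

Because the result depends only on where the feature's record starts in the file, it can be used as an ID that is stable across overlapping chunk reads.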
This package was written with funding from the NHGRI as part of the JBrowse project. If you use it in an academic project that you publish, please cite the most recent JBrowse paper, which will be linked from jbrowse.org.
MIT © Robert Buels