
    @google-cloud/bigquery-storage

    2.5.0 • Public • Published


    Google BigQuery Storage: Node.js Client


    Client for the BigQuery Storage API

    A comprehensive list of changes in each version may be found in the CHANGELOG.

    Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.

    Table of contents:

    • Quickstart
    • Samples
    • Versioning
    • Contributing
    • License

    Quickstart

    Before you begin

    1. Select or create a Cloud Platform project.
    2. Enable billing for your project.
    3. Enable the Google BigQuery Storage API.
    4. Set up authentication with a service account so you can access the API from your local workstation (for example, by pointing the GOOGLE_APPLICATION_CREDENTIALS environment variable at your service account key file).

    Installing the client library

    npm install @google-cloud/bigquery-storage

    Using the client library

    // The read stream contains blocks of Avro-encoded bytes. We use the
    // 'avsc' library to decode these blocks. Install avsc with the following
    // command: npm install avsc
    const avro = require('avsc');
    
    // See reference documentation at
    // https://cloud.google.com/bigquery/docs/reference/storage
    const {BigQueryReadClient} = require('@google-cloud/bigquery-storage');
    
    const client = new BigQueryReadClient();
    
    async function bigqueryStorageQuickstart() {
      // Get current project ID. The read session is created in this project.
      // This project can be different from that which contains the table.
      const myProjectId = await client.getProjectId();
    
      // This example reads baby name data from the public datasets.
      const projectId = 'bigquery-public-data';
      const datasetId = 'usa_names';
      const tableId = 'usa_1910_current';
    
      const tableReference = `projects/${projectId}/datasets/${datasetId}/tables/${tableId}`;
    
      const parent = `projects/${myProjectId}`;
    
      /* We limit the output columns to a subset of those allowed in the table,
       * and set a simple filter to only report names from the state of
       * Washington (WA).
       */
      const readOptions = {
        selectedFields: ['name', 'number', 'state'],
        rowRestriction: 'state = "WA"',
      };
    
      let tableModifiers = null;
      const snapshotSeconds = 0;
    
      // Set a snapshot time if it's been specified.
      if (snapshotSeconds > 0) {
        tableModifiers = {snapshotTime: {seconds: snapshotSeconds}};
      }
    
      // API request.
      const request = {
        parent,
        readSession: {
          table: tableReference,
          // This API can also deliver data serialized in Apache Arrow format.
          // This example leverages Apache Avro.
          dataFormat: 'AVRO',
          readOptions,
          tableModifiers,
        },
      };
    
      const [session] = await client.createReadSession(request);
    
      const schema = JSON.parse(session.avroSchema.schema);
    
      const avroType = avro.Type.forSchema(schema);
    
      /* The offset requested must be less than the last
       * row read from ReadRows. Requesting a larger offset is
       * undefined.
       */
      let offset = 0;
    
      const readRowsRequest = {
        // Required stream name and optional offset. Offset requested must be less than
        // the last row read from readRows(). Requesting a larger offset is undefined.
        readStream: session.streams[0].name,
        offset,
      };
    
      const names = new Set();
      const states = [];
    
      /* We'll use only a single stream for reading data from the table. Because
       * of dynamic sharding, this will yield all the rows in the table. However,
       * if you wanted to fan out multiple readers you could do so by having a
       * reader process each individual stream.
       */
      client
        .readRows(readRowsRequest)
        .on('error', console.error)
        .on('data', data => {
          offset = data.avroRows.serializedBinaryRows.offset;
    
          try {
            // Decode all rows in buffer
            let pos;
            do {
              const decodedData = avroType.decode(
                data.avroRows.serializedBinaryRows,
                pos
              );
    
              if (decodedData.value) {
                names.add(decodedData.value.name);
    
                if (!states.includes(decodedData.value.state)) {
                  states.push(decodedData.value.state);
                }
              }
    
              pos = decodedData.offset;
            } while (pos > 0);
          } catch (error) {
            console.log(error);
          }
        })
        .on('end', () => {
          console.log(`Got ${names.size} unique names in states: ${states}`);
          console.log(`Last offset: ${offset}`);
        });
    }
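    The do/while loop above keeps calling avroType.decode until no complete record remains in the buffer. The same drain-the-buffer pattern can be sketched in isolation with a hypothetical length-prefixed codec standing in for avsc; encodeRecords and decode below are illustrative stand-ins (not part of any library), but decode mimics the {value, offset} result shape used in the quickstart, with offset = -1 when decoding cannot advance:

```javascript
// Minimal stand-in for the buffer-decode loop in the quickstart.
// Records are length-prefixed UTF-8 strings; decode() returns
// {value, offset} like the quickstart expects, with offset = -1
// when no complete record remains at the given position.
function encodeRecords(strings) {
  return Buffer.concat(
    strings.map(s => {
      const body = Buffer.from(s, 'utf8');
      const len = Buffer.alloc(1);
      len.writeUInt8(body.length);
      return Buffer.concat([len, body]);
    })
  );
}

function decode(buf, pos = 0) {
  if (pos >= buf.length) return {value: undefined, offset: -1};
  const len = buf.readUInt8(pos);
  const end = pos + 1 + len;
  if (end > buf.length) return {value: undefined, offset: -1};
  return {value: buf.toString('utf8', pos + 1, end), offset: end};
}

// Drain every record in the buffer, exactly like the quickstart's do/while.
const buf = encodeRecords(['Liam', 'Emma', 'Noah']);
const names = [];
let pos = 0;
do {
  const decoded = decode(buf, pos);
  if (decoded.value !== undefined) names.push(decoded.value);
  pos = decoded.offset;
} while (pos > 0);

console.log(names.join(','));
```

    The key invariant is that each iteration resumes at the offset returned by the previous one, and the loop exits only when the decoder reports it cannot advance.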

    Samples

    Samples are in the samples/ directory. Each sample's README.md has instructions for running its sample.

    | Sample | Source Code | Try it |
    | --- | --- | --- |
    | BigQuery Storage Quickstart | source code | Open in Cloud Shell |

    The Google BigQuery Storage Node.js Client API Reference documentation also contains samples.

    Supported Node.js Versions

    Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js.

    Client libraries targeting some end-of-life versions of Node.js are available, and can be installed via npm dist-tags. The dist-tags follow the naming convention legacy-(version).

    Legacy Node.js versions are supported on a best-effort basis:

    • Legacy versions will not be tested in continuous integration.
    • Some security patches may not be able to be backported.
    • Dependencies will not be kept up-to-date, and features will not be backported.

    Legacy tags available

    • legacy-8: install client libraries from this dist-tag for versions compatible with Node.js 8, e.g. npm install @google-cloud/bigquery-storage@legacy-8.

    Versioning

    This library follows Semantic Versioning.

    This library is considered to be General Availability (GA). This means it is stable; the API surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against GA libraries are addressed with the highest priority.

    More Information: Google Cloud Platform Launch Stages

    Contributing

    Contributions welcome! See the Contributing Guide.

    Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in this directory.

    License

    Apache Version 2.0

    See LICENSE
