@spider-cloud/spider-client
Includes built-in TypeScript type declarations.

0.0.21 • Public • Published

Spider Cloud JavaScript SDK

The Spider Cloud JavaScript SDK provides a streamlined set of tools for web scraping and crawling, with comprehensive data extraction suited to feeding AI language models. The SDK makes it easy to interact with the Spider Cloud API programmatically from any JavaScript or Node.js application.

Installation

You can install the Spider Cloud JavaScript SDK via npm:

npm install @spider-cloud/spider-client

Or with yarn:

yarn add @spider-cloud/spider-client

Configuration

Before using the SDK, you will need to provide it with your API key. Obtain an API key from spider.cloud and either pass it directly to the constructor or set it as an environment variable SPIDER_API_KEY.
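That precedence — an explicit constructor key first, then the SPIDER_API_KEY environment variable — can be sketched as follows. This is a minimal illustration only; resolveApiKey is a hypothetical helper, not an SDK export:

```typescript
// Hypothetical sketch of API key resolution: an explicit key passed to
// the constructor wins, otherwise SPIDER_API_KEY from the environment
// is used. `resolveApiKey` is illustrative, not part of the SDK.
function resolveApiKey(explicitKey?: string): string {
  const key = explicitKey ?? process.env.SPIDER_API_KEY;
  if (!key) {
    throw new Error(
      "No API key found: pass { apiKey } to the constructor or set SPIDER_API_KEY"
    );
  }
  return key;
}

process.env.SPIDER_API_KEY = "env-key";
console.log(resolveApiKey("explicit-key")); // "explicit-key"
console.log(resolveApiKey()); // "env-key"
```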

Usage

Here's a basic example to demonstrate how to use the SDK:

import { Spider } from "@spider-cloud/spider-client";

// Initialize the SDK with your API key
const app = new Spider({ apiKey: "YOUR_API_KEY" });

// Scrape a URL
const url = "https://spider.cloud";
app
  .scrapeUrl(url)
  .then((data) => {
    console.log("Scraped Data:", data);
  })
  .catch((error) => {
    console.error("Scrape Error:", error);
  });

// Crawl a website
const crawlParams = {
  limit: 5,
  proxy_enabled: true,
  store_data: false,
  metadata: false,
  request: "http",
};
app
  .crawlUrl(url, crawlParams)
  .then((result) => {
    console.log("Crawl Result:", result);
  })
  .catch((error) => {
    console.error("Crawl Error:", error);
  });
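The same calls read more naturally with async/await. The client below is a local stub standing in for new Spider({ apiKey }) so the sketch is self-contained and runs without network access; with the real SDK you would await app.scrapeUrl(url) inside try/catch the same way:

```typescript
// async/await variant of the promise chains above. StubSpider is a
// self-contained stand-in for the real Spider client, so this sketch
// needs no API key or network access.
class StubSpider {
  async scrapeUrl(url: string): Promise<Array<{ url: string; content: string }>> {
    return [{ url, content: "<html>…</html>" }];
  }
}

async function scrapeWithHandling(url: string) {
  const app = new StubSpider();
  try {
    const data = await app.scrapeUrl(url);
    console.log("Scraped Data:", data);
    return data;
  } catch (error) {
    console.error("Scrape Error:", error);
    return null;
  }
}

scrapeWithHandling("https://spider.cloud");
```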

Data Operations

The Spider client can interact with specific data tables to create, retrieve, and delete data.

Retrieve Data from a Table

To fetch data from a specified table by applying query parameters, use the getData method. Provide the table name and an object containing query parameters:

const tableName = "pages";
const queryParams = { limit: 20 };
app
  .getData(tableName, queryParams)
  .then((response) => console.log(response))
  .catch((error) => console.error(error));

This example retrieves data from the 'pages' table, limiting the results to 20 entries.
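Query parameters such as limit are typically serialized onto the request URL. A minimal sketch of that serialization — note the /data/<table> path here is an assumed placeholder, not a documented API route:

```typescript
// Hypothetical illustration of query-parameter serialization; the
// "/data/<table>" path is an assumption, not the documented endpoint.
function buildDataUrl(table: string, params: Record<string, string | number>): string {
  const url = new URL(`https://api.spider.cloud/data/${table}`);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, String(value));
  }
  return url.toString();
}

console.log(buildDataUrl("pages", { limit: 20 }));
// "https://api.spider.cloud/data/pages?limit=20"
```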

Delete Data from a Table

To delete data from a specified table based on certain conditions, use the deleteData method. Provide the table name and an object specifying the conditions for deletion:

const tableName = "websites";
const deleteParams = { domain: "www.example.com" };
app
  .deleteData(tableName, deleteParams)
  .then((response) => console.log(response))
  .catch((error) => console.error(error));

Available Methods

  • scrapeUrl(url, params): Scrape data from a specified URL. Optional parameters can be passed to customize the scraping behavior.
  • crawlUrl(url, params, stream): Begin crawling from a specific URL with optional parameters for customization and an optional streaming response.
  • links(url, params): Retrieve all links from the specified URL with optional parameters.
  • screenshot(url, params): Take a screenshot of the specified URL.
  • extractContacts(url, params): Extract contact information from the specified URL.
  • label(url, params): Apply labeling to data extracted from the specified URL.
  • getCrawlState(url, params): Check the website crawl state.
  • getCredits(): Retrieve the account's remaining credits.
  • getData(table, params): Retrieve data records from the DB.
  • deleteData(table, params): Delete records from the DB.
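Putting the list together, a rough TypeScript shape for the client might look like the following. The method names come from the list above, but the parameter and return types are assumptions — the authoritative signatures ship in the package's own type declarations:

```typescript
// Assumed shape only: method names are from the list above; parameter
// and return types are guesses. Consult the package's bundled .d.ts
// files for the real signatures.
type Params = Record<string, unknown>;

interface SpiderClient {
  scrapeUrl(url: string, params?: Params): Promise<unknown>;
  crawlUrl(url: string, params?: Params, stream?: boolean): Promise<unknown>;
  links(url: string, params?: Params): Promise<unknown>;
  screenshot(url: string, params?: Params): Promise<unknown>;
  extractContacts(url: string, params?: Params): Promise<unknown>;
  label(url: string, params?: Params): Promise<unknown>;
  getCrawlState(url: string, params?: Params): Promise<unknown>;
  getCredits(): Promise<unknown>;
  getData(table: string, params?: Params): Promise<unknown>;
  deleteData(table: string, params?: Params): Promise<unknown>;
}
```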

Error Handling

The SDK provides robust error handling and will throw exceptions when it encounters critical issues. Always use .catch() on promises to handle these errors gracefully.

Contributing

Contributions are always welcome! Feel free to open an issue or submit a pull request on our GitHub repository.

License

The Spider Cloud JavaScript SDK is open-source and released under the MIT License.

Collaborators

  • jeffmendez