Spidra SDKs and integrations: Node, Python, Go, PHP, Ruby, Rust, n8n, and more

April 23, 2026 · 7 min read

Joel Olawanle
Spidra now has SDKs for Node.js, Python, Go, PHP, Ruby, Rust, and Java, along with an n8n integration for workflows.

They all sit on top of the same API, so you’re not dealing with different systems depending on what you use. The way scraping works, how jobs are created, and how results come back all stay consistent. The only difference is how you interact with it in your own environment.

This post is a straightforward overview of what’s available today, how each option fits, and where you’d actually use them. Let’s go through them one by one.

Node.js SDK

If you’re working in JavaScript or TypeScript, the Node.js SDK lets you use Spidra directly inside your app without having to manually handle requests or job polling.

Install it:

npm install spidra

Set it up with your API key:

import Spidra from "spidra";

const spidra = new Spidra({
  apiKey: process.env.SPIDRA_API_KEY,
});

From there, you can run a scrape by passing a URL and a prompt describing what you want to extract:

const job = await spidra.scrape({
  url: "https://example.com/pricing",
  prompt: "Extract all pricing plans with name and price",
});

console.log(job);

The SDK takes care of running the job and returning the result, so you don’t have to think about job states or polling. You just describe the data you want, and you get it back in a structured form.

If you’re working with more complex pages, you can still do things like interact with the page before extracting data (clicking, waiting, scrolling), or define a schema so the output comes back in a consistent shape. And if you need more control, you can submit jobs separately and fetch the results later.
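If you do take the manual route, the submit-then-poll loop can be sketched like this. This is a minimal sketch, not the documented SDK surface: `getJob`, the job id parameter, and the `"completed"`/`"failed"` status strings are all assumptions made for illustration.

```javascript
// Minimal polling sketch. `getJob` stands in for whatever SDK call
// fetches a job by id; the status strings are assumed for illustration.
async function waitForJob(getJob, jobId, { intervalMs = 1000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await getJob(jobId);
    if (job.status === "completed") return job; // done: hand back the full job
    if (job.status === "failed") {
      throw new Error(`Job ${jobId} failed`); // surface failures early
    }
    await new Promise((r) => setTimeout(r, intervalMs)); // wait before polling again
  }
  throw new Error(`Job ${jobId} did not finish after ${maxAttempts} polls`);
}
```

In real code you would pass the SDK's own job-fetching call as `getJob` and pick an interval that matches how long your scrapes usually take.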

You can find more examples and full usage in the Node.js SDK docs.

Python SDK

If you’re working in Python, the SDK fits well into scripts, data workflows, or backend services where you want to pull structured data from the web without building scraping logic yourself.

Install it:

pip install spidra

Set it up:

import os

from spidra import Spidra

spidra = Spidra(api_key=os.environ["SPIDRA_API_KEY"])

Run a scrape:

job = spidra.scrape(
    url="https://example.com/pricing",
    prompt="Extract all pricing plans with name and price"
)

print(job)

The flow is straightforward. You pass a URL and a prompt, and you get structured data back. It works well for things like pulling data into scripts, feeding pipelines, or preparing datasets without adding a lot of scraping-specific code.

You can also define a schema if you need consistent output, or handle more complex pages by interacting with them before extracting data. And like the other SDKs, you can switch to manual job handling if you want to run things asynchronously in the background.

See the Python SDK docs for more examples and full usage.

Go SDK

The Go SDK fits well if you’re building backend services or systems where you want something fast, simple, and predictable.

Install it:

go get github.com/spidra-io/spidra-go

Set it up:

package main

import (
    "fmt"
    "log"
    "os"

    "github.com/spidra-io/spidra-go"
)

func main() {
    client := spidra.NewClient(os.Getenv("SPIDRA_API_KEY"))

    job, err := client.Scrape(spidra.ScrapeRequest{
        URL:    "https://example.com/pricing",
        Prompt: "Extract all pricing plans with name and price",
    })
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println(job)
}

The flow is similar. You send a request with a URL and a prompt, and you get structured data back. It works well in APIs, background workers, or services where you want to pull data from the web and pass it downstream.

For more advanced cases, you can still handle jobs manually, run batch requests, or plug it into existing pipelines depending on how your system is structured.

See the Go SDK docs for more examples and full usage.

PHP SDK

If you’re working in PHP, the SDK makes it easier to bring Spidra into an existing application without having to build around the API yourself.

Install it with Composer:

composer require spidra/spidra-php

Set it up with your API key:

<?php

require 'vendor/autoload.php';

use Spidra\Spidra;

$spidra = new Spidra($_ENV['SPIDRA_API_KEY']);

Run a scrape:

<?php

$result = $spidra->scrape([
   'url' => 'https://example.com/pricing',
   'prompt' => 'Extract all pricing plans with name and price',
]);

print_r($result);

This fits nicely into PHP apps where you want to pull structured data from a page and use it inside your own system. It can work well in content workflows, internal tools, backend logic, or anything else where you already have a PHP application doing the rest of the work.

If you need more than a basic scrape, the SDK also gives you room to work with more advanced extraction flows depending on what you’re building.

See the PHP SDK docs for more examples and full usage.

Ruby SDK

If you’re working in Ruby, the SDK fits nicely into Rails apps or any Ruby-based workflow where you want to pull structured data without adding a lot of extra setup.

Install it:

gem install spidra

Or add it to your Gemfile:

gem "spidra"

Set it up:

require "spidra"

spidra = Spidra.new(api_key: ENV["SPIDRA_API_KEY"])

Run a scrape:

job = spidra.scrape(
  url: "https://example.com/pricing",
  prompt: "Extract all pricing plans with name and price"
)

puts job

This works well inside Rails apps, background jobs, or scripts where you want to fetch and use structured data as part of your existing flow.

You can keep it simple like this, or build on top of it depending on how your application is structured.

See the Ruby SDK docs for more examples and full usage.

Rust SDK

If you’re working in Rust, the SDK is useful when you want tighter control, performance, or you’re building systems where reliability and type safety matter more.

Add it to your project:

[dependencies]
spidra = "0.1"

Set it up:

use spidra::Spidra;

#[tokio::main]
async fn main() {
    let client = Spidra::new(std::env::var("SPIDRA_API_KEY").expect("SPIDRA_API_KEY is not set"));

   let job = client.scrape(
       "https://example.com/pricing",
       "Extract all pricing plans with name and price"
   ).await;

   println!("{:?}", job);
}

The flow is still straightforward. You pass a URL and a prompt, and you get structured data back. It fits well in services, data pipelines, or systems where you want something efficient and predictable.

You can keep things simple like this, or build more complex flows depending on your use case.

See the Rust SDK docs for more examples and full usage.

Java SDK

The Java SDK is useful if you’re working in backend systems where you want strong typing and async control built into how you use Spidra.

Add it with Gradle:

implementation 'io.spidra:spidra-java-sdk:0.1.0'

Or Maven:

<dependency>
  <groupId>io.spidra</groupId>
  <artifactId>spidra-java-sdk</artifactId>
  <version>0.1.0</version>
</dependency>

Set it up:

import io.spidra.sdk.SpidraClient;
import io.spidra.sdk.model.scrape.ScrapeParams;

SpidraClient client = new SpidraClient(System.getenv("SPIDRA_API_KEY"));

Run a scrape:

ScrapeParams params = ScrapeParams.builder()
   .url("https://example.com")
   .prompt("Extract the page title and main heading")
   .build();

client.scrape().run(params)
   .thenAccept(job -> System.out.println(job.getResult().getContent()))
   .join();

The SDK uses a builder pattern for requests and CompletableFuture for async operations, so it fits naturally into Java services where you want control over how jobs are handled. It also supports things like browser actions, batch jobs, and crawling, depending on what you need.

See the Java SDK docs/repo for more examples and full usage.

n8n Integration

The n8n node is useful if you want to use Spidra inside workflows without writing code.

Instead of calling the API from an app or script, you can add Spidra directly into an n8n workflow and use it as part of a larger automation. That could be something like scraping a page, processing the result, and sending it somewhere else, all in one flow.

Install it:

npm install n8n-nodes-spidra

Once it’s available in n8n, you can use it in cases where scraping is just one part of the job. For example, you might want to extract content from a page, pass the result through another step, and then send it to a database, webhook, Slack, or another tool in the same workflow.

(Screenshot: the Spidra node inside an n8n workflow)

That makes it a good fit for teams that are already using n8n for automation and want to plug web data into the systems they already have.

See the n8n integration docs for more examples and full usage.

Wrapping up

Spidra now fits into different stacks without much setup.

If you’re writing code, you can use one of the SDKs. If you’re building workflows, you can plug it into n8n. And if you prefer, you can still work directly with the API.
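For the direct-API route, the request shape mirrors what the SDKs send on your behalf. As a hedged sketch, here is how that request could be assembled with `fetch`; the endpoint path and JSON field names below are assumptions inferred from the SDK examples above, not documented values.

```javascript
// Build the fetch options for a hypothetical POST scrape call.
// The body fields mirror the SDK examples; the endpoint is illustrative.
function buildScrapeRequest(apiKey, url, prompt) {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // same key the SDKs read from SPIDRA_API_KEY
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ url, prompt }),
  };
}

// Usage (endpoint URL is a placeholder, not the real one):
// const res = await fetch(
//   "https://api.example.com/v1/scrape",
//   buildScrapeRequest(
//     process.env.SPIDRA_API_KEY,
//     "https://example.com/pricing",
//     "Extract all pricing plans with name and price"
//   )
// );
```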

We’re still expanding this. SDKs for .NET, Elixir, and Swift are already in progress, and we’re working towards deeper integrations with tools people already use for automation and AI workflows, such as MCP, Zapier, Make, LangChain, and LlamaIndex.

The idea is to make it easy to use Spidra wherever you’re already working, instead of building around it.

You can get started here:


Start scraping for free. Get 300 free credits to explore Spidra, build your first scraper in minutes, and upgrade anytime as you scale.