POST https://api.scriptbase.app/api/v1/scrape/crawl
curl -X POST "https://api.scriptbase.app/api/v1/scrape/crawl" \
  -H "X-API-Key: sk_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "limit": 10}'
{
  "success": true,
  "data": {
    "jobId": "job_abc123xyz",
    "status": "queued",
    "url": "https://example.com",
    "limit": 10,
    "estimatedCredits": 60
  },
  "meta": {
    "creditsUsed": 10,
    "creditsRemaining": 990,
    "rateLimit": {
      "limit": 60,
      "remaining": 59,
      "resetAt": 1704326400
    }
  }
}

Overview

Crawl multiple pages of a website and convert them to markdown. This operation runs asynchronously and returns a job ID for status polling.
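Because the crawl is asynchronous, clients typically submit the job and then poll until it reaches a terminal state. Below is a minimal polling sketch; the status fetch is injected as a callable so the loop itself stays independent of any HTTP client. The status values beyond "queued" ("processing", "completed", "failed") are assumptions here; consult the Get Job Status endpoint for the exact names.

```python
import time

def poll_until_done(fetch_status, interval_s=2.0, max_attempts=30):
    """Poll a zero-argument callable that returns a job dict
    (e.g. {"status": "queued", ...}) until the job finishes.

    Terminal status names ("completed", "failed") are assumed;
    verify them against the Get Job Status documentation.
    """
    for _ in range(max_attempts):
        job = fetch_status()
        if job["status"] in ("completed", "failed"):
            return job
        time.sleep(interval_s)
    raise TimeoutError("job did not finish within the polling budget")
```

In practice `fetch_status` would wrap a GET to the job-status endpoint using the returned jobId.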

Authentication

X-API-Key (string, required): Your API key

Request Body

url (string, required): Website URL to start crawling from
limit (number, default: 100): Maximum number of pages to crawl (1-1000)
excludePatterns (string[]): URL patterns to exclude from crawling (regex patterns)
includeTables (boolean, default: true): Include HTML tables in the markdown output
lang (string): Language hint for better parsing (ISO 639-1)
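Since excludePatterns takes regex patterns, a discovered URL is presumably skipped when any pattern matches it. A small client-side sketch of that matching logic (the function name is this example's own, not part of the API):

```python
import re

def should_crawl(url, exclude_patterns):
    """Return False if the URL matches any exclude pattern
    (regex search, mirroring the documented excludePatterns filter)."""
    return not any(re.search(p, url) for p in exclude_patterns)
```

For example, passing ["/admin", r"\.pdf$"] would skip admin pages and PDF links while leaving ordinary content URLs crawlable.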

Credits

  • Cost: 10 credits upfront + 5 credits per page scraped
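The worst-case cost of a crawl follows directly from that pricing: 10 upfront plus 5 per page, assuming every page up to the limit is actually scraped. A quick helper for budgeting:

```python
def estimate_crawl_credits(pages, upfront=10, per_page=5):
    """Worst-case credit cost: 10 credits upfront + 5 per page scraped."""
    return upfront + per_page * pages
```

With limit 10 this gives 60 credits, matching the estimatedCredits field in the example response.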

Response

Returns a job ID for polling status (HTTP 202 Accepted).
success (boolean): Whether the request was successful
data (object): Job details (jobId, status, url, limit, estimatedCredits)
meta (object): Credit usage and rate-limit information
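A client usually only needs two things from this response: the jobId for polling and the remaining credit balance. Parsing the example body above:

```python
import json

# The sample response body documented for this endpoint.
response_body = """{
  "success": true,
  "data": {"jobId": "job_abc123xyz", "status": "queued",
           "url": "https://example.com", "limit": 10, "estimatedCredits": 60},
  "meta": {"creditsUsed": 10, "creditsRemaining": 990,
           "rateLimit": {"limit": 60, "remaining": 59, "resetAt": 1704326400}}
}"""

payload = json.loads(response_body)
assert payload["success"]
job_id = payload["data"]["jobId"]                 # handle for status polling
remaining = payload["meta"]["creditsRemaining"]   # budget check before next crawl
```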

Error Codes

Status  Code                 Description
400     INVALID_INPUT        Invalid URL or parameters
401     INVALID_API_KEY      API key is missing or invalid
429     QUOTA_EXCEEDED       Insufficient quota for crawl
429     RATE_LIMIT_EXCEEDED  Too many requests
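Note that the two 429 responses call for different client behavior: a rate-limit hit is transient and worth retrying after the reset, while an exhausted quota is not. One way to encode that distinction (the action names are this sketch's own convention, not part of the API):

```python
def classify_error(status, code):
    """Map documented error responses to a suggested client action."""
    if status == 400:
        return "fix_request"        # INVALID_INPUT: correct the URL/params
    if status == 401:
        return "check_api_key"      # INVALID_API_KEY: missing or wrong key
    if status == 429 and code == "RATE_LIMIT_EXCEEDED":
        return "retry_after_reset"  # transient; back off until rateLimit.resetAt
    if status == 429 and code == "QUOTA_EXCEEDED":
        return "top_up_credits"     # retrying will not help without more quota
    return "unexpected"
```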
  • Crawls are processed asynchronously. Poll Get Job Status for progress.
  • Credits are charged upfront (10) plus per page crawled (5 each).
  • Use excludePatterns to skip irrelevant sections like admin pages.
  • Maximum limit is 1000 pages per crawl.