# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification](https://scrapfly.io/docs/scrape-api/specification)
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification](https://scrapfly.io/docs/crawler-api/specification)
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification](https://scrapfly.io/docs/screenshot-api/specification)
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification](https://scrapfly.io/docs/extraction-api/specification)
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)
- [Vibium](https://scrapfly.io/docs/cloud-browser-api/vibium)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Go SDK

The Go SDK is the easiest way to access the Scrapfly API in **Go (Golang)**.

It provides a client that streamlines the scraping process by:

- Handling common errors
- Automatically encoding and decoding sensitive API parameters
- Handling and simplifying concurrency
- Providing an HTML selector engine via goquery

> For more on using the Go SDK with Scrapfly, select the "Go SDK" option in the Scrapfly docs top bar.

### Step by Step Introduction

For a hands-on introduction and example projects, see our Scrapfly SDK introduction page!

[Discover Now](https://scrapfly.io/docs/onboarding/golang)

## Installation

Install the Go SDK using `go get`:

```shell
go get github.com/scrapfly/go-scrapfly
```

## Quick Use

 Here's a quick preview of what the Go SDK can do:

```go
package main

import (
	"fmt"
	"log"

	"github.com/scrapfly/go-scrapfly"
)

func main() {
	key := "{{ YOUR_API_KEY }}"

	client, err := scrapfly.New(key)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}

	result, err := client.Scrape(&scrapfly.ScrapeConfig{
		URL:       "https://web-scraping.dev/product/1",
		ASP:       true, // enable scraper blocking bypass
		Country:   "US", // set proxy country
		RenderJS:  true, // enable headless browser
		ProxyPool: scrapfly.PublicResidentialPool,
	})
	if err != nil {
		log.Fatalf("scrape failed: %v", err)
	}

	// 1) access scraped HTML content
	fmt.Println(result.Result.Content)
	// 2) or parse it with CSS selectors via goquery
	selector, _ := result.Selector()
	fmt.Println(selector.Find("h3").First().Text())
}
```

 In short, we first create a `scrapfly.Client` with our Scrapfly key. Then, we use `client.Scrape()` with a `ScrapeConfig` to issue our scraping commands.

 The returned `ScrapeResult` contains result data (like page HTML), request metadata and a convenience HTML selector via `.Selector()` for further parsing.

## Configuring Scrape

The SDK supports all features of the Scrapfly API, which can be configured through the `ScrapeConfig` struct:

> For scraping websites protected against web scraping **make sure to enable [Anti Scraping Protection bypass ](https://scrapfly.io/docs/onboarding#asp)** using `ASP: true`.

```go
result, err := client.Scrape(&scrapfly.ScrapeConfig{
	URL:    "https://web-scraping.dev/product/1",
	// Request details
	Method: "GET", // GET, POST, PUT, PATCH
	Headers: map[string]string{
		"X-Csrf-Token": "1234",
	},

	// enable scraper blocking bypass (recommended)
	ASP:     true,
	Country: "US,CA,FR", // set proxy countries

	// enable cache (recommended when developing)
	Cache:     true,
	CacheTTL:  3600, // expire cache in 1 hour (default 24h)
	Debug:     true, // enable debug info in dashboard

	// enable javascript rendering
	RenderJS:        true,
	WaitForSelector: ".review",
	RenderingWait:   5000, // 5 seconds
	JS:              "return document.title",
	AutoScroll:      true,
})
if err != nil { /* handle error */ }
```

For all available options, see the [API specification](https://scrapfly.io/docs/scrape-api/getting-started#spec), which the SDK matches where applicable.

## Handling Result

 The `ScrapeResult` object contains all data returned by Scrapfly API such as response data, API usage information, scrape metadata and more:

```go
apiResult, _ := client.Scrape(&scrapfly.ScrapeConfig{URL: "https://web-scraping.dev/product/1"})
// get response body (HTML) and status code:
_ = apiResult.Result.Content
_ = apiResult.Result.StatusCode
// response headers:
_ = apiResult.Result.ResponseHeaders
// log url for accessing this scrape in Scrapfly dashboard:
_ = apiResult.Result.LogURL

// if RenderJS is used then browser context is available as well
// get data from javascript execution:
_ = apiResult.Result.BrowserData.JSEvaluationResult
// javascript scenario results:
_ = apiResult.Result.BrowserData.JSScenario
```

## Concurrent Scraping

 Use `client.ConcurrentScrape()` to scrape concurrently at your plan's concurrency limit or a provided limit:

```go
configs := []*scrapfly.ScrapeConfig{
	{URL: "https://httpbin.dev/status/200"},
	{URL: "https://httpbin.dev/status/403"},
	{URL: "https://httpbin.dev/status/200"},
	{URL: "https://httpbin.dev/status/403"},
}
results := 0
errors := 0
for res := range client.ConcurrentScrape(configs, 0) { // 0 uses account concurrency
	if res.Error != nil {
		errors++
		continue
	}
	results++
}
fmt.Printf("got %d results and %d errors\n", results, errors)
```

## Getting Account Details

 To access Scrapfly account information use `client.Account()`:

```go
account, err := client.Account()
if err != nil { /* handle error */ }
fmt.Println(account.Subscription.PlanName)
```

## Examples

### Custom Headers

Provide additional headers using `Headers` in `ScrapeConfig`. Note that with `ASP: true`, Scrapfly can add additional headers automatically to prevent scraper blocking.

```go
res, err := client.Scrape(&scrapfly.ScrapeConfig{
	URL: "https://httpbin.dev/headers",
	Headers: map[string]string{"X-My-Header": "foo"},
})
if err != nil { /* handle error */ }
fmt.Println(res.Result.Content)
```

### Post Form

To post form data, set `Method: "POST"` and provide `Data`. By default, the body is encoded as `application/x-www-form-urlencoded`.

```go
res, err = client.Scrape(&scrapfly.ScrapeConfig{
	URL:    "https://httpbin.dev/post",
	Method: "POST",
	Data:   map[string]interface{}{"foo": "bar"},
})
if err != nil { /* handle error */ }
fmt.Println(res.Result.Content)
```

### Post JSON

 To post JSON data, set `Headers["content-type"] = "application/json"` and provide `Data`.

```go
res, err = client.Scrape(&scrapfly.ScrapeConfig{
	URL:    "https://httpbin.dev/post",
	Method: "POST",
	Headers: map[string]string{"content-type": "application/json"},
	Data:    map[string]interface{}{"foo": "bar"},
})
if err != nil { /* handle error */ }
fmt.Println(res.Result.Content)
```

### Javascript Rendering

To render pages with a headless browser using the [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering#spec) feature, set `RenderJS: true` in `ScrapeConfig`:

```go
res, err = client.Scrape(&scrapfly.ScrapeConfig{
	URL:             "https://web-scraping.dev/product/1",
	RenderJS:        true,
	WaitForSelector: ".review", // wait for element to appear
	RenderingWait:   5000,      // or wait for a set amount of time
})
if err != nil { /* handle error */ }
fmt.Println(res.Result.Content)
```

### Javascript Scenario

 To execute [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario) use `JSScenario` in `ScrapeConfig` and enable `RenderJS`:

```go
import (
	"fmt"
	"log"

	"github.com/scrapfly/go-scrapfly"
	"github.com/scrapfly/go-scrapfly/js_scenario"
)
// [...]
scenario, err := js_scenario.New().
	WaitForSelector(".review").
	Execute("return navigator.userAgent", js_scenario.WithExecuteTimeout(1000)).
	Click("#load-more-reviews").
	WaitForNavigation().
	Execute("return [...document.querySelectorAll('.review p')].map(p=>p.outerText)", js_scenario.WithExecuteTimeout(1000)).
	Build()
if err != nil {
	log.Fatal(err)
}
res, err := client.Scrape(&scrapfly.ScrapeConfig{
	URL:        "https://web-scraping.dev/product/1",
	Debug:      true,
	RenderJS:   true,
	JSScenario: scenario,
})
if err != nil {
	log.Fatal(err)
}
fmt.Println(res.Result.BrowserData.JSScenario)
```

### Capturing Screenshots

To capture screenshots, set `RenderJS: true` and provide `Screenshots` in `ScrapeConfig`:

```go
res, err = client.Scrape(&scrapfly.ScrapeConfig{
	URL:       "https://web-scraping.dev/product/1",
	RenderJS:  true, // enable headless browsers for screenshots
	WaitForSelector: ".review",
	Screenshots: map[string]string{
		"everything": "fullpage",
		"reviews":    "#reviews",
	},
})
if err != nil { /* handle error */ }
for name, sc := range res.Result.Screenshots {
	fmt.Println(name, sc.URL)
}

// To save a screenshot, download it from the result URL with your API key appended.
// Sketch (add "io", "net/http", "os" to your imports; error handling elided):
/*
for name, sc := range res.Result.Screenshots {
	url := sc.URL + "?key={{ YOUR_API_KEY }}"
	resp, err := http.Get(url)
	if err != nil {
		continue
	}
	data, _ := io.ReadAll(resp.Body)
	resp.Body.Close() // close inside the loop; defer would hold every body open until return
	os.WriteFile(fmt.Sprintf("example-screenshot-%s.%s", name, sc.Extension), data, 0644)
}
*/
```

### Scraping Binary Data

 Binary data is returned **base64 encoded**. Decode it with `encoding/base64`:

```go
import (
	"encoding/base64"
	"os"
)
res, err = client.Scrape(&scrapfly.ScrapeConfig{
	URL: "https://web-scraping.dev/assets/products/orange-chocolate-box-small-1.png",
})
if err != nil { /* handle error */ }
data, _ := base64.StdEncoding.DecodeString(res.Result.Content)
os.WriteFile("image.png", data, 0644)
```

### Full Documentation

 For full documentation of the Go SDK, see the [Go SDK documentation](https://pkg.go.dev/github.com/scrapfly/go-scrapfly).