# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)
##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Customize Requests


 

 

 All Scrapfly HTTP requests can be customized with custom headers, methods, cookies and other HTTP parameters. Let's take a look at the available options.

## Method

 The scrape request's method is forwarded to the upstream website as-is. For example, calling Scrapfly through `POST` will forward the request as a `POST` request to the upstream website. 
 Available methods are: `GET`, `POST`, `PUT`, `PATCH`, `HEAD`

- [GET](#get)
- [POST](#post)
- [PUT](#put)
- [PATCH](#patch)
- [HEAD](#head)
 
 `GET` is the most common request type used in web scraping. `GET` requests retrieve data from a server without providing any data in the body of the request.


 

 ```
# gem install httparty

require 'httparty'
require 'json'

# Build query parameters
params = {
  'key' => "__API_KEY__",
  'url' => "https://httpbin.dev/html",
}

url = "https://api.scrapfly.io/scrape"

options = {
  query: params,
  timeout: 160,
  open_timeout: 10,
}

begin
  response = HTTParty.get(url, options)

  # Check for HTTP errors
  unless response.success?
    error_data = response.parsed_response
    error_msg = error_data['message'] || error_data['description'] || 'Request failed'
    raise "HTTP error #{response.code}: #{error_msg}"
  end

  data = response.parsed_response
  puts JSON.pretty_generate(data)

  # Access the scrape result
  puts data['result'] if data['result']

rescue HTTParty::Error => e
  STDERR.puts "Request failed: #{e.message}"
  raise
rescue StandardError => e
  STDERR.puts "Error: #{e.message}"
  raise
end

```

 

 ```
https://api.scrapfly.io/scrape?key=&url=https%3A%2F%2Fhttpbin.dev%2Fhtml
```

 

 

 

 

 `POST` requests are most commonly used to submit forms or documents. This HTTP method usually requires a `body` parameter which carries the posted data. The `content-type` header indicates the type of the posted data; if it is not set explicitly, it defaults to `application/x-www-form-urlencoded`, which stands for **urlencoded data**. Another popular alternative is JSON: to post `JSON` data, set the `content-type` header to `application/json`.


 

 ```
# gem install httparty

require 'httparty'
require 'json'

# Build query parameters
params = {
  'key' => "__API_KEY__",
  'url' => "https://httpbin.dev/post",
}

url = "https://api.scrapfly.io/scrape"

options = {
  query: params,
  timeout: 160,
  open_timeout: 10,
  body: "example=value",
  headers: {
    'Content-Type' => 'application/x-www-form-urlencoded'
  }
}

begin
  response = HTTParty.post(url, options)

  # Check for HTTP errors
  unless response.success?
    error_data = response.parsed_response
    error_msg = error_data['message'] || error_data['description'] || 'Request failed'
    raise "HTTP error #{response.code}: #{error_msg}"
  end

  data = response.parsed_response
  puts JSON.pretty_generate(data)

  # Access the scrape result
  puts data['result'] if data['result']

rescue HTTParty::Error => e
  STDERR.puts "Request failed: #{e.message}"
  raise
rescue StandardError => e
  STDERR.puts "Error: #{e.message}"
  raise
end

```

 

 ```
https://api.scrapfly.io/scrape?key=&url=https%3A%2F%2Fhttpbin.dev%2Fpost
```

 

 

 

 And here's a full example of posting JSON data with the `content-type` header configured:


 

 ```
# gem install httparty

require 'httparty'
require 'json'

# Build query parameters
params = {
  'key' => "__API_KEY__",
  'url' => "https://httpbin.dev/post",
  'headers[content-type]' => "application/json",
}

url = "https://api.scrapfly.io/scrape"

options = {
  query: params,
  timeout: 160,
  open_timeout: 10,
  body: "{\"example\":\"value\"}",
  headers: {
    'Content-Type' => 'application/json'
  }
}

begin
  response = HTTParty.post(url, options)

  # Check for HTTP errors
  unless response.success?
    error_data = response.parsed_response
    error_msg = error_data['message'] || error_data['description'] || 'Request failed'
    raise "HTTP error #{response.code}: #{error_msg}"
  end

  data = response.parsed_response
  puts JSON.pretty_generate(data)

  # Access the scrape result
  puts data['result'] if data['result']

rescue HTTParty::Error => e
  STDERR.puts "Request failed: #{e.message}"
  raise
rescue StandardError => e
  STDERR.puts "Error: #{e.message}"
  raise
end

```

 

 ```
https://api.scrapfly.io/scrape?key=&url=https%3A%2F%2Fhttpbin.dev%2Fpost&headers%5Bcontent-type%5D=application%2Fjson
```

 

 

 

 

 `PUT` requests are used to submit forms and upload user-created content. If the `content-type` header is not set explicitly, it defaults to `application/x-www-form-urlencoded`, as we assume you send **urlencoded data**. To put `JSON` data, specify the `content-type: application/json` header.

 ```
# gem install httparty

require 'httparty'
require 'json'

# Build query parameters
params = {
  'key' => "__API_KEY__",
  'url' => "https://httpbin.dev/put",
  'headers[content-type]' => "application/json",
}

url = "https://api.scrapfly.io/scrape"

options = {
  query: params,
  timeout: 160,
  open_timeout: 10,
  body: "{\"example\":\"value\"}",
  headers: {
    'Content-Type' => 'application/json'
  }
}

begin
  response = HTTParty.put(url, options)

  # Check for HTTP errors
  unless response.success?
    error_data = response.parsed_response
    error_msg = error_data['message'] || error_data['description'] || 'Request failed'
    raise "HTTP error #{response.code}: #{error_msg}"
  end

  data = response.parsed_response
  puts JSON.pretty_generate(data)

  # Access the scrape result
  puts data['result'] if data['result']

rescue HTTParty::Error => e
  STDERR.puts "Request failed: #{e.message}"
  raise
rescue StandardError => e
  STDERR.puts "Error: #{e.message}"
  raise
end

```

 

 `PATCH` requests are used to submit forms and update user-created content. If the `content-type` header is not set explicitly, it defaults to `application/x-www-form-urlencoded`, as we assume you send **urlencoded data**. To patch `JSON` data, specify the `content-type: application/json` header.

 ```
# gem install httparty

require 'httparty'
require 'json'

# Build query parameters
params = {
  'key' => "__API_KEY__",
  'url' => "https://httpbin.dev/patch",
  'headers[content-type]' => "application/json",
}

url = "https://api.scrapfly.io/scrape"

options = {
  query: params,
  timeout: 160,
  open_timeout: 10,
  body: "{\"example\":\"value\"}",
  headers: {
    'Content-Type' => 'application/json'
  }
}

begin
  response = HTTParty.patch(url, options)

  # Check for HTTP errors
  unless response.success?
    error_data = response.parsed_response
    error_msg = error_data['message'] || error_data['description'] || 'Request failed'
    raise "HTTP error #{response.code}: #{error_msg}"
  end

  data = response.parsed_response
  puts JSON.pretty_generate(data)

  # Access the scrape result
  puts data['result'] if data['result']

rescue HTTParty::Error => e
  STDERR.puts "Request failed: #{e.message}"
  raise
rescue StandardError => e
  STDERR.puts "Error: #{e.message}"
  raise
end

```

 

 `HEAD` requests are used to retrieve page metadata like response headers and status codes without fetching the content. When the `HEAD` method is used, the headers of the upstream website are directly forwarded to the API response. This means that Scrapfly's response headers match the headers of the scraped website.

> **`HEAD` requests do not follow redirects.** To follow one, retrieve the `Location` header and make a new request to that URL. 
>  
>  **NOTE:** The URL in the `Location` header is not always absolute; it can also be relative. Handle it accordingly.

 ```
# gem install httparty

require 'httparty'
require 'json'

# Build query parameters
params = {
  'key' => "__API_KEY__",
  'url' => "https://httpbin.dev/head",
}

url = "https://api.scrapfly.io/scrape"

options = {
  query: params,
  timeout: 160,
  open_timeout: 10,
}

begin
  response = HTTParty.head(url, options)

  # Check for HTTP errors (HEAD responses carry no body, so rely on the status code)
  unless response.success?
    raise "HTTP error #{response.code}"
  end

  # HEAD responses have no body; the upstream website's headers
  # are forwarded directly in the API response
  puts response.headers.inspect

rescue HTTParty::Error => e
  STDERR.puts "Request failed: #{e.message}"
  raise
rescue StandardError => e
  STDERR.puts "Error: #{e.message}"
  raise
end

```
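Since `HEAD` responses are not redirected, following a redirect means resolving the `Location` header yourself. A minimal sketch of the resolution step (an illustration, not part of the API): `URI.join` handles both relative and absolute `Location` values.

```ruby
require 'uri'

# Resolve a Location header against the originally requested URL;
# URI.join handles both relative and absolute Location values.
def resolve_location(requested_url, location)
  URI.join(requested_url, location).to_s
end

puts resolve_location('https://httpbin.dev/redirect/1', '/html')
# => https://httpbin.dev/html
puts resolve_location('https://httpbin.dev/redirect/1', 'https://example.com/next')
# => https://example.com/next
```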

 

 

 

## Headers

 Request headers sent by Scrapfly can be customized through the [headers](https://scrapfly.io/docs/scrape-api/getting-started?language=ruby#api_param_headers) parameter. Note that the value of headers must be [urlencoded](https://www.w3schools.com/tags/ref_urlencode.ASP) to prevent any side effects. When in doubt, use Scrapfly's [url encoding web tool](https://scrapfly.io/web-scraping-tools/urlencode "URL encode").


 

 ```
# gem install httparty

require 'httparty'
require 'json'

# Build query parameters
params = {
  'key' => "__API_KEY__",
  'url' => "https://httpbin.dev/headers",
  'headers[foo]' => "bar",
}

url = "https://api.scrapfly.io/scrape"

options = {
  query: params,
  timeout: 160,
  open_timeout: 10,
}

begin
  response = HTTParty.get(url, options)

  # Check for HTTP errors
  unless response.success?
    error_data = response.parsed_response
    error_msg = error_data['message'] || error_data['description'] || 'Request failed'
    raise "HTTP error #{response.code}: #{error_msg}"
  end

  data = response.parsed_response
  puts JSON.pretty_generate(data)

  # Access the scrape result
  puts data['result'] if data['result']

rescue HTTParty::Error => e
  STDERR.puts "Request failed: #{e.message}"
  raise
rescue StandardError => e
  STDERR.puts "Error: #{e.message}"
  raise
end

```

 

 ```
https://api.scrapfly.io/scrape?key=&url=https%3A%2F%2Fhttpbin.dev%2Fheaders&headers%5Bfoo%5D=bar
```

 

 

 

 *You can also pass the same header multiple times, e.g. `headers[X-foo][0]=bar&headers[X-foo][1]=baz`, and the order and structure will be replicated.* By default, **Scrapfly** handles most default and basic headers to replicate a real web browser, so you don't need to set `User-Agent` or other basic headers manually. You can learn more about headers in [this dedicated article](https://scrapfly.io/blog/posts/how-to-avoid-web-scraping-blocking-headers/)
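The repeated-header syntax can be sketched by building the query string directly (a minimal illustration; `httpbin.dev/headers` simply echoes the headers it receives):

```ruby
require 'uri'

# Pass the same header twice via indexed bracket parameters;
# Scrapfly replicates the given order and structure upstream.
params = {
  'key' => '__API_KEY__',
  'url' => 'https://httpbin.dev/headers',
  'headers[X-foo][0]' => 'bar',
  'headers[X-foo][1]' => 'baz',
}

query = URI.encode_www_form(params)
puts "https://api.scrapfly.io/scrape?#{query}"
```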

> When [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection?language=ruby) is enabled, **headers are fine-tuned** for the target you scrape to **maximize your success rate**

Important headers to keep in mind in web scraping context:

  #### Content-Type

 Specifies the media type of the resource being sent in the HTTP message body. It tells the recipient what kind of data to expect and how to interpret it, and is used in both HTTP requests and responses.

 For example, when sending a `POST` request with a `JSON` body, you must specify `application/json`.

 By default, if you send a `POST` request without a `Content-Type` header, `application/x-www-form-urlencoded` will be set.

> If this header is not correctly configured, the target website may respond with a `400` or `406` status or block you

   #### Accept

 Indicates the media types that the client is willing to receive in the response. Helps servers determine the appropriate representation of the requested resource.

 Default mimics a real web browser.

 Example for a JSON API expecting a response in JSON: `Accept: application/json`

> If this header is not correctly configured, the target website may respond with a `400` status or block you

   #### Referer

 Indicates the URL of the web page from which the request originated. Often used by servers to track the source of incoming requests.

 By default, this header is not sent unless specified

 Example: `Referer: https://www.web-scraping.dev/page1.html`

> This header can be mutated by the Anti-Scraping Protection feature

 #### Behavior And Interaction With Other Features

When [ASP is activated](https://scrapfly.io/docs/scrape-api/getting-started?language=ruby#api_param_asp) or a specific [os](https://scrapfly.io/docs/scrape-api/getting-started?language=ruby#api_param_os) is set, the following headers become immutable or limited:

- `user-agent`: If you set a custom Chrome user agent, it will be ignored to keep our actual version
- `sec-ch-ua`: Ignored
- `sec-ch-ua-arch`: Ignored
- `sec-ch-ua-platform`: Ignored
- `sec-ch-ua-platform-version`: Ignored
- `sec-ch-ua-full-version`: Ignored
- `sec-ch-ua-bitness`: Ignored
 
With [ASP activated](https://scrapfly.io/docs/scrape-api/getting-started?language=ruby#api_param_asp), the referer header is handled automatically if none is set. On specific targets you might want to disable this: setting the `referer` header to `none` will prevent it
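As a sketch, disabling the auto-handled referer could look like this (`asp=true` enables Anti Scraping Protection; the query string is built directly for illustration):

```ruby
require 'uri'

# Sketch: with ASP enabled, setting headers[referer]=none
# prevents Scrapfly from auto-filling the referer header.
params = {
  'key' => '__API_KEY__',
  'url' => 'https://httpbin.dev/headers',
  'asp' => 'true',
  'headers[referer]' => 'none',
}

query = URI.encode_www_form(params)
puts "https://api.scrapfly.io/scrape?#{query}"
```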

## Cookies

 Cookies are regular HTTP [headers](https://scrapfly.io/docs/scrape-api/getting-started?language=ruby#api_param_headers) and shouldn't be treated in a special way. While most HTTP clients and libraries have a dedicated API to manage cookies, with the Scrapfly API you manage cookies by simply setting the appropriate headers.

#### Set-Cookie

 This header should never be sent from the client's side. It's a response header, sent when the upstream website wants to register a cookie with the client.

#### Cookie

 This header contains the cookie values held by the client, so when scraping it should be used to send cookie data. The `Cookie` header contains key-value pairs separated by semicolons. For example:

- Single cookie: `Cookie: test=1`
- Multiple cookies: `Cookie: test=1;lang=fr;currency=USD`
 
> You can also pass multiple `Cookie` headers, e.g. `headers[Cookie][0]=foo%3Dbar&headers[Cookie][1]=bar%3Dbaz`, which results in:
>
> ```
> Cookie: foo=bar
> Cookie: bar=baz
> ```
>
> *Note: `%3D` is the [urlencoded](https://scrapfly.io/web-scraping-tools/urlencode) version of `=`. Don't forget to urlencode the header value so it doesn't conflict with the actual URL structure; otherwise the `=` inside a cookie value would be interpreted as a query parameter of the URL.*


 

 ```
# gem install httparty

require 'httparty'
require 'json'

# Build query parameters
params = {
  'key' => "__API_KEY__",
  'url' => "https://httpbin.dev/cookies",
  'headers[cookie]' => "lang=fr;currency=USD;test=1",
}

url = "https://api.scrapfly.io/scrape"

options = {
  query: params,
  timeout: 160,
  open_timeout: 10,
}

begin
  response = HTTParty.get(url, options)

  # Check for HTTP errors
  unless response.success?
    error_data = response.parsed_response
    error_msg = error_data['message'] || error_data['description'] || 'Request failed'
    raise "HTTP error #{response.code}: #{error_msg}"
  end

  data = response.parsed_response
  puts JSON.pretty_generate(data)

  # Access the scrape result
  puts data['result'] if data['result']

rescue HTTParty::Error => e
  STDERR.puts "Request failed: #{e.message}"
  raise
rescue StandardError => e
  STDERR.puts "Error: #{e.message}"
  raise
end

```

 

 ```
https://api.scrapfly.io/scrape?key=&url=https%3A%2F%2Fhttpbin.dev%2Fcookies&headers%5Bcookie%5D=lang%3Dfr%3Bcurrency%3DUSD%3Btest%3D1
```

 

 

 

## Geo Targeting

 Each Scrapfly request can be sent from a specific country. This is called geo-targeting and is managed by Scrapfly's proxy network.

 The desired country can be specified using 2-letter country codes ([ISO 3166-1 alpha-2](https://fr.wikipedia.org/wiki/ISO_3166-1_alpha-2)). Available countries are defined on the [proxy pool](https://scrapfly.io/dashboard/proxy/scraper-api) dashboard. If a country is not available in the **Public Pool**, a personal private pool can be created with the desired countries. Note that restricting countries also restricts the available proxy IP pool.

 To specify geo targeting, the [country](https://scrapfly.io/docs/scrape-api/getting-started?language=ruby#api_param_country) parameter can be used:

- Single country selection: `country=us`
- Multi country selection with random selection: `country=us,ca,mx`
- Multi country selection with weighted random selection (higher weights have higher probability): `country=us:1,ca:5,mx:3`
- Country exclusion: `country=-gb`
 
 For example, to send the request through the United States, `country=us` would be used:


 

 ```
# gem install httparty

require 'httparty'
require 'json'

# Build query parameters
params = {
  'country' => "us",
  'key' => "__API_KEY__",
  'url' => "https://httpbin.dev/anything",
}

url = "https://api.scrapfly.io/scrape"

options = {
  query: params,
  timeout: 160,
  open_timeout: 10,
}

begin
  response = HTTParty.get(url, options)

  # Check for HTTP errors
  unless response.success?
    error_data = response.parsed_response
    error_msg = error_data['message'] || error_data['description'] || 'Request failed'
    raise "HTTP error #{response.code}: #{error_msg}"
  end

  data = response.parsed_response
  puts JSON.pretty_generate(data)

  # Access the scrape result
  puts data['result'] if data['result']

rescue HTTParty::Error => e
  STDERR.puts "Request failed: #{e.message}"
  raise
rescue StandardError => e
  STDERR.puts "Error: #{e.message}"
  raise
end

```

 

 ```
https://api.scrapfly.io/scrape?country=us&key=&url=https%3A%2F%2Fhttpbin.dev%2Fanything
```

 

 

 

> For more on proxies, see the [proxy documentation page](https://scrapfly.io/docs/scrape-api/proxy?language=ruby)

 To spoof the latitude and longitude of the web browser's location services, the [geolocation](https://scrapfly.io/docs/scrape-api/getting-started?language=ruby#api_param_geolocation) parameter can be used, for example: `geolocation=48.856614,2.3522219` (latitude, longitude):


 

 ```
# gem install httparty

require 'httparty'
require 'json'

# Build query parameters
params = {
  'key' => "__API_KEY__",
  'url' => "https://httpbin.dev/anything",
}

url = "https://api.scrapfly.io/scrape"

options = {
  query: params,
  timeout: 160,
  open_timeout: 10,
}

begin
  response = HTTParty.get(url, options)

  # Check for HTTP errors
  unless response.success?
    error_data = response.parsed_response
    error_msg = error_data['message'] || error_data['description'] || 'Request failed'
    raise "HTTP error #{response.code}: #{error_msg}"
  end

  data = response.parsed_response
  puts JSON.pretty_generate(data)

  # Access the scrape result
  puts data['result'] if data['result']

rescue HTTParty::Error => e
  STDERR.puts "Request failed: #{e.message}"
  raise
rescue StandardError => e
  STDERR.puts "Error: #{e.message}"
  raise
end

```

 

 ```
https://api.scrapfly.io/scrape?key=&url=https%3A%2F%2Fhttpbin.dev%2Fanything&geolocation=48.856614%2C2.3522219
```

 

 

 

 The available country options depend on the selected [proxy\_pool](https://scrapfly.io/docs/scrape-api/getting-started?language=ruby#api_param_proxy_pool). See this table for available options for your account:



The same countries are currently available in both the `public_datacenter_pool` and the `public_residential_pool`:
- **AE** - United Arab Emirates
- **AL** - Albania
- **AM** - Armenia
- **AR** - Argentina
- **AT** - Austria
- **AU** - Australia
- **BD** - Bangladesh
- **BE** - Belgium
- **BG** - Bulgaria
- **BO** - Bolivia
- **BR** - Brazil
- **BS** - Bahamas
- **BY** - Belarus
- **CA** - Canada
- **CH** - Switzerland
- **CL** - Chile
- **CN** - China
- **CO** - Colombia
- **CR** - Costa Rica
- **CY** - Cyprus
- **CZ** - Czechia
- **DE** - Germany
- **DK** - Denmark
- **EC** - Ecuador
- **EE** - Estonia
- **EG** - Egypt
- **ES** - Spain
- **FI** - Finland
- **FR** - France
- **GB** - United Kingdom
- **GE** - Georgia
- **GR** - Greece
- **HK** - Hong Kong SAR China
- **HN** - Honduras
- **HR** - Croatia
- **HU** - Hungary
- **ID** - Indonesia
- **IE** - Ireland
- **IL** - Israel
- **IN** - India
- **IQ** - Iraq
- **IS** - Iceland
- **IT** - Italy
- **JM** - Jamaica
- **JO** - Jordan
- **JP** - Japan
- **KH** - Cambodia
- **KR** - South Korea
- **LT** - Lithuania
- **LV** - Latvia
- **MA** - Morocco
- **MD** - Moldova
- **MG** - Madagascar
- **MN** - Mongolia
- **MX** - Mexico
- **MY** - Malaysia
- **NG** - Nigeria
- **NL** - Netherlands
- **NO** - Norway
- **NZ** - New Zealand
- **PA** - Panama
- **PE** - Peru
- **PH** - Philippines
- **PK** - Pakistan
- **PL** - Poland
- **PT** - Portugal
- **RO** - Romania
- **RU** - Russia
- **SA** - Saudi Arabia
- **SE** - Sweden
- **SG** - Singapore
- **SK** - Slovakia
- **TH** - Thailand
- **TM** - Turkmenistan
- **TN** - Tunisia
- **TR** - Türkiye
- **TW** - Taiwan
- **UA** - Ukraine
- **US** - United States
- **UZ** - Uzbekistan
- **VE** - Venezuela
- **VG** - British Virgin Islands
- **VN** - Vietnam
- **ZA** - South Africa
 
 


 

 



## Language

 Content language can be configured through the [lang](https://scrapfly.io/docs/scrape-api/getting-started?language=ruby#api_param_lang) parameter. By default, the language is inferred from the proxy location: if a French proxy is used, the scrape request is configured with French language preferences.

 Behind the scenes, this is done by configuring the `Accept-Language` HTTP header. If the website supports this header and the requested language, the content will be returned in that language.

 Multiple languages can be passed as comma-separated values. A country locale is also supported, in `{lang iso2}-{country iso2}` format. Note that order matters, as the website negotiates the content language based on it.

 For example, `lang=fr,en-US,en` results in the final header `Accept-Language: fr-{proxy country iso2},fr;q=0.9,en-US;q=0.8,en;q=0.7`

> Most users prefer English regardless of the proxy location. For that, use `lang=en-US,en`
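
The header construction described above can be sketched as follows. This is an illustration only, not Scrapfly's actual implementation: the 0.1 q-value step and the proxy-country pairing for the first language are inferred from the single `fr,en-US,en` example above.

```ruby
# Illustration: map a comma-separated lang list and a proxy country code
# to an Accept-Language header with descending q-values.
def accept_language(langs, proxy_country)
  parts = []
  # The first language is paired with the proxy country (e.g. fr -> fr-FR)
  parts << "#{langs.first}-#{proxy_country}"
  q = 1.0
  langs.each do |lang|
    q -= 0.1
    parts << format('%s;q=%.1f', lang, q)
  end
  parts.join(',')
end

puts accept_language(%w[fr en-US en], 'FR')
# => fr-FR,fr;q=0.9,en-US;q=0.8,en;q=0.7
```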


 ```ruby
# gem install httparty

require 'httparty'
require 'json'

# Build query parameters
params = {
  'lang' => "en-US,en",
  'key' => "__API_KEY__",
  'url' => "https://httpbin.dev/anything",
}

url = "https://api.scrapfly.io/scrape"

options = {
  query: params,
  timeout: 160,
  open_timeout: 10,
}

begin
  response = HTTParty.get(url, options)

  # Check for HTTP errors
  unless response.success?
    error_data = response.parsed_response
    error_msg = error_data['message'] || error_data['description'] || 'Request failed'
    raise "HTTP error #{response.code}: #{error_msg}"
  end

  data = response.parsed_response
  puts JSON.pretty_generate(data)

  # Access the scrape result
  puts data['result'] if data['result']

rescue HTTParty::Error => e
  STDERR.puts "Request failed: #{e.message}"
  raise
rescue StandardError => e
  STDERR.puts "Error: #{e.message}"
  raise
end

```

 

The equivalent HTTP call:

 ```
https://api.scrapfly.io/scrape?lang=en-US%2Cen&key=__API_KEY__&url=https%3A%2F%2Fhttpbin.dev%2Fanything
```

 

 

 

## Operating System

> We do not recommend using this feature unless it's absolutely necessary as it can impact scraper blocking rates.

 By default, Scrapfly automatically selects the most suitable operating system for all outgoing requests. To configure the operating system explicitly, use the [os](https://scrapfly.io/docs/scrape-api/getting-started?language=ruby#api_param_os) parameter.

 The supported values are: `win11`, `mac`, `linux`

> Because of potential conflicts, the `os` parameter and `User-Agent` header cannot be set at the same time.
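
Because a rejected request still costs a round trip, the conflict above can be caught client-side. A minimal sketch (the helper name is ours, not part of the Scrapfly API):

```ruby
# Hypothetical pre-flight check: the os parameter and a User-Agent header
# are mutually exclusive, so reject the combination before sending.
def validate_os_params!(params, headers)
  has_os = params.key?('os')
  has_ua = headers.keys.any? { |h| h.to_s.downcase == 'user-agent' }
  raise ArgumentError, 'os and User-Agent cannot be combined' if has_os && has_ua
  true
end

validate_os_params!({ 'os' => 'win11' }, {})  # passes
# validate_os_params!({ 'os' => 'win11' }, { 'User-Agent' => 'X' })  # raises ArgumentError
```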

 For example, to set the operating system to Windows 11, use `os=win11`:


 

 ```ruby
# gem install httparty

require 'httparty'
require 'json'

# Build query parameters
params = {
  'os' => "win11",
  'key' => "__API_KEY__",
  'url' => "https://httpbin.dev/anything",
}

url = "https://api.scrapfly.io/scrape"

options = {
  query: params,
  timeout: 160,
  open_timeout: 10,
}

begin
  response = HTTParty.get(url, options)

  # Check for HTTP errors
  unless response.success?
    error_data = response.parsed_response
    error_msg = error_data['message'] || error_data['description'] || 'Request failed'
    raise "HTTP error #{response.code}: #{error_msg}"
  end

  data = response.parsed_response
  puts JSON.pretty_generate(data)

  # Access the scrape result
  puts data['result'] if data['result']

rescue HTTParty::Error => e
  STDERR.puts "Request failed: #{e.message}"
  raise
rescue StandardError => e
  STDERR.puts "Error: #{e.message}"
  raise
end

```

 

The equivalent HTTP call:

 ```
https://api.scrapfly.io/scrape?os=win11&key=__API_KEY__&url=https%3A%2F%2Fhttpbin.dev%2Fanything
```

 

 

 

## Integration

- [Geo Targeting example with Python SDK](https://scrapfly.io/docs/onboarding#proxy)
- [Request customization example with Python SDK](https://scrapfly.io/docs/onboarding#custom_request)
- [Request customization example with Typescript SDK](https://scrapfly.io/docs/onboarding#configuring_scrape)