# How to Scrape Rightmove Real Estate Listings with Python (2026 Guide)

 by [Bernardas Alisauskas](https://scrapfly.io/blog/author/bernardas) Apr 28, 2026 18 min read [\#api](https://scrapfly.io/blog/tag/api) [\#python](https://scrapfly.io/blog/tag/python) [\#real-estate](https://scrapfly.io/blog/tag/real-estate) [\#scrapeguide](https://scrapfly.io/blog/tag/scrapeguide) 


Rightmove embeds structured property data in a `PAGE_MODEL` JavaScript variable on every listing, and search results live inside `__NEXT_DATA__` JSON on the search page. You can scrape both with Python, without browser rendering, from a single HTTP request each.

In this guide, you'll learn how to extract property details (price, address, photos, agent, EPC) from the `PAGE_MODEL`, resolve location names to Rightmove IDs through the typeahead API, paginate search results from `__NEXT_DATA__`, normalize the raw payloads with JMESPath, and avoid getting blocked. Let's get started.



Full scraper code for this guide is on GitHub: [scrapfly/scrapfly-scrapers/rightmove-scraper](https://github.com/scrapfly/scrapfly-scrapers/tree/main/rightmove-scraper)

## Key Takeaways

- Every Rightmove property page embeds a PAGE\_MODEL JavaScript variable containing 15+ structured fields (price, coordinates, photos, agent details, EPC ratings). One HTTP request and one json.loads() gives you the full listing without browser rendering.
- Search results live in a \_\_NEXT\_DATA\_\_ script tag on the search page. You get up to 25 property cards per page with IDs, URLs, prices, and addresses as parsed JSON, not rendered HTML.
- The raw PAGE\_MODEL contains hundreds of fields including ad-targeting metadata you don't need. A JMESPath field map extracts only what matters into a clean schema in one pass, and the same map works for both sales and rental listings.
- Rightmove's anti-bot protections are lighter than Zillow or Realtor.com. Browser-like headers and 2–5 second request spacing handle most blocking without residential proxies.
- Rightmove caps search at 1,050 results (42 pages) per query. Use postcodes or outcodes instead of broad regions for full coverage of a specific area.
- For production-scale Rightmove scraping without managing UK proxy pools, Cloudflare bypass, and request throttling, [Scrapfly's Real Estate Web Scraping API](https://scrapfly.io/use-case/real-estate-web-scraping) handles all three in a single ScrapeConfig, so the PAGE\_MODEL and JMESPath parsing from this guide drops in unchanged.

Related: [How to Scrape Real Estate Property Data using Python](https://scrapfly.io/blog/posts/how-to-scrape-real-estate-property-data-using-python), an introduction to scraping real estate property data: what it is, why and how to scrape it, plus dozens of popular scraping targets and common challenges.


## What Data Can You Scrape from Rightmove?

Rightmove property pages expose prices, addresses with coordinates, bedroom and bathroom counts, agent details, photos, floorplans, EPC ratings, and nearest stations. All of it embedded as structured JSON in the page source.

The main data categories you'll extract:

- **Property details**: price, price per sqft, bedrooms, bathrooms, property type, tenure, key features, listing history
- **Location data**: display address, postcode, latitude and longitude, country code, nearest airports and stations
- **Media**: high-resolution photos with captions, floorplans, brochure PDFs, EPC certificates
- **Agent and agency info**: branch name, company name, phone number, dealer address
- **Listing metadata**: published status, archived flag, online viewings, transaction type (BUY or RENT)

Both sales and rental listings follow the same `PAGE_MODEL` shape, so one scraper covers both channels. Here's a trimmed sample output after JMESPath normalization:

```json
{
  "id": "149360984",
  "available": true,
  "phone": "01822 667990",
  "bedrooms": 5,
  "bathrooms": 4,
  "type": "BUY",
  "property_type": "Detached",
  "title": "5 bedroom detached house for sale in Latchley, Tamar Valley, PL18",
  "price": "£1,275,000",
  "address": {
    "displayAddress": "Latchley, Tamar Valley",
    "outcode": "PL18",
    "incode": "9AX",
    "ukCountry": "England"
  },
  "latitude": 50.536003,
  "longitude": -4.243573,
  "features": [
    "Church Conversion of Immense Stature",
    "Gated Drive and Four Garages"
  ],
  "photos": [
    {"url": "https://media.rightmove.co.uk/169k/168911/149360984/168911_31451179_IMG_46_0000.jpeg", "caption": "Elevated Aspect"}
  ],
  "agency": {"branch": "Stags", "company": "Stags Tavistock"},
  "nearest_stations": [
    {"name": "Gunnislake Station", "distance": 1.9}
  ]
}
```



This output comes from a single `PAGE_MODEL` extract. The raw JSON contains hundreds more fields you can add to the schema as needed.



## How Does Rightmove Load Property Data?

Rightmove serves data through two distinct mechanisms. Individual property pages embed a `PAGE_MODEL` JavaScript variable with the full listing. Search pages embed a `__NEXT_DATA__` JSON script tag with paginated results plus location metadata. Neither mechanism requires browser rendering. Both ship the data in the initial HTML response from a standard GET request.

### How Does PAGE\_MODEL Work on Property Pages?

Every property page at `rightmove.co.uk/properties/<id>` embeds a `PAGE_MODEL` JavaScript variable inside a `<script>` tag. The variable contains every field Rightmove's own frontend needs: price, address, images, agent info, EPC data, nearest stations, brochures, floorplans, and more, all as structured JSON.



This is classic [hidden web data](https://scrapfly.io/blog/posts/how-to-scrape-hidden-web-data). The payload never appears in the rendered DOM, but it's fully exposed in the source HTML. One HTTP GET request, one XPath query, one `json.loads()`, and you have every field. No browser rendering, no waiting for JavaScript.
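Here's a compressed sketch of exactly those three steps, using synchronous httpx for brevity. The property ID is the sample listing shown earlier and may have been delisted by the time you run this; the full async version with proper headers follows below.

```python
import json

import httpx
from parsel import Selector

# 1. One GET request (a browser-like User-Agent avoids the most basic blocking)
html = httpx.get(
    "https://www.rightmove.co.uk/properties/149360984",
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
    follow_redirects=True,
).text

# 2. One XPath query to grab the script tag that assigns PAGE_MODEL
script = Selector(html).xpath("//script[contains(.,'PAGE_MODEL = ')]/text()").get()

# 3. One JSON decode of everything after the assignment
payload = script.split("PAGE_MODEL = ", 1)[1]
page_model, _ = json.JSONDecoder().raw_decode(payload)
print(page_model["propertyData"]["prices"]["primaryPrice"])  # e.g. "£1,275,000"
```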

### How Does the Rightmove Search API Work?

Rightmove's search page at `/property-for-sale/find.html` embeds results in a `<script id="__NEXT_DATA__">` tag. The JSON at `props.pageProps.searchResults.properties` gives you up to 25 listings per page with IDs, URLs, addresses, prices, and thumbnails.

To search a specific area, you first resolve a location name to a Rightmove ID through `los.rightmove.co.uk/typeahead?query=<name>`. That endpoint returns an array of matches with `type` (`REGION`, `POSTCODE_AREA`, `OUTCODE`, etc.) and `id`. Combining them as `<type>^<id>` gives you the `locationIdentifier` for the search URL.

Rightmove caps search at 42 pages (1,050 properties) per query, so narrow queries give cleaner results than broad ones. Use a postcode or outcode when you need full coverage of a specific area.
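As a quick illustration of how those two pieces fit together, the `<type>^<id>` string plugs straight into the search URL as `locationIdentifier`. The `REGION^61294` value below is the Cornwall example resolved later in this guide:

```python
from urllib.parse import urlencode

# locationIdentifier is built from a typeahead match: f"{match['type']}^{match['id']}"
params = {
    "locationIdentifier": "REGION^61294",  # Cornwall, from the typeahead example below
    "index": 0,                            # result offset; steps of 25 per page
    "channel": "RES_BUY",                  # RES_BUY for sales, RES_LET for rentals
}
print("https://www.rightmove.co.uk/property-for-sale/find.html?" + urlencode(params))
```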

## How to Set Up Your Rightmove Scraper

The scraper needs three Python packages: [httpx](https://pypi.org/project/httpx/) for async HTTP requests, [parsel](https://pypi.org/project/parsel/) for HTML parsing, and [jmespath](https://pypi.org/project/jmespath/) for JSON field extraction.

Install all three:

```shell
$ pip install httpx parsel jmespath
```



All three packages install cleanly on Python 3.9+. The main code blocks in this guide include a Scrapfly SDK version alongside the open-source one. For parsing fundamentals, see our [BeautifulSoup guide](https://scrapfly.io/blog/posts/web-scraping-with-python-beautifulsoup).
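If you want a quick sanity check that the environment is ready before writing any scraping code, all three packages expose a version attribute:

```python
import httpx
import jmespath
import parsel

# Confirm the three scraping dependencies import and print their versions
print(httpx.__version__, parsel.__version__, jmespath.__version__)
```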



## How to Scrape Rightmove Property Pages

To scrape a Rightmove property, send a GET request with browser-like headers, parse the HTML with parsel, find the `<script>` tag holding `PAGE_MODEL`, and extract the JSON.

### How to Extract PAGE\_MODEL JSON from Property HTML

The extraction flow is straightforward: fetch the page, locate the `PAGE_MODEL` script via XPath, decode the JSON object, and pull the `propertyData` key.



**Python**

```python
import asyncio
import json
from typing import List
from httpx import AsyncClient, Response
from parsel import Selector

# 1. Establish HTTP client with browser-like headers
client = AsyncClient(
    headers={
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Accept-Encoding": "gzip, deflate",
        "Accept-Language": "en-GB,en;q=0.9",
    },
    follow_redirects=True,
    http2=True,  # Enable HTTP/2 to reduce block chance
    timeout=30,
)


def find_json_objects(text: str, decoder=json.JSONDecoder()):
    """Yield decoded JSON objects from a blob of text."""
    pos = 0
    while True:
        match = text.find("{", pos)
        if match == -1:
            break
        try:
            result, index = decoder.raw_decode(text[match:])
            yield result
            pos = match + index
        except ValueError:
            pos = match + 1


def extract_property(response: Response) -> dict:
    """Extract propertyData from the PAGE_MODEL JavaScript variable."""
    selector = Selector(response.text)
    data = selector.xpath("//script[contains(.,'PAGE_MODEL = ')]/text()").get()
    if not data:
        print(f"{response.url} is not a property listing page")
        return None
    json_data = next(find_json_objects(data))
    return json_data["propertyData"]


async def scrape_properties(urls: List[str]) -> List[dict]:
    """Scrape multiple Rightmove property pages concurrently."""
    to_scrape = [client.get(url) for url in urls]
    properties = []
    for response in asyncio.as_completed(to_scrape):
        response = await response
        # parse_property (defined in the next section) reduces raw propertyData to a clean schema
        properties.append(parse_property(extract_property(response)))
    return properties
```





**Scrapfly SDK**

```python
import asyncio
import json
from typing import List
from scrapfly import ScrapeApiResponse, ScrapeConfig, ScrapflyClient

# Initialize the Scrapfly client with your API key
scrapfly = ScrapflyClient(key="YOUR SCRAPFLY API KEY")


def find_json_objects(text: str, decoder=json.JSONDecoder()):
    """Yield decoded JSON objects from a blob of text."""
    pos = 0
    while True:
        match = text.find("{", pos)
        if match == -1:
            break
        try:
            result, index = decoder.raw_decode(text[match:])
            yield result
            pos = match + index
        except ValueError:
            pos = match + 1


def extract_property(result: ScrapeApiResponse) -> dict:
    """Extract propertyData from the PAGE_MODEL script tag."""
    data = result.selector.xpath("//script[contains(.,'PAGE_MODEL = ')]/text()").get()
    if not data:
        return None
    return next(find_json_objects(data))["propertyData"]


async def scrape_properties(urls: List[str]) -> List[dict]:
    """Concurrently scrape Rightmove property pages through Scrapfly."""
    # asp=True enables Anti-Scraping Protection, country="GB" routes via UK IPs
    to_scrape = [ScrapeConfig(url=url, asp=True, country="GB") for url in urls]
    properties = []
    async for result in scrapfly.concurrent_scrape(to_scrape):
        properties.append(parse_property(extract_property(result)))
    return properties
```







The `find_json_objects` helper walks the `PAGE_MODEL` string looking for balanced JSON objects. The first match is the full model. `extract_property` pulls out the `propertyData` key so you don't carry around analytics metadata and DFP ad targeting data you don't need.
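To see the helper in isolation, here's a toy example (the string below is illustrative, not real Rightmove markup):

```python
# A script-style assignment followed by other JavaScript; only the first
# balanced JSON object after the opening brace is decoded and yielded.
sample = 'var PAGE_MODEL = {"propertyData": {"id": "12345"}, "metadata": {}}; init();'
print(next(find_json_objects(sample)))
# {'propertyData': {'id': '12345'}, 'metadata': {}}
```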

### How to Parse Property Data with JMESPath

The raw `PAGE_MODEL` contains hundreds of fields, many of which are internal ad-targeting or tracking data. JMESPath lets you define a field map that extracts only what you need into a clean schema.



```python
import jmespath
from typing import TypedDict


class PropertyResult(TypedDict):
    id: str
    available: bool
    phone: str
    bedrooms: int
    bathrooms: int
    type: str
    property_type: str
    description: str
    title: str
    price: str
    address: dict
    latitude: float
    longitude: float
    features: list
    photos: list
    floorplans: list
    agency: dict
    nearest_stations: list


def parse_property(data: dict) -> PropertyResult:
    """Reduce PAGE_MODEL data to a clean schema with JMESPath."""
    parse_map = {
        "id": "id",
        "available": "status.published",
        "phone": "contactInfo.telephoneNumbers.localNumber",
        "bedrooms": "bedrooms",
        "bathrooms": "bathrooms",
        "type": "transactionType",
        "property_type": "propertySubType",
        "description": "text.description",
        "title": "text.pageTitle",
        "price": "prices.primaryPrice",
        "address": "address",
        "latitude": "location.latitude",
        "longitude": "location.longitude",
        "features": "keyFeatures",
        "photos": "images[*].{url: url, caption: caption}",
        "floorplans": "floorplans[*].{url: url, caption: caption}",
        "agency": "customer.{branch: branchName, company: companyName, address: displayAddress}",
        "nearest_stations": "nearestStations[*].{name: name, distance: distance}",
    }
    return {key: jmespath.search(path, data) for key, path in parse_map.items()}


async def run():
    results = await scrape_properties([
        "https://www.rightmove.co.uk/properties/173825402",
        "https://www.rightmove.co.uk/properties/173790977",
        "https://www.rightmove.co.uk/properties/172859219",
    ])
    print(json.dumps(results, indent=2))


if __name__ == "__main__":
    asyncio.run(run())
```



`parse_map` keys become your output schema. JMESPath paths like `images[*].{url: url, caption: caption}` project nested arrays into a flat shape, so you can feed the result directly into a dataframe or database without post-processing. Add or drop keys to match your pipeline.
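As a minimal sketch of that dataframe hand-off (pandas is assumed to be installed and isn't part of the scraper itself; `results` is the list returned by `scrape_properties()` in `run()` above):

```python
import pandas as pd

# Each parsed property is a flat dict, so the list loads straight into a dataframe
df = pd.DataFrame(results)
print(df[["id", "price", "bedrooms", "bathrooms", "property_type"]].head())
```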

Sample output (trimmed to the key fields):

```json
{
  "id": "173825402",
  "available": true,
  "phone": "01752 831055",
  "bedrooms": 3,
  "bathrooms": 2,
  "type": "BUY",
  "property_type": "Detached",
  "title": "3 bedroom detached house for sale in Treledan, Saltash",
  "price": "£450,000",
  "address": {
    "displayAddress": "Treledan, Saltash, Cornwall",
    "outcode": "PL12",
    "incode": "6PR",
    "ukCountry": "England"
  },
  "latitude": 50.419,
  "longitude": -4.197,
  "features": ["Three Double Bedrooms", "En-Suite and Family Bathroom"],
  "photos": [{"url": "https://media.rightmove.co.uk/...jpeg", "caption": "Front"}],
  "agency": {"branch": "Saltash", "company": "Your Move", "address": "Fore Street, Saltash"},
  "nearest_stations": [{"name": "Saltash Station", "distance": 0.5}]
}
```



For more on JMESPath projections, see our [JMESPath tutorial](https://scrapfly.io/blog/posts/parse-json-jmespath-python).

Scraping individual properties is one half of the workflow. The other half is discovering property URLs at scale through Rightmove's search.

## How to Scrape Rightmove Search Results

Rightmove's search uses two steps: resolve a location name to a Rightmove ID through the typeahead API, then paginate the search page and extract results from `__NEXT_DATA__`.

### How to Resolve Location Names to Rightmove IDs

The typeahead API at `los.rightmove.co.uk/typeahead?query=<name>` returns JSON matches with `id`, `type`, and `displayName`. Combine `type` and `id` with a caret (`REGION^61294`) to get the `locationIdentifier` the search URL expects.



```python
import json

# `client` is the AsyncClient with browser-like headers created in the property scraping section


async def find_locations(query: str) -> list[str]:
    """Resolve a place name to Rightmove locationIdentifier strings."""
    url = f"https://los.rightmove.co.uk/typeahead?query={query}&limit=10&exclude="
    # Referer is required; Accept: application/json forces JSON (default is XML)
    response = await client.get(url, headers={
        "Referer": "https://www.rightmove.co.uk/",
        "Accept": "application/json",
    })
    data = json.loads(response.text)
    # Combine type and id into REGION^61294 format
    return [f"{match['type']}^{match['id']}" for match in data["matches"]]
```



Running `await find_locations("cornwall")` returns:

```json
[
  "REGION^61294",
  "REGION^997",
  "REGION^1365",
  "REGION^1057",
  "OUTCODE^1126",
  "POSTCODE^1472263"
]
```



The first match is usually the broadest region. Narrower matches (towns, postcodes, outcodes) follow. Pick the one that matches your use case and pass it to the search step. Two headers matter here: `Referer: https://www.rightmove.co.uk/` unlocks the endpoint, and `Accept: application/json` forces a JSON response (the default is XML).

### How to Paginate Through Search Results

The search page at `/property-for-sale/find.html?locationIdentifier=<id>&index=<offset>` returns 25 properties per page in `__NEXT_DATA__`. Rightmove caps search at 1,050 results per query (42 pages), so narrower location IDs give deeper coverage than broad regions.



**Python**

```python
import asyncio
import json
from urllib.parse import urlencode
from parsel import Selector


def extract_next_data(html: str) -> dict:
    """Pull the __NEXT_DATA__ JSON blob from a Rightmove search page."""
    selector = Selector(html)
    script = selector.xpath('//script[@id="__NEXT_DATA__"]/text()').get()
    return json.loads(script)


async def scrape_search(location_id: str, channel: str = "BUY") -> list[dict]:
    """Scrape all search pages for a given Rightmove locationIdentifier."""
    RESULTS_PER_PAGE = 25
    MAX_RESULTS = 1050  # Rightmove caps search at 42 pages

    def make_url(offset: int) -> str:
        params = {
            "locationIdentifier": location_id,
            "index": offset,
            "channel": "RES_BUY" if channel == "BUY" else "RES_LET",
            "sortType": "6",
            "radius": "0.0",
        }
        return "https://www.rightmove.co.uk/property-for-sale/find.html?" + urlencode(params)

    # Fetch the first page to discover total results
    first = await client.get(make_url(0))
    first_data = extract_next_data(first.text)["props"]["pageProps"]["searchResults"]
    total = int(str(first_data["resultCount"]).replace(",", ""))
    results = list(first_data["properties"])

    # Queue up the remaining pages concurrently
    to_scrape = []
    for offset in range(RESULTS_PER_PAGE, min(total, MAX_RESULTS), RESULTS_PER_PAGE):
        to_scrape.append(client.get(make_url(offset)))

    for response in asyncio.as_completed(to_scrape):
        response = await response
        page_data = extract_next_data(response.text)["props"]["pageProps"]["searchResults"]
        results.extend(page_data["properties"])
    return results


async def run():
    location_ids = await find_locations("cornwall")
    properties = await scrape_search(location_ids[0], channel="BUY")
    print(f"Scraped {len(properties)} properties")
    print(json.dumps(properties[0], indent=2))


if __name__ == "__main__":
    asyncio.run(run())
```





**Scrapfly SDK**

```python
import asyncio
import json
from urllib.parse import urlencode
from scrapfly import ScrapeConfig, ScrapflyClient

scrapfly = ScrapflyClient(key="YOUR SCRAPFLY API KEY")


async def scrape_search(location_id: str, channel: str = "BUY") -> list[dict]:
    """Scrape every page of a Rightmove search through Scrapfly."""
    RESULTS_PER_PAGE = 25
    MAX_RESULTS = 1050  # Rightmove caps search at 42 pages

    def make_url(offset: int) -> str:
        params = {
            "locationIdentifier": location_id,
            "index": offset,
            "channel": "RES_BUY" if channel == "BUY" else "RES_LET",
            "sortType": "6",  # 6 = newest first
            "radius": "0.0",
        }
        return "https://www.rightmove.co.uk/property-for-sale/find.html?" + urlencode(params)

    # Fetch first page to discover total result count
    first = await scrapfly.async_scrape(ScrapeConfig(url=make_url(0), asp=True, country="GB"))
    first_data = json.loads(first.selector.xpath('//script[@id="__NEXT_DATA__"]/text()').get())
    sr = first_data["props"]["pageProps"]["searchResults"]
    total = int(str(sr["resultCount"]).replace(",", ""))
    results = list(sr["properties"])

    # Queue up all remaining pages for concurrent scraping
    to_scrape = [
        ScrapeConfig(url=make_url(offset), asp=True, country="GB")
        for offset in range(RESULTS_PER_PAGE, min(total, MAX_RESULTS), RESULTS_PER_PAGE)
    ]
    async for result in scrapfly.concurrent_scrape(to_scrape):
        data = json.loads(result.selector.xpath('//script[@id="__NEXT_DATA__"]/text()').get())
        results.extend(data["props"]["pageProps"]["searchResults"]["properties"])
    return results
```







The `index` parameter is the offset (0, 25, 50, 75, ...). `channel=RES_BUY` returns sales listings, `RES_LET` returns rentals. `sortType=6` is "newest first", which is useful for monitoring fresh listings.

Sample search result card (trimmed):

```json
{
  "id": 174829697,
  "bedrooms": 2,
  "bathrooms": 1,
  "summary": "Situated in a popular area close to the town centre this two bedroom end of terrace town house...",
  "displayAddress": "Town Farm, Redruth",
  "propertySubType": "End of Terrace",
  "price": {
    "amount": 225000,
    "currencyCode": "GBP",
    "displayPrices": [{"displayPrice": "£225,000"}]
  },
  "propertyUrl": "/properties/174829697#/?channel=RES_BUY",
  "firstVisibleDate": "2026-04-09T10:34:00+01:00"
}
```



Each card includes a `propertyUrl` that you can feed back into `scrape_properties()` for full `PAGE_MODEL` details.
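Here's a minimal sketch tying the two halves together, assuming `find_locations`, `scrape_search`, and `scrape_properties` from the sections above are in scope (the `run_full` name is just for this example; `propertyUrl` is relative, so the domain is prepended and the `#` fragment dropped):

```python
async def run_full():
    location_ids = await find_locations("cornwall")
    cards = await scrape_search(location_ids[0], channel="BUY")
    # propertyUrl looks like "/properties/174829697#/?channel=RES_BUY"
    urls = [
        "https://www.rightmove.co.uk" + card["propertyUrl"].split("#")[0]
        for card in cards[:10]  # limit to 10 listings for a quick demo run
    ]
    details = await scrape_properties(urls)
    print(json.dumps(details[0], indent=2))


if __name__ == "__main__":
    asyncio.run(run_full())
```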

Related: [How to Scrape Hidden APIs](https://scrapfly.io/blog/posts/how-to-scrape-hidden-apis), a tutorial on scraping the hidden APIs that are becoming more and more common in modern dynamic websites.

With the scraping flow working end-to-end, the next concern is keeping it running reliably without getting blocked.

## How to Avoid Getting Blocked When Scraping Rightmove

Rightmove uses Cloudflare and basic session validation, but its protections are lighter than platforms like Zillow or Realtor.com. Browser-like headers, request spacing, and IP rotation handle most blocking scenarios.

The protections you'll hit in practice:

- **Rate limiting**: rapid requests from one IP get throttled or return empty payloads
- **Session validation**: some endpoints expect warm cookies or a Rightmove `Referer` header
- **Cloudflare JS challenges**: rarely, on suspicious traffic patterns
- **Geo preference**: listings data flows more reliably from UK IPs

Practical mitigation:

- Use realistic browser headers (User-Agent, Accept, Accept-Language: en-GB), already set in the `AsyncClient` above
- Space requests 2 to 5 seconds apart when scraping at scale
- Rotate IPs for concurrent scraping. Datacenter proxies work fine. Residential isn't strictly required for Rightmove
- Turn on HTTP/2 (`http2=True` in httpx) so your TLS fingerprint matches real browsers more closely

For a few hundred requests with the setup above, you typically don't need anything more. Past a few thousand requests, rotate IPs and add jitter between requests.
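A minimal sketch of that spacing-plus-jitter approach, reusing the `client`, `extract_property`, and `parse_property` defined earlier (the sequential function name is just for this example):

```python
import asyncio
import random


async def scrape_properties_slowly(urls: list[str]) -> list[dict]:
    """Sequential variant of scrape_properties with 2-5 second jittered spacing."""
    properties = []
    for url in urls:
        response = await client.get(url)
        properties.append(parse_property(extract_property(response)))
        await asyncio.sleep(random.uniform(2, 5))  # spacing recommended above
    return properties
```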

A common failure mode is hitting `los.rightmove.co.uk/typeahead` without a `Referer` header (unauthenticated requests get a generic "not found" page) or without `Accept: application/json` (the endpoint defaults to XML when the browser-style `Accept: text/html` header is set). Set both on every typeahead call.

Once you have anti-blocking sorted, the next step for most teams is scaling concurrent scraping without managing their own proxy pool.

## Scaling Rightmove Scraping with Scrapfly



Scrapfly provides web scraping, screenshot, and extraction APIs for data collection at scale. For Rightmove specifically, Scrapfly replaces the HTTP layer in the code above without any change to your parsing logic.

Key features for Rightmove scraping:

- [Anti-Scraping Protection (ASP)](https://scrapfly.io/docs/scrape-api/anti-scraping-protection) bypasses Cloudflare challenges and IP blocks with `asp=True`
- [Residential and datacenter proxies](https://scrapfly.io/docs/scrape-api/proxies) in the UK through `country="GB"`
- [Sticky sessions](https://scrapfly.io/docs/scrape-api/session) keep cookies consistent across paginated searches
- [Python SDK](https://scrapfly.io/docs/sdk/python) with `concurrent_scrape()` for async batch scraping

The Scrapfly version shown alongside the main code blocks above is the whole swap: use `scrapfly.async_scrape(ScrapeConfig(...))` instead of `client.get()`. The parsing layer (parsel, JMESPath) stays identical.


## FAQ

**Does Rightmove have a public API?**

Rightmove doesn't offer a public API for property data. Rightmove restricts the official API at `api-docs.rightmove.co.uk` to registered estate agents for listing management (creating, updating, and deleting their own listings). Property data for third parties is only accessible through web scraping with the `PAGE_MODEL` and `__NEXT_DATA__` techniques shown above.







**Is there a paid API for Rightmove listing data?**

Rightmove doesn't sell API access to individual listing data. Rightmove's Data Services division offers aggregated market analytics to enterprise customers through custom contracts, but not raw listing feeds. Scraping is the practical path for developers and analysts.







**How do I deal with pagination and infinite scrolling on Rightmove's search pages?**

Rightmove's search pages are paginated rather than infinitely scrolled: increment the `index` offset in steps of 25, as shown in the pagination code above, until you reach the 1,050-result cap. For sites that do use infinite scrolling, the usual fallback is a headless browser that scrolls the page to trigger JavaScript loading, then extracts the newly loaded data.







**Why does Rightmove return empty JSON data when scraping property pages?**

Rightmove uses anti-bot measures that detect automated requests and may return empty or obfuscated data to suspected scrapers. Use rotating proxies, realistic browser headers, and, if blocks persist, headless browsers to bypass these protections.







**What are other UK real estate websites I can scrape?**

Besides Rightmove, [Zoopla](https://scrapfly.io/blog/posts/how-to-scrape-zoopla) is another major UK real estate platform that can be scraped for property data. Both sites cover the UK market comprehensively and can be used together for complete market coverage.







**Can you scrape Rightmove without Python?**

Yes. No-code tools like browser-extension scrapers and cloud actor platforms offer pre-built Rightmove templates that extract property data through visual interfaces. Pre-built scrapers are faster to start with but less flexible than Python when you need custom data schemas or integration into existing pipelines.









## Summary

Rightmove's `PAGE_MODEL` on property pages and `__NEXT_DATA__` on search pages give you structured JSON without browser rendering. With httpx, parsel, and JMESPath, one code path covers property details, search results, and location resolution.

The same patterns apply to tracking new listings, monitoring price changes, and building UK market datasets. For broader real estate scraping, see our guides on [Zillow](https://scrapfly.io/blog/posts/how-to-scrape-zillow), [Realtor.com](https://scrapfly.io/blog/posts/how-to-scrape-realtorcom), and [Redfin](https://scrapfly.io/blog/posts/how-to-scrape-redfin). For production-scale Rightmove scraping without managing proxy pools and anti-bot infrastructure, [Scrapfly](https://scrapfly.io) handles it with one API call.



**Legal Disclaimer and Precautions**

This tutorial covers popular web scraping techniques for education. Interacting with public servers requires diligence and respect:

- Do not scrape at rates that could damage the website.
- Do not scrape data that's not available publicly.
- Do not store PII of EU citizens protected by GDPR.
- Do not repurpose *entire* public datasets which can be illegal in some countries.

Scrapfly does not offer legal advice but these are good general rules to follow. For more you should consult a lawyer.

 






  



   


