# What are scrapy Item and ItemLoader objects and how to use them?

 by [Bernardas Alisauskas](https://scrapfly.io/blog/author/bernardas) Apr 20, 2023 1 min read [\#scrapy](https://scrapfly.io/blog/tag/scrapy) 


Scrapy's `Item` and `ItemLoader` classes are a convenient way to store and manage scraped data.

The `Item` class is a dataclass similar to Python's `@dataclass` or `pydantic.BaseModel`, where data fields are defined:

```python
import scrapy

class Person(scrapy.Item):
    name = scrapy.Field()
    last_name = scrapy.Field()
    bio = scrapy.Field()
    age = scrapy.Field()
    weight = scrapy.Field()
    height = scrapy.Field()
```



Whereas `ItemLoader` objects are used to populate the items with data:

```python
import scrapy
from scrapy.loader import ItemLoader

class PersonLoader(ItemLoader):
    default_item_class = Person
    # <fieldname>_out defines the output processor for each field
    name_out = lambda values: values[0]
    last_name_out = lambda values: values[0]
    bio_out = lambda values: ''.join(values).strip()
    age_out = lambda values: int(values[0])
    weight_out = lambda values: int(values[0])
    height_out = lambda values: int(values[0])

class MySpider(scrapy.Spider):
    ...
    def parse(self, response):
        # create a loader and pass the response to it:
        loader = PersonLoader(selector=response)
        # add parsing rules such as XPath selectors:
        loader.add_xpath('name', "//div[contains(@class,'name')]/text()")
        loader.add_xpath('bio', "//div[contains(@class,'bio')]/text()")
        loader.add_xpath('age', "//div[@class='age']/text()")
        loader.add_xpath('weight', "//div[@class='weight']/text()")
        loader.add_xpath('height', "//div[@class='height']/text()")
        # load_item() applies the processors and returns the item:
        yield loader.load_item()
```



Here we defined parsing rules in the `PersonLoader` definition, like:

- taking the first found value for the name.
- converting numeric values to integers.
- joining all values for the bio field.

Then, calling `loader.load_item()` applies these rules to the collected response data and returns the final item.

Using `Item` and `ItemLoader` classes is the standard way to structure scraped data in scrapy and keeps the data processing tidy and understandable.



 

    


