LlamaIndex Integration
Power up your LLM with web scraping
Scrapfly officially integrates with the LlamaIndex framework for LLM tool development in Python, making RAG accessible to anyone:
- Scrape any page using the Web Scraping API and all of its features, like cloud web browsers and blocking bypass
- Extend your LlamaIndex tools with web-scraped documents for RAG using the Scrapfly document reader
- Auto-convert scraped data to Markdown, JSON or other data types for easy ingestion
Get Started with LlamaIndex Web Automation
Create a free Scrapfly Account
Install Python Packages
See Some Usage Examples!
What can the LlamaIndex integration do?
import os

from llama_index.core import VectorStoreIndex
from llama_index.llms.openai import OpenAI
from llama_index.readers.web import ScrapflyReader

scrapfly_reader = ScrapflyReader(api_key="YOUR SCRAPFLY KEY")

# 1. scrape web pages as markdown
documents = scrapfly_reader.load_data(
    urls=["https://web-scraping.dev/product/1"],
    scrape_config={"render_js": True},  # note: you can configure scrape options here
    scrape_format="markdown",
)

# 2. Create a document index for RAG:
index = VectorStoreIndex.from_documents(documents)

# 3. Prompt using any LLM like OpenAI
os.environ["OPENAI_API_KEY"] = "YOUR OPENAI API KEY"
query_engine = index.as_query_engine(llm=OpenAI(model="gpt-3.5-turbo-0125"))
prompt_template = "find these product fields: {fields}"
print(query_engine.query(prompt_template.format(fields=["price", "title"])))

Which outputs the extracted fields:
{
"price": "$9.99 from $12.99",
"title": "Box of Chocolate Candy"
}
ScrapflyReader extends LlamaIndex with the ability to scrape any page and enrich your LLM operations with RAG functionality:
- Bypass scraper blocking to collect web page datasets
- Use JavaScript rendering to scrape all data available on the page
- Automatically convert results to Markdown or JSON for better LLM understanding
The Scrapfly integration handles all of the document retrieval challenges in your LLM applications, so you can focus on delivering real AI products.
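For instance, here is a minimal sketch of combining ScrapflyReader options for a harder-to-scrape page. The render_js option and scrape_format argument come from the example above; the asp and country keys are assumptions, passed through scrape_config as Web Scraping API scrape options:

from llama_index.readers.web import ScrapflyReader

scrapfly_reader = ScrapflyReader(api_key="YOUR SCRAPFLY KEY")
documents = scrapfly_reader.load_data(
    urls=["https://web-scraping.dev/reviews"],  # illustrative URL
    scrape_format="markdown",  # convert the page to Markdown for LLM ingestion
    scrape_config={
        "render_js": True,  # render the page in a cloud web browser
        "asp": True,        # assumed: enable anti-scraping protection bypass
        "country": "us",    # assumed: proxy country selection
    },
)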
Need more functionality?
Scrapfly is also accessible through Python and TypeScript SDKs, so you can create your own scripts and integrations in Python or any JavaScript runtime like Node.js, Deno or Bun!
The SDKs include all Scrapfly API features as well as many useful utilities and shortcuts, making for a powerful development experience.
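As a rough sketch, a standalone scrape with the Python SDK might look like the following (assuming the scrapfly-sdk package; the asp and render_js parameters mirror the Web Scraping API options mentioned above):

from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR SCRAPFLY KEY")
result = client.scrape(ScrapeConfig(
    url="https://web-scraping.dev/product/1",
    render_js=True,  # use a cloud web browser to render the page
    asp=True,        # enable anti-scraping protection bypass
))
print(result.content)  # the scraped page HTML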
Transform Your Industry with Web Data
Explore web data solutions for your industry; we've got you covered!
AI Training
Crawl the latest images, videos and user generated content for AI training.
Compliance
Scrape online presence to validate compliance and security.
eCommerce
Scrape products, reviews and more to enhance your eCommerce and brand awareness.
Financial Services
Scrape the latest stock, shipping and financial data to enhance your finance datasets.
Fraud Detection
Scrape products and listings to detect fraud and counterfeit activity.
Jobs Data
Scrape the latest job listings, salaries and more to enhance your job search.
Lead Generation
Scrape online profiles and contact details to enhance your lead generation.
Logistics
Scrape logistics data like shipping, tracking, container prices to enhance your deliveries.
Explore More Use Cases
Frequently Asked Questions
How can I web scrape using LlamaIndex?
LlamaIndex includes objects called Readers for reading external data sources. ScrapflyReader is one such Reader: it can scrape any web page and return the results in rendered HTML, JSON or Markdown format for building the vector indexes used in RAG applications.
How to LLM prompt websites with LlamaIndex?
LlamaIndex can scrape web pages and build a vector index of the scraped content, which can be used to extend any LLM model with real-world data. This is known as RAG; use the ScrapflyReader to generate this index from given URLs as shown in the example above.
Is it legal to web scrape using LlamaIndex?
Yes, generally web scraping publicly visible data is legal in most places around the world. However, extra care should be taken when scraping PII (personally identifiable information) and copyrighted material, which may be difficult to store legally in some countries due to laws like GDPR. For more, see our in-depth web scraping laws article.
What is a Web Scraping API?
A Web Scraping API is a service that abstracts away the complexities and challenges of web scraping and data extraction. This allows developers to focus on creating software rather than dealing with issues like scraper blocking and other data access challenges.
What is an Extraction API?
An Extraction API is a service that abstracts away the complexities and challenges of data extraction and parsing. It does this through AI auto-extract and LLM prompt features, as well as manual schema-based instructions for precise control.
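As a rough, non-authoritative sketch (assuming the scrapfly-sdk package; the ExtractionConfig fields and the extraction_result attribute shown here are assumptions and may differ from the current SDK), an LLM-prompt extraction could look like:

from scrapfly import ScrapflyClient, ExtractionConfig

client = ScrapflyClient(key="YOUR SCRAPFLY KEY")
html = "<html>...scraped product page...</html>"  # e.g. from a previous scrape
result = client.extract(ExtractionConfig(
    body=html,
    content_type="text/html",
    extraction_prompt="extract the product title and price as JSON",  # assumed parameter name
))
print(result.extraction_result)  # assumed attribute holding the extracted data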
What is a Screenshot API?
A Screenshot API is a service that abstracts away the complexities and challenges of web browser screenshot capture. It allows you to capture a screenshot of any web page while handling challenges like blocking ads and pop-ups, bypassing browser blocks, and returning a screenshot of any page area in any format you need.
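A minimal sketch with the Python SDK (assuming a ScreenshotConfig class and a client.screenshot() method; these names are assumptions and may differ from the current SDK):

from scrapfly import ScrapflyClient, ScreenshotConfig

client = ScrapflyClient(key="YOUR SCRAPFLY KEY")
result = client.screenshot(ScreenshotConfig(
    url="https://web-scraping.dev/product/1",
))
with open("product-page.png", "wb") as f:
    f.write(result.image)  # assumed attribute containing the screenshot bytes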