Scrapfly offers a detailed real-time monitoring dashboard that logs every scrape request and its result. The dashboard tracks all scrape requests for the selected Scrapfly project and environment, and it can be filtered and inspected to review overall scraping performance:
Screenshots, debug data, and cache entries belong to their log and inherit the same retention: as soon as the log is deleted, they are deleted as well.
| Plan | Log retention |
|------------|---------------|
| FREE | 1 week |
| DISCOVERY | 1 week |
| PRO | 2 weeks |
| STARTUP | 3 weeks |
| ENTERPRISE | 4 weeks |
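The retention windows above can be sketched as a simple lookup; the plan names come from the table, but the helper itself is illustrative and not part of the Scrapfly API:

```python
from datetime import datetime, timedelta

# Log retention per plan, in weeks (from the table above).
LOG_RETENTION_WEEKS = {
    "FREE": 1,
    "DISCOVERY": 1,
    "PRO": 2,
    "STARTUP": 3,
    "ENTERPRISE": 4,
}

def log_expiry(created_at: datetime, plan: str) -> datetime:
    """Return the date a log (and its screenshots, debug data,
    and cache, which share its retention) is deleted."""
    return created_at + timedelta(weeks=LOG_RETENTION_WEEKS[plan])
```

For example, a log created on January 1st under the PRO plan is deleted two weeks later, on January 15th.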
Filters allow you to sample or search logs from the monitoring section to investigate issues or verify that everything works as expected.
Eight pre-configured time frames are available (past month, past week, past day, past three hours, past hour, past 30 minutes, past 15 minutes, past 5 minutes). You can also define an arbitrary time frame.
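The pre-configured time frames map naturally onto relative durations. As a sketch (the labels mirror the dashboard, but the code and the 30-day approximation of "past month" are assumptions, not Scrapfly's implementation):

```python
from datetime import datetime, timedelta, timezone

# The eight pre-configured time frames as relative durations.
# "past month" is approximated as 30 days here.
TIME_FRAMES = {
    "past month": timedelta(days=30),
    "past week": timedelta(weeks=1),
    "past day": timedelta(days=1),
    "past three hours": timedelta(hours=3),
    "past hour": timedelta(hours=1),
    "past 30 minutes": timedelta(minutes=30),
    "past 15 minutes": timedelta(minutes=15),
    "past 5 minutes": timedelta(minutes=5),
}

def time_range(frame: str) -> tuple[datetime, datetime]:
    """Return the (start, end) interval a pre-configured frame covers."""
    end = datetime.now(timezone.utc)
    return end - TIME_FRAMES[frame], end
```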
You can filter the following values:
- Filter by URL; supports the glob operator.
- Filter by success or failed requests; failures include HTTP status codes >= 500 and network errors.
- Filter by domain (TLD+n), which includes subdomains; supports the glob operator.
- Filter by root domain (TLD+1), which does not include subdomains; supports the glob operator.
- Filter by HTTP method; supported values: GET, PUT, POST, PATCH.
- Filter by status code.
- Filter by the cost spent in API Credits.
- Filter by origin; supported values: API, SCHEDULER.
- Filter by the number of retries.
- Filter by duration.
- Filter by error code.
- Filter for scrapes that resulted in an error.
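To illustrate how a glob-style URL filter behaves, here is a local sketch using Python's `fnmatch`; the matching logic and example URLs are assumptions for illustration, not Scrapfly's implementation:

```python
from fnmatch import fnmatch

def url_matches(url: str, pattern: str) -> bool:
    """Glob-style match: '*' matches any run of characters."""
    return fnmatch(url, pattern)

urls = [
    "https://example.com/products/1",
    "https://example.com/blog/post",
]
# Keep only logs whose URL matches the product-pages pattern.
matched = [u for u in urls if url_matches(u, "https://example.com/products/*")]
```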
For metrics that support multiple notations, simply separate the values.
Chaining Filters & Multi Values
You can chain multiple filters by separating them with a space. A single filter can also take multiple values; in that case, the OR operator is applied between them.
The following operators are supported:

- `>=` greater than or equal to
- `<=` less than or equal to
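A minimal sketch of how such chaining could be evaluated: filters are ANDed together, multiple values within one filter are ORed, and comparison operators apply to numeric fields. The field names and data model here are hypothetical illustrations, not Scrapfly's documented filter keys:

```python
import operator

# Supported comparison operators, plus equality for plain values.
OPS = {">=": operator.ge, "<=": operator.le, "=": operator.eq}

def matches(log: dict, filters: list) -> bool:
    """Each filter is (field, op, values).
    Multiple values in one filter are ORed;
    chained filters are ANDed."""
    for field, op, values in filters:
        if not any(OPS[op](log[field], v) for v in values):
            return False
    return True

# Hypothetical log record and filter chain, for illustration only.
log = {"method": "GET", "duration": 7.2, "status": 200}
filters = [
    ("method", "=", ["GET", "POST"]),  # multiple values: OR
    ("duration", ">=", [5]),           # comparison operator
]
```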