By Larry Norris
SEO Expert
Published: 10/22/2025 • Technical SEO
Lower Crawl Speed: Reduce Screaming Frog’s max threads (e.g., 1–5) to limit concurrent requests and avoid triggering server rate limits.
Add Delays: Implement a delay (1–5 seconds) between requests to mimic human browsing and reduce server strain.
Change User Agent: Experiment with user agents like Googlebot or Chrome, or use a custom one, to bypass bot-specific rate limits.
Use IP Whitelisting or VPN: Request IP whitelisting for sites you control, or use a VPN/proxy to change your IP and circumvent rate limits.
Crawl Off-Peak: Schedule crawls during low-traffic periods (e.g., 2 AM–6 AM) to minimize server load and reduce 429 error risks.
When using Screaming Frog for SEO audits or website crawling, encountering a 429 Too Many Requests error can be frustrating. This error indicates that the server is rate-limiting your requests, effectively blocking your crawl due to excessive activity. In this technical blog post, we'll dive deep into what a 429 error is, why it occurs, how to prevent it, and actionable strategies to ensure smooth crawling with Screaming Frog.
A 429 Too Many Requests error is an HTTP status code signaling that the client (in this case, Screaming Frog) has sent too many requests to the server within a given time frame, exceeding the server's rate limits. The status code is defined in RFC 6585 (Additional HTTP Status Codes) and is commonly used by websites to protect their servers from being overwhelmed by automated tools, bots, or excessive user activity.
When Screaming Frog triggers a 429 error, it means the target website's server has detected an unusually high volume of requests from your IP address or user agent and has temporarily blocked further requests to prevent overloading.
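To make this concrete, here is a minimal sketch (not Screaming Frog itself, just the Python requests library against a hypothetical URL) showing what a rate-limited response looks like and where the server's hint about when to retry lives:

```python
import requests

# Hypothetical URL, used purely for illustration.
url = "https://example.com/some-page"

response = requests.get(url, timeout=10)

if response.status_code == 429:
    # Well-behaved servers often include a Retry-After header with a 429,
    # giving either a number of seconds or an HTTP date to wait until.
    retry_after = response.headers.get("Retry-After")
    print(f"Rate limited. Server asks us to wait: {retry_after}")
else:
    print(f"Status: {response.status_code}")
```

Screaming Frog surfaces the same information in its Response Codes tab; the snippet simply shows what is happening at the HTTP level.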
Screaming Frog is a powerful SEO spider that crawls websites by sending multiple HTTP requests to fetch pages, images, scripts, and other resources. However, websites often implement rate-limiting mechanisms to protect their servers from abuse or overloading. The following factors contribute to 429 errors when using Screaming Frog:
Aggressive Crawl Settings: Screaming Frog's default settings may send multiple concurrent requests (threads), which can overwhelm a server's capacity, especially for smaller or less robust websites.
Server-Side Rate Limits: Many websites, particularly those using content delivery networks (CDNs) like Cloudflare or Akamai, enforce strict rate limits to prevent bot activity or distributed denial-of-service (DDoS) attacks.
IP-Based Restrictions: Servers may limit requests from a single IP address, which can affect Screaming Frog if you're crawling from a static IP.
User Agent Detection: Some servers apply different rate limits based on the user agent. Screaming Frog's default user agent ("Screaming Frog SEO Spider") may be flagged as a bot and subjected to stricter limits.
High-Traffic Periods: Crawling during peak server usage times can increase the likelihood of hitting rate limits, as the server is already handling significant traffic.
Sensitive Websites: Some websites, especially those with limited resources or strict security policies, are more likely to enforce aggressive rate limiting.
Understanding these causes is the first step to mitigating 429 errors. Let’s explore how to prevent and resolve them.
To ensure a smooth crawling experience and avoid 429 errors, you can implement the following strategies in Screaming Frog. These solutions range from adjusting crawl settings to coordinating with website administrators.
One of the most effective ways to prevent 429 errors is to reduce the aggressiveness of your crawl by lowering the number of concurrent requests (threads) Screaming Frog sends to the server.
How to Do It:
Navigate to Configuration > Speed in Screaming Frog.
Reduce the Max Threads setting. Lowering it to the 1–5 range is a sensible starting point for most websites; for particularly sensitive sites, or when 429 errors keep appearing, drop to a single thread.
Monitor the crawl and adjust as needed to find a balance between speed and server tolerance.
Why It Works: Fewer concurrent requests reduce the server load, making it less likely to trigger rate-limiting mechanisms.
Pro Tip: If you're crawling a large website, reducing threads will slow the crawl considerably. Plan for a longer crawl window, or segment the crawl (for example, by subfolder or subdomain) so each run stays within the server's tolerance. The sketch below shows how a thread cap translates into request behavior.
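Screaming Frog manages its threading internally, but the principle is the same as capping worker count in any crawler. A minimal sketch using Python's ThreadPoolExecutor and a hypothetical URL list, where max_workers plays the role of Max Threads:

```python
import requests
from concurrent.futures import ThreadPoolExecutor

# Hypothetical list of URLs; in Screaming Frog this would be the crawl queue.
urls = [f"https://example.com/page-{i}" for i in range(20)]

def fetch(url):
    # Each worker issues one request at a time.
    return url, requests.get(url, timeout=10).status_code

# max_workers caps concurrency the way Max Threads does: only this many
# requests are ever in flight against the server at once.
with ThreadPoolExecutor(max_workers=2) as pool:
    for url, status in pool.map(fetch, urls):
        print(status, url)
```

With two workers instead of ten, the server sees a fraction of the concurrent load, which is often the difference between staying under a rate limit and tripping it.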
Adding a delay between requests can further reduce the strain on the server, giving it time to process each request before receiving the next.
How to Do It:
Go to Configuration > Speed in Screaming Frog.
Enable the Limit URL/s option and set a low value (for example, 0.2–1 URLs per second, which works out to roughly a 1–5 second gap between requests).
For very restrictive servers, you may need to increase the delay further.
Why It Works: Spacing out requests mimics human browsing behavior, reducing the likelihood of being flagged as a bot.
Pro Tip: Combine this with reduced threads for maximum effect on sensitive websites.
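For comparison, this is what the same politeness looks like in a hand-rolled script: a fixed pause between sequential requests. It is only a sketch of the idea (Screaming Frog applies it for you via the Speed configuration), with hypothetical URLs:

```python
import time
import requests

# Hypothetical URLs; DELAY_SECONDS mirrors a 1-5 second crawl delay.
urls = ["https://example.com/a", "https://example.com/b", "https://example.com/c"]
DELAY_SECONDS = 2

for url in urls:
    status = requests.get(url, timeout=10).status_code
    print(status, url)
    # Pause between requests so the server sees a steady, human-like pace.
    time.sleep(DELAY_SECONDS)
```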
Some servers apply different rate limits based on the user agent. By default, Screaming Frog identifies itself as "Screaming Frog SEO Spider," which some servers may flag as a bot and restrict.
How to Do It:
Navigate to Configuration > User-Agent in Screaming Frog.
Experiment with different user agents, such as:
Googlebot: Mimics Google's crawler, which some servers may prioritize.
Bingbot: Similar to Googlebot, but may face different rate limits.
Generic Browser (e.g., Chrome): Simulates a typical user browser, which may bypass bot-specific restrictions.
Alternatively, create a custom user agent (e.g., "MyAgency-SF") if you know the site has specific whitelisting rules.
Why It Works: Servers may apply less stringent rate limits to recognized user agents, especially those associated with major search engines or browsers.
Pro Tip: If you’re crawling a site you own or have a relationship with, ask the site administrator if they whitelist specific user agents and configure Screaming Frog accordingly.
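The user agent is simply an HTTP request header, so it helps to see it at that level. A minimal sketch with the requests library; the custom string here is a made-up example like the "MyAgency-SF" name above, not something the tool or any site expects:

```python
import requests

# Hypothetical custom user agent; agree the exact string with the site owner
# if they whitelist crawlers by user agent.
headers = {"User-Agent": "MyAgency-SF/1.0 (SEO audit; contact: seo@example.com)"}

response = requests.get("https://example.com/", headers=headers, timeout=10)
print(response.status_code)
```

In Screaming Frog the same header is set for you once you pick or define a user agent under Configuration > User-Agent.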
If you’re crawling a website you own or manage, or if you have a relationship with the site’s administrators, you can request to have your IP address whitelisted to bypass rate limits.
How to Do It:
Identify your static IP address (if you’re using a dynamic IP, consider using a VPN or proxy with a fixed IP).
Contact the website administrator or hosting provider and request that your IP be added to their whitelist.
If using a CDN like Cloudflare, you may need to adjust settings in the CDN’s dashboard to allow your IP.
Why It Works: Whitelisting exempts your IP from rate-limiting rules, allowing uninterrupted crawling.
Pro Tip: If you don’t have a static IP, consider using a dedicated crawling server or proxy service with a fixed IP for consistent results.
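Before making the request, confirm which public IP your crawls actually leave from (it is the address the server sees, not your machine's local one). One quick way to check it, assuming the api.ipify.org echo service is reachable from your network:

```python
import requests

# api.ipify.org simply echoes back the public IP your request arrived from.
public_ip = requests.get("https://api.ipify.org", timeout=10).text
print(f"Ask the administrator to whitelist: {public_ip}")
```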
If rate limiting is tied to your IP address, using a VPN or proxy can help by changing your IP for each crawl session.
How to Do It:
Subscribe to a reputable VPN service or proxy provider.
Configure your network to route Screaming Frog’s traffic through the VPN or proxy.
Rotate IPs if the VPN/proxy service supports it to avoid repeated rate-limiting.
Why It Works: Changing your IP address can bypass IP-based rate limits, especially if the server is blocking your original IP due to excessive requests.
Pro Tip: Be cautious with proxies, as low-quality services may use shared IPs that are already flagged by servers, potentially worsening the issue.
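Mechanically, a proxy just means your requests leave from a different IP. A minimal sketch with the requests library and a placeholder proxy address (Screaming Frog has its own proxy settings, so you would normally configure it there rather than script it):

```python
import requests

# Placeholder proxy address; substitute your provider's host, port, and credentials.
proxies = {
    "http": "http://user:password@proxy.example.com:8080",
    "https": "http://user:password@proxy.example.com:8080",
}

# The target server sees the proxy's IP rather than yours, so IP-based
# rate limits tied to your original address no longer apply.
response = requests.get("https://example.com/", proxies=proxies, timeout=15)
print(response.status_code)
```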
Server load varies throughout the day, with peak times often leading to stricter rate-limiting. Scheduling crawls during off-peak hours can reduce the likelihood of 429 errors.
How to Do It:
Use Screaming Frog’s Scheduler feature (available in the paid version) to run crawls during low-traffic periods, such as late at night or early in the morning (e.g., 2 AM–6 AM in the server’s time zone).
Check the website’s analytics (if accessible) to identify low-traffic periods.
Why It Works: Servers are less likely to enforce strict rate limits when they’re under less strain from other users.
Pro Tip: If you don’t have access to the paid version, manually start crawls during off-peak times to achieve similar results.
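If you script the kickoff yourself, a small guard can confirm you are inside the off-peak window in the server's time zone before starting. A sketch using Python's standard zoneinfo module; the time zone and window are example values:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Example values: adjust the time zone and window to the target server.
SERVER_TZ = ZoneInfo("America/New_York")
OFF_PEAK_START, OFF_PEAK_END = 2, 6  # 2 AM to 6 AM server time

now = datetime.now(SERVER_TZ)
if OFF_PEAK_START <= now.hour < OFF_PEAK_END:
    print("Inside the off-peak window: safe to start the crawl.")
else:
    print(f"It is {now:%H:%M} server time: wait for the off-peak window.")
```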
If you’re crawling a third-party website and continue to encounter 429 errors despite optimizations, reaching out to the website administrator can be a last resort.
How to Do It:
Identify the site’s contact information (e.g., via the website’s contact page or WHOIS data).
Explain your purpose for crawling (e.g., SEO audit, research) and request adjustments to their rate-limiting policies or whitelisting for your IP or user agent.
Be professional and transparent about your intentions to build trust.
Why It Works: Administrators may be willing to accommodate legitimate crawling needs, especially if you’re performing an SEO audit for their benefit.
Pro Tip: Provide details about your crawl settings (e.g., user agent, crawl speed) to help the administrator make informed adjustments.
For particularly challenging websites, consider these advanced strategies:
Pause and Resume Crawls: If you encounter a 429 error during a crawl, pause the session in Screaming Frog, wait a few minutes (or longer, depending on the server’s rate-limit reset period), and resume. This can help avoid prolonged blocks.
Distributed Crawling: If you have access to multiple IPs or servers, distribute the crawl across them to reduce the load on any single IP. This requires advanced setup and is typically reserved for large-scale projects.
API Access: If the website offers an API, consider using it instead of crawling the front-end. APIs often have higher rate limits or dedicated access tokens. Check with the website administrator for API availability.
Monitor HTTP Headers: Some servers return a Retry-After header with 429 responses, indicating how long to wait before sending another request. Screaming Frog can store HTTP response headers via its extraction settings, so check for this value and pause or slow the crawl accordingly; a scripted version of the same logic is sketched after this list.
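To make the Retry-After logic concrete, here is a hedged sketch of how a custom script could honor it: wait the number of seconds the server advertises, or fall back to exponential backoff when the header is missing or uses an HTTP-date value:

```python
import time
import requests

def fetch_with_backoff(url, max_attempts=5):
    """Fetch a URL, honoring Retry-After on 429 responses."""
    fallback = 5  # seconds to wait when no usable Retry-After is provided
    for attempt in range(1, max_attempts + 1):
        response = requests.get(url, timeout=10)
        if response.status_code != 429:
            return response
        # Retry-After is usually a number of seconds; for simplicity this
        # sketch ignores HTTP-date values and uses the fallback instead.
        retry_after = response.headers.get("Retry-After", "")
        wait = int(retry_after) if retry_after.isdigit() else fallback
        print(f"429 received, waiting {wait}s before retry {attempt}")
        time.sleep(wait)
        fallback *= 2  # exponential backoff for the fallback case
    return response

# Hypothetical URL for illustration.
print(fetch_with_backoff("https://example.com/some-page").status_code)
```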
To minimize the risk of 429 errors in future crawls, adopt these best practices:
Start Conservative: Begin with low thread counts (e.g., 1–5) and a delay of 1–2 seconds, then gradually increase as you confirm the server’s tolerance.
Test Incrementally: Run small test crawls (e.g., 100 URLs) to gauge the server’s response before launching a full crawl.
Monitor Crawl Logs: Regularly check Screaming Frog’s Response Codes tab for 429 errors and adjust settings immediately if they appear.
Respect Robots.txt: Ensure Screaming Frog is configured to respect the website's robots.txt file, as ignoring it may trigger stricter rate limits. A quick programmatic check is sketched after this list.
Keep Software Updated: Use the latest version of Screaming Frog, as updates often include improvements to crawling efficiency and error handling.
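If you want to confirm what robots.txt allows (and whether it declares a crawl delay) before launching a crawl, Python's standard urllib.robotparser can do it in a few lines; the site and user agent below are just examples:

```python
from urllib.robotparser import RobotFileParser

# Load and parse the site's robots.txt (hypothetical site for illustration).
parser = RobotFileParser("https://example.com/robots.txt")
parser.read()

# Check whether a given user agent may fetch a path, and whether
# robots.txt declares a Crawl-delay directive for it.
user_agent = "Screaming Frog SEO Spider"
print(parser.can_fetch(user_agent, "https://example.com/some-page"))
print(parser.crawl_delay(user_agent))
```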
Encountering 429 Too Many Requests errors in Screaming Frog is a common challenge when crawling websites with strict rate-limiting policies. By understanding the causes—such as aggressive crawl settings, IP-based restrictions, or server-side protections—you can take proactive steps to prevent these errors. Adjusting crawl speed, changing user agents, implementing delays, and coordinating with website administrators are all effective strategies to ensure smooth and uninterrupted crawls.
By following the techniques outlined in this guide, you can optimize your Screaming Frog setup to respect server limits while still gathering the SEO data you need. If you’re still facing issues, don’t hesitate to experiment with advanced techniques or seek assistance from the website’s administrators. With the right approach, 429 errors can become a manageable hurdle rather than a roadblock to your SEO efforts.