Delay and throttling are techniques that control how fast your web scraper sends requests to a website. By intentionally slowing down or spacing out your requests, you avoid overwhelming servers and reduce your chances of getting blocked.
What is delay in web scraping?
A request delay is a pause between each HTTP request your scraper makes. Instead of firing off hundreds of requests per second, you add a waiting period, typically between 1 and 5 seconds. This gives the target server breathing room and makes your scraper behave more like a human browsing the site.
You can use fixed delays (the same pause every time) or randomized delays (varying the wait time within a range). Randomized delays are generally better because perfectly regular timing looks suspicious to bot detection systems.
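To make this concrete, here is a minimal sketch in Python using the requests library (the library choice and the placeholder URLs are assumptions for illustration, not part of any particular scraper):

```python
import random
import time

import requests

# Placeholder URLs for illustration only.
urls = ["https://example.com/page/1", "https://example.com/page/2"]

for url in urls:
    response = requests.get(url)
    print(url, response.status_code)
    # Randomized delay: pause between 1 and 4 seconds so the timing
    # pattern does not look machine-regular. A fixed delay would be
    # time.sleep(2) instead.
    time.sleep(random.uniform(1, 4))
```

Swapping `random.uniform(1, 4)` for a constant gives you a fixed delay; the randomized version is the one that avoids a perfectly regular timing fingerprint.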
What is throttling?
Throttling takes delay a step further by dynamically adjusting your request rate based on real-time feedback. If the server starts responding slowly or returns error codes, your scraper automatically slows down. When things look healthy again, it can speed back up.
Think of throttling as adaptive cruise control for your scraper. It responds to road conditions instead of driving at a constant speed regardless of traffic.
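One rough way to sketch this idea in Python (the thresholds, multipliers, and URLs below are illustrative assumptions, not a standard algorithm):

```python
import time

import requests

class AdaptiveThrottle:
    """Grow the delay when the server struggles; shrink it when it recovers."""

    def __init__(self, min_delay=1.0, max_delay=30.0):
        self.delay = min_delay
        self.min_delay = min_delay
        self.max_delay = max_delay

    def record(self, status_code, elapsed):
        if status_code in (429, 503) or elapsed > 2.0:
            # Error codes or slow responses: back off by doubling the delay.
            self.delay = min(self.delay * 2, self.max_delay)
        else:
            # Healthy response: ease back toward the minimum delay.
            self.delay = max(self.delay * 0.9, self.min_delay)

throttle = AdaptiveThrottle()
for url in ["https://example.com/a", "https://example.com/b"]:
    time.sleep(throttle.delay)
    start = time.monotonic()
    response = requests.get(url)
    throttle.record(response.status_code, time.monotonic() - start)
```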
Why rate limiting matters
Rate limiting is the server-side defense you are trying to work around. Websites set caps on how many requests an IP address or user can make within a specific time window. Common limits might be 60 requests per minute or 1,000 requests per hour.
When you exceed these limits, the server typically responds with:
A 429 Too Many Requests status code
A 403 Forbidden error
A temporary or permanent IP block
A CAPTCHA challenge that halts automated access
Understanding rate limits helps you configure your delays and throttling to stay just under the threshold.
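For example, with a published cap of 60 requests per minute you can compute the minimum spacing between requests and leave a safety margin. A minimal sketch (the 60-per-minute cap and the 90% safety factor are assumed values):

```python
import time

REQUESTS_PER_MINUTE = 60   # assumed published limit
SAFETY = 0.9               # use only 90% of the allowance to stay under the cap
MIN_INTERVAL = 60.0 / (REQUESTS_PER_MINUTE * SAFETY)  # ~1.11 seconds

_last_request = 0.0

def wait_for_slot():
    """Block until enough time has passed since the previous request."""
    global _last_request
    elapsed = time.monotonic() - _last_request
    if elapsed < MIN_INTERVAL:
        time.sleep(MIN_INTERVAL - elapsed)
    _last_request = time.monotonic()
```

Calling `wait_for_slot()` before each request keeps you at roughly 54 requests per minute, just under the assumed threshold.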
Best practices for delay and throttling
Start slow and adjust: Begin with longer delays (3 to 5 seconds) and shorten them if the site handles requests well. It is easier to speed up than to recover from a ban.
Add randomization: Instead of waiting exactly 2 seconds every time, use a range like 1 to 4 seconds. This pattern looks more natural.
Implement exponential backoff: When you hit a 429 or 403 error, do not just retry immediately. Wait longer with each failed attempt, such as 5 seconds, then 10, then 20 (see the sketch after this list).
Monitor response times: If pages that normally load in 200 milliseconds start taking 2 seconds, the server is likely stressed. Slow down before you get blocked.
Combine with other techniques: Throttling works best alongside proxy rotation and user agent rotation. Spreading requests across multiple IPs while maintaining reasonable delays gives you the best protection.
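The exponential backoff practice above translates directly into code. Here is one possible sketch in Python with requests (the retry cap and the 429/403 status check are assumptions based on the limits discussed earlier):

```python
import random
import time

import requests

def fetch_with_backoff(url, max_retries=5, base_delay=5.0):
    """Retry on 429/403, doubling the wait after each failure: 5s, 10s, 20s..."""
    for attempt in range(max_retries):
        response = requests.get(url)
        if response.status_code not in (429, 403):
            return response
        # Exponential backoff with a little jitter so retries don't line up
        # into another detectable pattern.
        wait = base_delay * (2 ** attempt) + random.uniform(0, 1)
        time.sleep(wait)
    raise RuntimeError(f"Giving up on {url} after {max_retries} attempts")
```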
Common throttling approaches
| Approach | How it works | Best for |
| --- | --- | --- |
| Fixed delay | Same pause between every request | Simple projects, stable sites |
| Random delay | Varies wait time within a range | Avoiding detection patterns |
| Adaptive throttling | Adjusts speed based on server response | Large-scale, long-running scrapers |
| Exponential backoff | Increases wait time after each error | Recovering from rate limit hits |
How Browse AI handles delay and throttling
Setting up proper delays and throttling logic requires ongoing maintenance, especially as websites update their defenses. Browse AI handles this complexity for you automatically. The platform manages request pacing, adapts to rate limits, and rotates through proxies without requiring any coding. You simply point it at the data you need, and Browse AI figures out the optimal speed to extract it reliably.