In September 2025, a quiet change to Google Search sent shockwaves through the SEO and web scraping industries. Without any official announcement or documentation, Google permanently disabled the &num=100 URL parameter, which had allowed users to view 100 search results on a single page for over 15 years.
Dubbed the “Googlopocalypse” by SEO specialists, this change has made data collection from Google Search 10x slower and more expensive. What was once a single request now requires 10 separate requests to gather the same 100 results. For businesses that rely on SERP data for rank tracking, competitor analysis and market research, this has forced a complete overhaul of their workflows.
In this guide, we’ll break down exactly what changed, why Google removed the &num=100 parameter, how it has affected different industries, and the only reliable working methods to collect top-100 search results in 2026.

What Was &num=100 and Why It Mattered
The &num= parameter was an undocumented but widely used feature of Google Search that allowed users to control how many results appeared on each page. By adding &num=100 to the end of any search URL, you could view the top 100 results for any query on a single page, instead of the default 10.
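As a quick illustration, here is how such a URL could be built in Python (the query is just an example; since September 2025, Google ignores the parameter and returns 10 results regardless):

```python
from urllib.parse import quote_plus

query = "best wireless headphones"
url = f"https://www.google.com/search?q={quote_plus(query)}&num=100"
print(url)
# https://www.google.com/search?q=best+wireless+headphones&num=100
```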
For over a decade, this parameter was the foundation of almost all SERP scraping and SEO tools. It offered three unbeatable advantages:
1. Massive time savings: Collecting 100 results in one request instead of 10 reduced scraping time by 90%
2. Lower resource usage: Fewer requests meant less server load, less bandwidth and fewer CAPTCHAs
3. Simpler data extraction: A flat, unified list of 100 results made parsing and analysis straightforward
Millions of SEO specialists, data analysts and developers relied on &num=100 every day. It was so ubiquitous that almost every SERP tool on the market was built around it.
The Timeline of the Shutdown
The removal of &num=100 was rolled out gradually over the course of several days:
- September 10-11, 2025: Initial A/B testing begins. Some users in the US and Europe see the parameter stop working, while others still have access. The first reports appear on X and SEO forums.
- September 12-13, 2025: The rollout expands to all English-language regions. Major SEO tools start reporting widespread outages and data gaps.
- September 14, 2025: The change is fully deployed globally across all languages and regions. The &num=100 parameter no longer works for any user, and Google ignores all values other than 10.
Google has never officially commented on the change, and there is no indication that the parameter will ever be restored.
Why Google Removed &num=100
While Google hasn’t explained its decision, there are four clear reasons for the change:
1. Mobile-first focus: Over 70% of Google searches now happen on mobile devices, where scrolling through 100 results on a single page is impractical.
2. Increased ad revenue: Fewer results per page means more space for ads and other monetized SERP elements like Local Packs and Shopping ads.
3. AI search push: Google is shifting its focus to AI-generated answers, which are designed to replace long lists of blue links. The &num=100 parameter was incompatible with this new direction.
4. Anti-scraping measure: The parameter made it extremely easy to scrape large amounts of SERP data at scale. Removing it significantly increases the cost and complexity of scraping Google.
The Immediate Impact
The removal of &num=100 has had far-reaching consequences across multiple industries:
- SEO tools: Almost all rank tracking and SERP analysis tools announced immediate price increases of 30-100% to cover the increased cost of data collection.
- Scraping operations: CAPTCHA rates increased by 300% overnight, as scrapers were forced to send 10x more requests to Google.
- Website traffic: Many sites ranking in positions 11-100 saw a 20-40% drop in organic traffic, as users almost never click past the first page of results anymore.
- Long-tail visibility: Long-tail keywords and niche content have effectively been hidden from most users, as they rarely appear in the top 10 results.
The Only Working Basic Workaround: The &start= Parameter
There is no direct replacement for &num=100, but you can still collect the top 100 results by using Google’s pagination parameter &start=.
This parameter specifies the starting position of the results on the page. For example:
- &start=0 returns results 1-10
- &start=10 returns results 11-20
- &start=90 returns results 91-100
To collect the top 100 results, you simply loop through 10 pages, incrementing the &start parameter by 10 each time. Here’s a basic Python implementation:
```python
import time
import requests
from urllib.parse import quote_plus

def get_google_top_100(query):
    results = []
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/143.0.0.0 Safari/537.36"
    }
    for page in range(10):
        start = page * 10
        url = f"https://www.google.com/search?q={quote_plus(query)}&start={start}&hl=en"
        # Add proxy configuration here:
        # proxies = {"http": "your-ipfly-proxy", "https": "your-ipfly-proxy"}
        # response = requests.get(url, headers=headers, proxies=proxies)
        response = requests.get(url, headers=headers)
        print(f"Fetched page {page + 1}: {url}")
        # Your parsing logic here:
        # results.extend(parsed_results)
        # Delay between requests to avoid blocks
        time.sleep(1.5)
    return results

# Usage
top_100 = get_google_top_100("best wireless headphones 2026")
```
Critical Limitation of the Basic Method
While the &start= parameter works in theory, it has one major flaw: sending 10 consecutive requests from the same IP address will almost always trigger Google’s anti-bot systems, resulting in CAPTCHAs or temporary IP bans.
This is where high-quality proxies become essential. With 10x more requests being sent to Google, you need to distribute your traffic across thousands of unique IP addresses to avoid being flagged.
IPFLY’s global pool of 10+ million residential IPs is perfectly suited for this task. You can configure automatic rotation on every request, ensuring that no single IP sends more than one search query. This mimics the behavior of real human users and drastically reduces CAPTCHA rates, allowing you to collect SERP data reliably at scale.
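Per-request rotation can be sketched as follows. The proxy addresses below are placeholders, not real endpoints: substitute the gateway address and credentials from your own provider's dashboard.

```python
import random

# Placeholder endpoints -- replace with your provider's rotating
# gateway or a list of residential proxy addresses.
PROXY_POOL = [
    "http://user:pass@gw1.example-proxy.com:8000",
    "http://user:pass@gw2.example-proxy.com:8000",
]

def next_proxy():
    """Pick a proxy at random so consecutive requests leave
    from different IP addresses."""
    proxy = random.choice(PROXY_POOL)
    return {"http": proxy, "https": proxy}

# Pass the result to requests on each call:
# response = requests.get(url, headers=headers, proxies=next_proxy())
```

With a rotating gateway, the provider swaps the exit IP for you, so a single entry in the pool is often enough.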
Common Pitfalls to Avoid
- Hardcoding result counts: Google may return fewer than 10 results per page for some queries, especially if you’re not logged in. Always count the actual results returned instead of assuming 10 per page.
- Ignoring dynamic elements: Modern SERPs are filled with dynamic content like People Also Ask boxes, videos and AI overviews that don’t follow the standard organic result format.
- Scraping too fast: Even with proxies, sending requests too quickly will trigger anti-bot systems. Add random delays between 1-3 seconds per request.
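The first and third pitfalls can be handled together in one small helper. This is a sketch under the assumption that each page has already been parsed into a list of result URLs; it assigns absolute positions from whatever was actually returned rather than assuming 10 per page, and pauses a random interval between pages.

```python
import random
import time

def collect_positions(pages, delay_range=(1.0, 3.0)):
    """Assign absolute SERP positions from the results each page
    actually returned, instead of assuming 10 organic results per page."""
    positions = {}
    rank = 1
    for page_results in pages:          # each item: list of URLs from one page
        for url in page_results:
            positions[url] = rank
            rank += 1
        # random pause between page fetches to look less bot-like
        time.sleep(random.uniform(*delay_range))
    return positions
```

For example, if page one returns only 8 organic results, the first result on page two is correctly ranked 9, not 11.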

The removal of Google’s &num=100 parameter was a seismic shift for anyone who works with SERP data. While it has made scraping more complex and expensive, it is still possible to collect the top 100 results reliably using the &start= parameter combined with high-quality rotating proxies.
The key to success in this new environment is to adapt your workflows to the new reality, invest in reliable infrastructure, and follow best practices for ethical scraping. In our next guide, we’ll dive deeper into advanced scraping techniques that can handle modern Google’s dynamic content and strict anti-bot systems.