Rank Tracker API Without Blocks: How IPFLY Enables 99.9% Accurate Monitoring


If you’ve built or relied on rank tracking infrastructure, you know the pattern: initial success followed by progressive degradation. A few keywords track perfectly, then CAPTCHAs appear. Geographic targeting works for popular markets, then starts returning distorted results. Scale increases, and suddenly entire IP ranges face permanent blocks.

The rank tracker api landscape is littered with partial solutions—tools that work until they don’t, platforms that promise scale but throttle performance, and DIY implementations that collapse under detection pressure. The fundamental issue isn’t technical capability; it’s infrastructure authenticity. When your rank tracking requests originate from identifiable data center IPs, commercial VPN ranges, or known proxy networks, sophisticated search engine protection systems respond with escalating countermeasures.

This is why professional SEO operations—agencies managing hundreds of clients, enterprises tracking millions of keywords, and intelligence platforms serving thousands of users—require rank tracker api infrastructure built on genuine residential network presence. Not as an optimization, but as a prerequisite for reliable operation.


The Architecture of Effective Rank Tracker API Systems

Core Functional Requirements

A production-grade rank tracker api must deliver:

Geographic Precision: Rankings vary dramatically by location. Accurate tracking requires city-level or even neighborhood-level query origination, not approximate regional presence.

Device and Platform Coverage: Desktop and mobile results often diverge significantly. Comprehensive monitoring requires authentic representation of both environments.

SERP Feature Extraction: Modern results extend far beyond ten blue links. Featured snippets, local packs, knowledge panels, People Also Ask, and video carousels all influence visibility and require capture.

Temporal Consistency: Rankings fluctuate throughout the day. Professional tracking requires scheduled, reliable collection that maintains consistent timing without triggering velocity-based detection.

Scale and Concurrency: Enterprise operations track thousands to millions of keywords. Infrastructure must support massive parallel query execution without performance degradation or detection risk.
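Taken together, these requirements define the shape of a tracking job. As a rough illustration (the class name, fields, and defaults below are invented for this example, not part of any IPFLY API), they can be captured in a small configuration object:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrackingJobConfig:
    """Illustrative job spec covering the five requirements above."""
    keywords: List[str]
    locations: List[str]                  # city/region codes for geographic precision
    device_types: List[str] = field(default_factory=lambda: ["desktop", "mobile"])
    capture_serp_features: bool = True    # snippets, local packs, PAA, carousels
    schedule_cron: str = "0 */6 * * *"    # temporal consistency: every 6 hours
    max_concurrency: int = 100            # parallel query budget

    def total_queries(self) -> int:
        """Queries per scheduled run = keywords x locations x devices."""
        return len(self.keywords) * len(self.locations) * len(self.device_types)

cfg = TrackingJobConfig(keywords=["seo software"], locations=["us-nyc", "us-chi"])
```

One run of this hypothetical job issues `cfg.total_queries()` searches (here, 1 keyword x 2 locations x 2 devices = 4), which is the number that drives both the concurrency budget and the proxy-distribution math later in this article.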

The Detection Challenge

Search engines deploy sophisticated protection specifically targeting rank tracker api operations:

IP Reputation Analysis: Data center ranges, hosting provider IPs, and known automation infrastructure populate real-time blocklists. Queries from these sources face immediate restriction or misleading results.

Behavioral Fingerprinting: Request timing, header signatures, TLS characteristics, and navigation patterns reveal automation. Even “stealth” configurations often exhibit detectable regularities.

Geographic Consistency Verification: Impossible travel scenarios—queries from multiple continents within minutes—trigger security responses. Distributed collection requires authentic local presence, not VPN-style routing.

Progressive Response Escalation: Initial rate limiting escalates to CAPTCHA challenges, temporary blocks, and permanent blacklisting. Recovery from advanced restrictions requires infrastructure replacement, not configuration adjustment.
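Because restrictions escalate, retry timing should escalate with them rather than hammer at a fixed interval. A minimal sketch of jittered exponential backoff (the base and cap values are illustrative, not tuned recommendations):

```python
import random

def backoff_delay(consecutive_failures: int, base: float = 5.0, cap: float = 300.0) -> float:
    """Exponential backoff with jitter: each escalation level (rate limit ->
    CAPTCHA -> temporary block) warrants a longer pause before retrying.
    Jitter breaks the fixed-interval regularity that velocity analysis detects."""
    delay = min(cap, base * (2 ** consecutive_failures))
    return delay * random.uniform(0.5, 1.5)
```

A caller would `time.sleep(backoff_delay(n))` after the n-th consecutive failure, resetting the counter on success.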

IPFLY’s Solution: Residential Infrastructure for Rank Tracker API Excellence

Authentic Network Foundation

IPFLY provides rank tracker api developers with the critical infrastructure layer: 90+ million residential IP addresses across 190+ countries, representing genuine ISP-allocated connections to real consumer and business locations.

This residential foundation transforms rank tracking capability:

Undetectable Query Origination: Requests appear as legitimate user searches from authentic home and office internet connections. Search engine protection systems cannot distinguish professional rank tracking from ordinary consumer behavior.

Geographic Authenticity: City and state-level targeting ensures that rank tracker api queries capture genuine local search results, not VPN-approximated or data-center-distorted responses.

Massive Distribution Capacity: Millions of available IPs enable query distribution that maintains per-address frequencies well below detection thresholds while achieving aggregate collection velocity that enterprise scale requires.
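The arithmetic behind this claim is simple: spreading a fixed daily query volume across a large pool drives the per-IP request rate toward zero. A quick sketch (the 50,000-IP slice is a hypothetical allocation, not an IPFLY plan size):

```python
def per_ip_hourly_rate(total_queries: int, pool_size: int, hours: float = 24.0) -> float:
    """Average queries per IP per hour when load is spread evenly across the pool."""
    return total_queries / (pool_size * hours)

# 1,000,000 keyword checks per day over a 50,000-IP slice of the pool
# works out to well under one query per address per hour.
rate = per_ip_hourly_rate(1_000_000, 50_000)
```

At that rate each address looks like an occasional human searcher, while the aggregate throughput still covers a million keywords a day.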

Enterprise-Grade Operational Standards

Professional rank tracker api operations demand reliability:

99.9% Uptime SLA: Continuous monitoring cannot tolerate infrastructure interruptions. IPFLY’s redundant network architecture ensures consistent query capability.

Unlimited Concurrent Processing: From hundreds to millions of simultaneous keyword checks, infrastructure scales without throttling, queuing, or performance degradation.

Millisecond Response Optimization: High-speed backbone connectivity minimizes latency between query initiation and result capture, enabling real-time or near-real-time intelligence delivery.

24/7 Technical Support: Expert assistance for integration optimization, troubleshooting, and scaling guidance—not automated responses or community forums.

Data Quality Assurance

IPFLY enhances rank tracker api accuracy through:

IP Reputation Management: Rigorous filtering ensures that queries utilize only high-purity residential addresses with clean search histories, preventing contamination from previously flagged IPs.

Consistent Result Delivery: Authentic residential access eliminates the personalized distortions, blocking responses, and misleading results that compromise data center-based tracking.

Session Stability Options: Static residential allocations enable persistent local presence for longitudinal tracking, while dynamic rotation provides maximum distribution for broad coverage.
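In practice, residential providers commonly expose sticky-versus-rotating behavior through the proxy credentials. A hedged sketch of that pattern (the `-session-` username suffix is an assumption borrowed from common industry conventions, not documented IPFLY syntax; check the IPFLY dashboard for the exact format):

```python
def build_proxy_url(user: str, password: str, host: str, port: int,
                    session_id: str = None) -> str:
    """Rotating allocation by default; pin a session id for a sticky,
    static-style allocation used in longitudinal tracking.
    NOTE: the session-suffix format below is hypothetical."""
    username = f"{user}-session-{session_id}" if session_id else user
    return f"http://{username}:{password}@{host}:{port}"
```

The tracker classes later in this article would pass the resulting URL straight into `requests.Session.proxies`.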

Building Your Rank Tracker API: Technical Implementation

Python Core Implementation

Basic Rank Tracker with IPFLY Integration:

Python

import requests
from bs4 import BeautifulSoup
from urllib.parse import quote_plus, urlencode
from typing import List, Dict, Optional, Union
from dataclasses import dataclass
from datetime import datetime
import time
import random
import json

@dataclass
class RankingResult:
    keyword: str
    position: int
    url: str
    title: str
    description: str
    location: str
    device_type: str
    serp_features: Dict
    timestamp: datetime
    is_target: bool = False  # set when the result matches the tracked domain

class IPFLYRankTracker:
    """
    Production-grade rank tracker with IPFLY residential proxy integration.
    """

    SEARCH_ENGINES = {
        'google': 'https://www.google.com/search',
        'bing': 'https://www.bing.com/search'
    }

    def __init__(self, ipfly_config: Dict, engine: str = 'google'):
        self.ipfly_config = ipfly_config
        self.engine = engine
        self.session = requests.Session()

        # Configure IPFLY residential proxy
        proxy_url = (
            f"http://{ipfly_config['username']}:{ipfly_config['password']}"
            f"@{ipfly_config['host']}:{ipfly_config['port']}"
        )
        self.session.proxies = {'http': proxy_url, 'https': proxy_url}

        # Rotate realistic desktop user agents
        self.user_agents = [
            'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
            'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
            'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:120.0) Gecko/20100101 Firefox/120.0'
        ]
        # Mobile user agents for device-specific tracking
        self.mobile_agents = [
            'Mozilla/5.0 (iPhone; CPU iPhone OS 17_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.1 Mobile/15E148 Safari/604.1',
            'Mozilla/5.0 (Linux; Android 14; SM-S918B) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Mobile Safari/537.36'
        ]

    def construct_search_url(
        self,
        keyword: str,
        location: str = 'us',
        language: str = 'en',
        start: int = 0,
        device_type: str = 'desktop'
    ) -> str:
        """Build search URL with localization parameters."""
        base_url = self.SEARCH_ENGINES.get(self.engine, self.SEARCH_ENGINES['google'])

        params = {
            'q': keyword,  # urlencode handles escaping; quote_plus here would double-encode
            'hl': language,
            'gl': location,
            'start': start,
            'num': 100
        }
        # Device-specific parameters
        if device_type == 'mobile':
            params['uiv'] = 'mb'  # Mobile view indicator for some engines

        return f"{base_url}?{urlencode(params)}"

    def track_keyword(
        self,
        keyword: str,
        target_domain: Optional[str] = None,
        location: str = 'us',
        language: str = 'en',
        device_type: str = 'desktop',
        pages: int = 3
    ) -> List[RankingResult]:
        """
        Track keyword rankings with comprehensive result extraction.
        """
        all_results = []
        target_found = False

        for page in range(pages):
            offset = page * 100

            url = self.construct_search_url(
                keyword, location, language, offset, device_type
            )

            # Select appropriate user agent
            user_agent = (
                random.choice(self.mobile_agents)
                if device_type == 'mobile'
                else random.choice(self.user_agents)
            )

            headers = {
                'User-Agent': user_agent,
                'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
                'Accept-Language': f'{language}-{location},{language};q=0.9',
                'Accept-Encoding': 'gzip, deflate, br',
                'DNT': '1',
                'Connection': 'keep-alive',
                'Upgrade-Insecure-Requests': '1'
            }

            try:
                # Human-like delay with jitter
                time.sleep(random.uniform(3, 7))

                response = self.session.get(
                    url,
                    headers=headers,
                    timeout=45,
                    allow_redirects=True
                )
                response.raise_for_status()

                # Parse results with engine-specific logic
                if self.engine == 'google':
                    page_results = self._parse_google_results(
                        response.text, keyword, location, device_type, offset
                    )
                else:
                    page_results = self._parse_bing_results(
                        response.text, keyword, location, device_type, offset
                    )

                # Check for target domain
                if target_domain:
                    for result in page_results:
                        if target_domain in result.url:
                            target_found = True
                            result.is_target = True

                all_results.extend(page_results)

                # Early termination if target found or no more results
                if target_found or len(page_results) < 10:
                    break

            except requests.exceptions.RequestException as e:
                print(f"Request failed for '{keyword}' page {page}: {e}")
                # IPFLY's rotating proxy automatically handles IP refresh;
                # back off before attempting the next page
                time.sleep(random.uniform(10, 20))
                continue

        return all_results
    
    def _parse_google_results(
        self,
        html: str,
        keyword: str,
        location: str,
        device_type: str,
        offset: int
    ) -> List[RankingResult]:
        """Parse Google SERP results with feature extraction."""
        soup = BeautifulSoup(html, 'html.parser')
        results = []

        # Organic results
        result_containers = soup.select('div.g, div[data-header-feature]')

        for idx, container in enumerate(result_containers):
            position = offset + idx + 1
            try:
                title_elem = container.select_one('h3')
                url_elem = container.select_one('a[href]')
                desc_elem = container.select_one('div.VwiC3b, span.aCOpRe')

                # Extract SERP features
                features = self._extract_google_features(container)

                if title_elem and url_elem:
                    result = RankingResult(
                        keyword=keyword,
                        position=position,
                        url=url_elem.get('href', ''),
                        title=title_elem.get_text(strip=True),
                        description=desc_elem.get_text(strip=True) if desc_elem else '',
                        location=location,
                        device_type=device_type,
                        serp_features=features,
                        timestamp=datetime.utcnow()
                    )
                    results.append(result)
            except Exception as e:
                print(f"Parsing error at position {position}: {e}")
                continue

        # Extract additional SERP features (PAA, related searches, etc.)
        page_features = self._extract_page_level_features(soup)
        for result in results:
            result.serp_features.update(page_features)

        return results
    
    def _extract_google_features(self, container) -> Dict:
        """Extract Google-specific SERP features from a result container."""
        features = {
            'type': 'organic',
            'has_sitelinks': len(container.select('div.osl a')) > 0,
            'has_image': len(container.select('img.XNo5Ab')) > 0,
            'is_featured_snippet': 'xpdopen' in container.get('class', []),
            'is_local_pack': False,
            'is_knowledge_panel': False
        }

        # Detect rich result types
        if container.select('div.xpdopen'):
            features['type'] = 'featured_snippet'
        elif container.select('div.g-blk'):
            features['type'] = 'knowledge_panel'
        elif container.select('div.dbg0pd'):
            features['type'] = 'local_result'
            features['is_local_pack'] = True

        return features

    def _extract_page_level_features(self, soup) -> Dict:
        """Extract page-level SERP features."""
        return {
            'people_also_ask': [
                elem.get_text(strip=True)
                for elem in soup.select('div.related-question-pair span')
            ],
            'related_searches': [
                elem.get_text(strip=True)
                for elem in soup.select('div.AJLUJb a')
            ],
            'has_local_pack': len(soup.select('div#lclbox')) > 0,
            'has_knowledge_panel': len(soup.select('div.knowledge-panel')) > 0,
            'has_top_stories': len(soup.select('g-section-with-header')) > 0
        }

    def _parse_bing_results(self, html, keyword, location, device_type, offset):
        """Parse Bing SERP results."""
        # Implementation mirrors the Google parsing above,
        # with Bing-specific selectors; return an empty list until implemented
        # so track_keyword never receives None.
        return []


# Production usage example
if __name__ == "__main__":
    ipfly_config = {
        'host': 'proxy.ipfly.com',
        'port': '3128',
        'username': 'your_ipfly_username',
        'password': 'your_ipfly_password'
    }

    tracker = IPFLYRankTracker(ipfly_config, engine='google')

    # Track keyword with geographic and device precision
    results = tracker.track_keyword(
        keyword="enterprise seo software",
        target_domain="example.com",
        location="us",
        device_type="desktop",
        pages=2
    )

    print(f"Found {len(results)} results")
    for r in results[:10]:
        marker = " [TARGET]" if getattr(r, 'is_target', False) else ""
        print(f"{r.position}. {r.title[:60]}{marker}")
        print(f"   {r.url[:70]}...")

Advanced Browser-Based Implementation

For JavaScript-heavy SERPs and comprehensive feature extraction:

Python

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
from webdriver_manager.chrome import ChromeDriverManager
from typing import Dict, List, Optional
import time
import random

class BrowserBasedRankTracker:
    """
    Browser automation rank tracker with IPFLY SOCKS5 proxy and stealth configuration.
    """

    def __init__(self, ipfly_config: Dict, headless: bool = True):
        self.ipfly_config = ipfly_config
        self.headless = headless
        self.driver = None

    def initialize_driver(self):
        """Initialize stealth Chrome with IPFLY residential proxy."""
        chrome_options = Options()

        if self.headless:
            chrome_options.add_argument('--headless')

        # Essential stability arguments
        chrome_options.add_argument('--no-sandbox')
        chrome_options.add_argument('--disable-dev-shm-usage')
        chrome_options.add_argument('--disable-blink-features=AutomationControlled')
        chrome_options.add_argument('--disable-web-security')
        chrome_options.add_argument('--disable-features=IsolateOrigins,site-per-process')
        chrome_options.add_argument('--disable-site-isolation-trials')

        # IPFLY SOCKS5 proxy configuration
        socks_proxy = (
            f"{self.ipfly_config['host']}:{self.ipfly_config.get('socks_port', '1080')}"
        )
        chrome_options.add_argument(f'--proxy-server=socks5://{socks_proxy}')

        # Window size and user agent
        chrome_options.add_argument('--window-size=1920,1080')
        chrome_options.add_argument(
            '--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
            'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'
        )

        # Exclude automation indicators
        chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])
        chrome_options.add_experimental_option('useAutomationExtension', False)

        # Initialize driver
        service = Service(ChromeDriverManager().install())
        self.driver = webdriver.Chrome(service=service, options=chrome_options)

        # Execute CDP commands for stealth
        self.driver.execute_cdp_cmd('Page.addScriptToEvaluateOnNewDocument', {
            'source': '''
                Object.defineProperty(navigator, 'webdriver', {
                    get: () => undefined
                });
                Object.defineProperty(navigator, 'plugins', {
                    get: () => [1, 2, 3, 4, 5]
                });
                window.chrome = {
                    runtime: {}
                };
            '''
        })

        # Set additional headers via CDP
        self.driver.execute_cdp_cmd('Network.setExtraHTTPHeaders', {
            'headers': {'Accept-Language': 'en-US,en;q=0.9', 'DNT': '1'}
        })

    def track_with_visual_verification(
        self,
        keyword: str,
        location: str = 'United States',
        device_type: str = 'desktop'
    ) -> Dict:
        """
        Track rankings with full browser rendering and visual feature detection.
        """
        if not self.driver:
            self.initialize_driver()

        try:
            # Construct search URL with localization; fall back to 'us'
            # unless a two-letter country code was supplied
            country_code = location.lower() if len(location) == 2 else 'us'
            search_url = (
                f"https://www.google.com/search?"
                f"q={keyword.replace(' ', '+')}&"
                f"hl=en&"
                f"gl={country_code}"
            )

            # Add mobile parameter if needed
            if device_type == 'mobile':
                search_url += "&uiv=mb"

            self.driver.get(search_url)

            # Wait for results with realistic timing
            wait = WebDriverWait(self.driver, 15)
            wait.until(EC.presence_of_element_located((By.ID, "search")))

            # Random scroll behavior to mimic human interaction
            self._human_like_scroll()

            # Extract comprehensive results
            organic_results = self._extract_organic_with_selenium()
            serp_features = self._extract_all_serp_features()

            return {
                'keyword': keyword,
                'location': location,
                'device_type': device_type,
                'organic_results': organic_results,
                'serp_features': serp_features,
                'page_title': self.driver.title,
                'timestamp': time.time()
            }
        except Exception as e:
            print(f"Tracking failed: {e}")
            return {'error': str(e)}

    def _human_like_scroll(self):
        """Simulate human scrolling behavior."""
        try:
            # Initial pause
            time.sleep(random.uniform(2, 4))

            # Random scroll down
            for _ in range(random.randint(2, 5)):
                scroll_amount = random.randint(300, 800)
                self.driver.execute_script(f"window.scrollBy(0, {scroll_amount});")
                time.sleep(random.uniform(1, 3))

            # Occasional scroll up
            if random.random() > 0.7:
                self.driver.execute_script("window.scrollBy(0, -400);")
                time.sleep(random.uniform(1, 2))
        except Exception:
            pass

    def _extract_organic_with_selenium(self) -> List[Dict]:
        """Extract organic results using Selenium for JavaScript-rendered content."""
        results = []
        try:
            containers = self.driver.find_elements(By.CSS_SELECTOR, "div.g")

            for position, container in enumerate(containers, 1):
                try:
                    title = container.find_element(By.CSS_SELECTOR, "h3").text
                    url = container.find_element(By.CSS_SELECTOR, "a").get_attribute("href")

                    # Multiple attempts for description
                    description = ""
                    for selector in ["div.VwiC3b", "span.aCOpRe", "div.s3v94d"]:
                        try:
                            description = container.find_element(By.CSS_SELECTOR, selector).text
                            break
                        except Exception:
                            continue

                    results.append({
                        'position': position,
                        'title': title,
                        'url': url,
                        'description': description[:200],
                        'is_featured_snippet': 'xpdopen' in (container.get_attribute("class") or '')
                    })
                except Exception:
                    continue
        except Exception as e:
            print(f"Extraction error: {e}")

        return results
    
    def _extract_all_serp_features(self) -> Dict:
        """Extract comprehensive SERP feature set."""
        return {
            'featured_snippet': self._extract_featured_snippet(),
            'people_also_ask': self._extract_paa(),
            'local_pack': self._extract_local_pack(),
            'knowledge_panel': self._extract_knowledge_panel(),
            'top_stories': self._extract_top_stories(),
            'video_carousel': self._extract_video_carousel(),
            'image_pack': self._extract_image_pack(),
            'related_searches': self._extract_related_searches()
        }

    def _extract_featured_snippet(self) -> Optional[Dict]:
        """Extract featured snippet content."""
        try:
            snippet = self.driver.find_element(By.CSS_SELECTOR, "div.xpdopen")
            return {
                'type': 'paragraph',  # Simplified; detect the actual type in production
                'content': snippet.text[:500],
                'source': snippet.find_element(By.CSS_SELECTOR, "a").get_attribute("href")
            }
        except Exception:
            return None

    def _extract_paa(self) -> List[Dict]:
        """Extract People Also Ask questions and answers."""
        paa_data = []
        try:
            questions = self.driver.find_elements(By.CSS_SELECTOR, "div.related-question-pair")
            for q in questions:
                try:
                    question = q.find_element(By.CSS_SELECTOR, "span").text
                    # Click to expand and get answer
                    q.click()
                    time.sleep(0.5)
                    answer = q.find_element(By.CSS_SELECTOR, "div").text
                    paa_data.append({'question': question, 'answer': answer[:300]})
                except Exception:
                    continue
        except Exception:
            pass
        return paa_data

    def _extract_local_pack(self) -> Optional[List[Dict]]:
        """Extract local pack results."""
        try:
            local_results = []
            pack = self.driver.find_elements(By.CSS_SELECTOR, "div.dbg0pd")
            for item in pack:
                local_results.append({
                    'name': item.text,
                    'details': 'Extracted from local pack'
                })
            return local_results if local_results else None
        except Exception:
            return None

    def _extract_knowledge_panel(self) -> Optional[Dict]:
        """Extract knowledge panel information."""
        try:
            panel = self.driver.find_element(
                By.CSS_SELECTOR, "div.knowledge-panel, div#kp-wp-tab-overview"
            )
            return {
                'title': panel.find_element(By.CSS_SELECTOR, "h2, div[role='heading']").text,
                'entity_type': 'detected'
            }
        except Exception:
            return None

    def _extract_top_stories(self) -> List[Dict]:
        """Extract top stories/news results."""
        stories = []
        try:
            story_elements = self.driver.find_elements(
                By.CSS_SELECTOR, "g-section-with-header div[role='listitem']"
            )
            for elem in story_elements[:3]:
                stories.append({'title': elem.text[:100], 'source': 'news'})
        except Exception:
            pass
        return stories

    def _extract_video_carousel(self) -> List[Dict]:
        """Extract video carousel results."""
        videos = []
        try:
            video_elements = self.driver.find_elements(
                By.CSS_SELECTOR, "g-scrolling-carousel div[role='listitem']"
            )
            for elem in video_elements[:3]:
                videos.append({'title': elem.text[:100]})
        except Exception:
            pass
        return videos

    def _extract_image_pack(self) -> Optional[Dict]:
        """Extract image pack results."""
        try:
            pack = self.driver.find_element(By.CSS_SELECTOR, "div#imagebox_bigimages")
            return {
                'present': True,
                'count': len(pack.find_elements(By.CSS_SELECTOR, "img"))
            }
        except Exception:
            return None

    def _extract_related_searches(self) -> List[str]:
        """Extract related search queries."""
        related = []
        try:
            elements = self.driver.find_elements(By.CSS_SELECTOR, "div.AJLUJb a")
            for elem in elements:
                related.append(elem.text)
        except Exception:
            pass
        return related

    def close(self):
        """Clean up resources."""
        if self.driver:
            self.driver.quit()


# Production proxy rotation manager
class IPFLYRotationManager:
    """
    Manages IPFLY proxy rotation for high-volume rank tracking operations.
    """

    def __init__(self, ipfly_accounts: List[Dict]):
        self.accounts = ipfly_accounts
        self.current_idx = 0
        self.failure_counts = {i: 0 for i in range(len(self.accounts))}
        self.success_counts = {i: 0 for i in range(len(self.accounts))}

    def get_optimal_proxy(self, target_location: str = 'us') -> Dict:
        """Select best proxy based on performance history and location match."""
        candidates = []
        for idx, account in enumerate(self.accounts):
            # Skip high-failure IPs
            if self.failure_counts[idx] > 10:
                continue

            # Prioritize location match
            location_match = account.get('location', 'us') == target_location
            score = self.success_counts[idx] - (self.failure_counts[idx] * 2)
            if location_match:
                score += 5

            candidates.append((idx, score, account))

        if not candidates:
            # Reset and use all
            self.failure_counts = {i: 0 for i in range(len(self.accounts))}
            candidates = [(i, 0, acc) for i, acc in enumerate(self.accounts)]

        # Select highest-scoring candidate
        best = max(candidates, key=lambda x: x[1])
        return best[2]

    def report_result(self, proxy_idx: int, success: bool):
        """Update performance tracking."""
        if success:
            self.success_counts[proxy_idx] += 1
            self.failure_counts[proxy_idx] = max(0, self.failure_counts[proxy_idx] - 1)
        else:
            self.failure_counts[proxy_idx] += 1

FastAPI Production Service

Deploy your rank tracker api as a scalable web service:

Python

from fastapi import FastAPI, HTTPException, Depends, BackgroundTasks, Query
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from pydantic import BaseModel, Field, validator
from typing import List, Optional, Dict, Union
from datetime import datetime, timedelta
from enum import Enum
import asyncio
import aioredis
import json
import os
import time
import random

app = FastAPI(
    title="Rank Tracker API",
    description="Production-grade SEO rank tracking with IPFLY residential proxies",
    version="2.0.0",
    docs_url="/docs",
    redoc_url="/redoc")

security = HTTPBearer()


# Redis connection for caching and queues
async def get_redis():
    redis = await aioredis.from_url(
        os.getenv("REDIS_URL", "redis://localhost:6379"),
        decode_responses=True
    )
    return redis

class SearchEngine(str, Enum):
    google = "google"
    bing = "bing"


class DeviceType(str, Enum):
    desktop = "desktop"
    mobile = "mobile"
    tablet = "tablet"


class TrackingRequest(BaseModel):
    keywords: List[str] = Field(..., min_items=1, max_items=1000)
    target_domain: Optional[str] = Field(None, description="Domain to highlight in results")
    locations: List[str] = Field(default=["us"], max_items=50)
    device_types: List[DeviceType] = Field(default=[DeviceType.desktop])
    search_engine: SearchEngine = Field(default=SearchEngine.google)
    pages_per_keyword: int = Field(default=2, ge=1, le=10)
    priority: int = Field(default=1, ge=1, le=5)

    @validator('keywords')
    def validate_keywords(cls, v):
        if any(len(k) > 500 for k in v):
            raise ValueError("Individual keywords must be under 500 characters")
        return v

class RankingData(BaseModel):
    keyword:str
    position:int
    url:str
    title:str
    description:str
    location:str
    device_type:str
    serp_features: Dict
    is_target:bool=False
    timestamp: datetime

class TrackingResponse(BaseModel):
    job_id: str
    status: str
    keywords_submitted: int
    estimated_completion: str
    check_status_url: str


class JobStatus(BaseModel):
    job_id: str
    status: str  # queued, processing, completed, failed
    progress_percent: float
    results_count: int
    errors: List[str]
    completed_at: Optional[datetime]


def verify_api_key(credentials: HTTPAuthorizationCredentials = Depends(security)):
    """Verify API authentication."""
    # Implement proper key validation for production
    if credentials.credentials != os.getenv("API_KEY"):
        raise HTTPException(status_code=401, detail="Invalid API key")
    return credentials.credentials

defget_ipfly_pool():"""Load IPFLY proxy pool configuration."""# Load from secure configurationreturn[{'host': os.getenv(f'IPFLY_HOST_{i}','proxy.ipfly.com'),'port': os.getenv(f'IPFLY_PORT_{i}','3128'),'username': os.getenv(f'IPFLY_USER_{i}'),'password': os.getenv(f'IPFLY_PASS_{i}'),'location': os.getenv(f'IPFLY_LOC_{i}','us')}for i inrange(int(os.getenv('IPFLY_POOL_SIZE','10')))]@app.post("/track", response_model=TrackingResponse)asyncdefsubmit_tracking_job(
    request: TrackingRequest,
    background_tasks: BackgroundTasks,
    api_key:str= Depends(verify_api_key)):"""
    Submit keywords for rank tracking across specified locations and devices.
    """
    job_id =f"track_{int(time.time())}_{hash(str(request.keywords))}"# Calculate estimated completion
    total_queries =(len(request.keywords)*len(request.locations)*len(request.device_types)* 
        request.pages_per_keyword
    )
    estimated_minutes =max(1, total_queries /100)# Approximate rate# Queue job for processing
    redis =await get_redis()await redis.setex(f"job:{job_id}",86400,# 24 hour TTL
        json.dumps({'status':'queued','request': request.dict(),'submitted_at': datetime.utcnow().isoformat(),'total_queries': total_queries
        }))# Trigger background processing
    background_tasks.add_task(process_tracking_job, job_id, request)return TrackingResponse(
        job_id=job_id,
        status="queued",
        keywords_submitted=len(request.keywords),
        estimated_completion=f"{estimated_minutes:.1f} minutes",
        check_status_url=f"/status/{job_id}")asyncdefprocess_tracking_job(job_id:str, request: TrackingRequest):"""Background processing of rank tracking job."""
    redis =await get_redis()
    ipfly_pool = get_ipfly_pool()
    try:
        await redis.hset(f"job:{job_id}", "status", "processing")

        tracker = IPFLYRankTracker(ipfly_pool[0], engine=request.search_engine.value)

        all_results = []
        completed = 0
        total = len(request.keywords) * len(request.locations) * len(request.device_types)

        for keyword in request.keywords:
            for location in request.locations:
                for device in request.device_types:
                    try:
                        results = tracker.track_keyword(
                            keyword=keyword,
                            target_domain=request.target_domain,
                            location=location,
                            device_type=device.value,
                            pages=request.pages_per_keyword
                        )

                        all_results.extend([r.dict() for r in results])
                        completed += 1

                        # Update progress
                        progress = (completed / total) * 100
                        await redis.hset(f"job:{job_id}", "progress", str(progress))
                        await redis.hset(f"job:{job_id}", "results_count", str(len(all_results)))

                        # Brief delay between queries
                        await asyncio.sleep(random.uniform(2, 5))
                    except Exception as e:
                        await redis.lpush(f"job:{job_id}:errors", str(e))

        # Store final results
        await redis.setex(
            f"results:{job_id}",
            604800,  # 7 day retention
            json.dumps(all_results)
        )
        await redis.hset(f"job:{job_id}", "status", "completed")
        await redis.hset(f"job:{job_id}", "completed_at", datetime.utcnow().isoformat())
    except Exception as e:
        await redis.hset(f"job:{job_id}", "status", "failed")
        await redis.hset(f"job:{job_id}", "error", str(e))


@app.get("/status/{job_id}", response_model=JobStatus)
async def check_job_status(
    job_id: str,
    api_key: str = Depends(verify_api_key)
):
    """Check status of submitted tracking job."""
    redis = await get_redis()

    status_data = await redis.hgetall(f"job:{job_id}")
    if not status_data:
        raise HTTPException(status_code=404, detail="Job not found")

    errors = await redis.lrange(f"job:{job_id}:errors", 0, -1)

    return JobStatus(
        job_id=job_id,
        status=status_data.get('status', 'unknown'),
        progress_percent=float(status_data.get('progress', 0)),
        results_count=int(status_data.get('results_count', 0)),
        errors=errors or [],
        completed_at=status_data.get('completed_at')
    )


@app.get("/results/{job_id}")
async def get_job_results(
    job_id: str,
    format: str = Query(default="json", regex="^(json|csv)$"),
    api_key: str = Depends(verify_api_key)
):
    """Retrieve completed tracking results."""
    redis = await get_redis()

    results_json = await redis.get(f"results:{job_id}")
    if not results_json:
        raise HTTPException(status_code=404, detail="Results not found or expired")

    results = json.loads(results_json)

    if format == "csv":
        # Convert to CSV
        import csv
        import io

        output = io.StringIO()
        if results:
            writer = csv.DictWriter(output, fieldnames=results[0].keys())
            writer.writeheader()
            writer.writerows(results)

        from fastapi.responses import PlainTextResponse
        return PlainTextResponse(
            content=output.getvalue(),
            media_type="text/csv",
            headers={"Content-Disposition": f"attachment; filename=rankings_{job_id}.csv"}
        )

    return {"job_id": job_id, "results": results, "count": len(results)}


@app.get("/health")
async def health_check():
    """Service health status."""
    redis = await get_redis()
    redis_ok = await redis.ping()

    return {
        "status": "healthy",
        "redis": "connected" if redis_ok else "disconnected",
        "ipfly_pool_size": len(get_ipfly_pool()),
        "timestamp": datetime.utcnow()
    }


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
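With these endpoints in place, a typical client submits a job, polls the status endpoint until the job completes, then fetches results. The polling loop below is a minimal sketch of that flow; the status and results fetchers are passed in as callables (in practice, thin wrappers around HTTP GETs to /status/{job_id} and /results/{job_id} with your API key header, whose exact name depends on how verify_api_key is implemented):

```python
import time


def wait_for_results(get_status, get_results, poll_interval=5, timeout=600):
    """Poll job status until completion, then fetch results.

    get_status and get_results are callables so the control flow can be
    tested (or reused) without a live server; wire them to real HTTP
    calls in production.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status["status"] == "completed":
            return get_results()
        if status["status"] == "failed":
            raise RuntimeError(status.get("error", "job failed"))
        # Job still processing; back off before the next poll
        time.sleep(poll_interval)
    raise TimeoutError("job did not complete within the timeout")
```

Because the fetchers are injected, the same loop works whether the client uses requests, httpx, or an internal SDK.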

IPFLY Integration: Ensuring Rank Tracker API Success

Why Residential Proxies Are Non-Negotiable

Search engine protection systems specifically target rank tracker api infrastructure:

| Detection Method | Data Center Impact | IPFLY Residential Evasion |
| --- | --- | --- |
| IP range blacklists | Immediate blocking | Authentic ISP allocation, undetected |
| Velocity analysis | Rate limiting, CAPTCHAs | Distributed across millions of IPs |
| Behavioral fingerprinting | Automation flags | Genuine consumer patterns |
| Geographic impossibility | Account suspension | Authentic local presence |
| TLS/JA3 fingerprinting | Proxy identification | Standard browser signatures |
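To make the routing concrete, the helper below builds a requests-style proxies dict for a residential gateway. The gateway host and the username-suffix convention for country and session targeting are illustrative assumptions, not documented IPFLY values; check your provider dashboard for the real endpoint and credential format:

```python
def build_proxy(username, password, country="us", session_id=None,
                gateway="proxy.example-gateway.net:1000"):
    """Build a proxies dict suitable for requests/httpx.

    The gateway address and the "-country-XX" / "-session-YY" username
    suffixes are hypothetical placeholders for the common residential
    proxy convention of encoding geo and sticky-session targeting in
    the credential string.
    """
    user = f"{username}-country-{country}"
    if session_id:
        # Sticky session: reuse the same exit IP across requests
        user += f"-session-{session_id}"
    url = f"http://{user}:{password}@{gateway}"
    return {"http": url, "https": url}
```

Omitting session_id yields per-request rotation on most residential networks, which matches the velocity-distribution row in the table above.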

Configuration Optimization

Python

# IPFLY configuration for rank tracking excellence

class IPFLYRankConfig:
    """
    Optimized IPFLY configurations for different rank tracking scenarios.
    """

    # High-frequency tracking: maximum distribution
    HIGH_VOLUME = {
        'type': 'rotating',
        'rotation_policy': 'per_request',
        'countries': ['us', 'gb', 'ca', 'au', 'de', 'fr'],
        'session_duration': 0
    }

    # Local SEO tracking: geographic precision
    LOCAL_SEO = {
        'type': 'static',
        'rotation_policy': 'daily',
        'city_targeting': True,
        'session_duration': 86400  # 24 hours
    }

    # Competitive monitoring: persistent identity
    COMPETITIVE_INTEL = {
        'type': 'static',
        'rotation_policy': 'weekly',
        'sticky_sessions': True,
        'session_duration': 604800  # 7 days
    }

    # Mobile tracking: device-appropriate networks
    MOBILE_TRACKING = {
        'type': 'rotating',
        'mobile_isp_pool': True,
        'carrier_simulation': True
    }
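The rotation_policy and session_duration fields imply a simple rotation decision: per-request configs always take a fresh IP, while static configs hold one exit IP until the session window lapses. A minimal sketch of that logic, using the same field names as the configs above:

```python
def should_rotate(config, seconds_elapsed):
    """Decide whether to request a fresh exit IP under a given config.

    config is a dict with the 'rotation_policy' and 'session_duration'
    keys used in the scenario configs; seconds_elapsed is the age of
    the current session.
    """
    if config.get('rotation_policy') == 'per_request':
        # Maximum distribution: every query uses a new IP
        return True
    duration = config.get('session_duration', 0)
    # Static/sticky sessions rotate only once their window has lapsed
    return duration > 0 and seconds_elapsed >= duration
```

For example, a LOCAL_SEO-style config (session_duration of 86400) keeps its city-targeted IP for a full day, which keeps daily ranking snapshots comparable, while a HIGH_VOLUME config spreads queries across the pool on every request.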

Production-Grade Rank Tracker API Infrastructure

Building a rank tracker api that delivers reliable, accurate, and scalable SEO intelligence requires combining technical implementation excellence with infrastructure that ensures consistent, undetectable access. IPFLY’s residential proxy network provides this foundation—authentic ISP-allocated addresses, massive global scale, and enterprise-grade reliability that transforms rank tracking from fragile experimentation into robust operational capability.

For organizations committed to data-driven SEO excellence, IPFLY enables rank tracker api development that matches professional requirements: geographic precision, comprehensive feature extraction, unlimited scale, and detection resistance that maintains data integrity regardless of target sophistication.
