List Crawling Strategies: How to Collect Data at Scale Without Getting Blocked

In today's data-driven business environment, the ability to extract structured information from websites efficiently has become a key competitive advantage. List crawling is one of the most valuable yet technically challenging aspects of web data collection, enabling organizations to gather product catalogs, directory listings, pricing information, and competitor intelligence at scale.

What Is List Crawling?

List crawling is the systematic process of extracting structured data from web pages that present information in list form: product catalogs, search results, directory listings, pricing tables, inventory databases, and similarly organized content. Unlike general web scraping, which may target many different content types, list crawling focuses specifically on efficiently navigating and extracting data from repetitive, structured page layouts.

The technique involves identifying the patterns a website uses to organize list-based content, then automatically extracting the individual items and their associated attributes. A product listing page might display hundreds of items, each with a name, price, description, and availability status; list crawling captures this structured information systematically across every page.

Anatomy of a List Crawling Operation

Effective list crawling requires understanding three core components: page navigation, data extraction, and pattern recognition. Page navigation handles moving through paginated results or infinite-scroll implementations. Data extraction identifies and captures specific data points from each list item. Pattern recognition ensures consistent extraction across varying page structures.

The navigation component must handle a variety of pagination mechanisms. Traditional numbered page links, "next page" buttons, infinite-scroll loading, and API-driven dynamic content each require a different technical approach. A robust list crawler adapts to the specific implementation each target website uses.
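
As a rough illustration, the sketch below probes a listing page for common pagination signals (a rel="next" link, numbered page links, or a "load more" control). The URL and the pagination/load-more class names are placeholder assumptions and would need adjusting for any real site.

import requests
from bs4 import BeautifulSoup

def detect_pagination_style(url):
    """Heuristically guess which pagination mechanism a listing page uses."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, 'html.parser')
    
    if soup.find('a', rel='next'):
        return 'next-link'                   # classic rel="next" navigation
    if soup.select('ul.pagination a'):
        return 'numbered-pages'              # numbered page links (assumed markup)
    if soup.find('button', class_='load-more'):
        return 'load-more-button'            # content appended by a button/JS
    return 'unknown-or-infinite-scroll'      # likely JS-driven; needs a browser

# Usage (placeholder URL)
print(detect_pagination_style('https://example.com/products'))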

Data extraction relies on identifying the consistent HTML structures or CSS selectors that define list items and their attributes. Modern websites often use standardized frameworks that produce predictable patterns, although many sites implement custom structures that require careful analysis to decode.

Why Businesses Need List Crawling Capabilities

Organizations across industries rely on list crawling to gather competitive intelligence, monitor markets, optimize operations, and make data-driven decisions. The ability to collect structured data at scale opens up many strategic opportunities.

E-commerce success increasingly depends on dynamic pricing strategies informed by real-time competitive analysis. List crawling automates comprehensive price monitoring across competitors' entire catalogs, revealing their pricing strategies and surfacing market positioning opportunities.

Market research requires comprehensive data on available products, emerging categories, and shifting consumer preferences. List crawling enables systematic collection of this intelligence from marketplaces, retailers, and industry directories.

Technical Foundations of List Crawling

Understanding the technical infrastructure behind effective list crawling helps organizations implement robust, scalable solutions and avoid common pitfalls.

Identifying List Structures

Successful list crawling begins with analyzing how the target website structures its list-based content. Most websites use consistent HTML patterns for repeated elements.

from bs4 import BeautifulSoup
import requests

# Example: Identifying list structure
html = requests.get('https://example.com/products').text
soup = BeautifulSoup(html, 'html.parser')

# Find container for product list
product_list = soup.find('div', class_='product-grid')

# Identify individual product items
products = product_list.find_all('div', class_='product-item')

print(f"Found {len(products)} products")

# Examine structure of first product
first_product = products[0]
print(first_product.prettify())

Container elements typically wrap each list item with consistent class names or HTML structure. Within each container, the individual attributes appear in predictable positions with identifiable selectors.
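
The same structure can also be targeted with CSS selectors via select/select_one; the selector strings below carry over the hypothetical product-grid, product-item, product-name, and price class names from the examples in this article.

# CSS-selector view of the same assumed markup
items = soup.select('div.product-grid div.product-item')

for item in items[:3]:
    name = item.select_one('h3.product-name')
    price = item.select_one('span.price')
    print(name.text.strip() if name else None,
          price.text.strip() if price else None)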

Pagination Navigation Strategies

List crawling fundamentally depends on navigating through paginated results. The navigation strategy must cover every available page while avoiding duplicate extraction; a simple deduplication sketch follows the example below.

import requests
from bs4 import BeautifulSoup
import time

def crawl_paginated_list(base_url, max_pages=None):
    """Crawl through paginated product listings"""
    page = 1
    all_products = []
    
    while True:
        # Construct page URL
        url = f"{base_url}?page={page}"
        print(f"Crawling page {page}: {url}")
        
        response = requests.get(url)
        if response.status_code != 200:
            print(f"Failed to fetch page {page}")
            break
            
        soup = BeautifulSoup(response.text, 'html.parser')
        
        # Extract products from current page
        products = soup.find_all('div', class_='product-item')
        
        if not products:
            print("No more products found")
            break
            
        all_products.extend(products)
        print(f"Extracted {len(products)} products from page {page}")
        
        # Check for next page
        next_button = soup.find('a', class_='next-page')
        if not next_button or (max_pages and page >= max_pages):
            break
            
        page += 1
        time.sleep(2)  # Rate limiting
    
    return all_products

# Usage
products = crawl_paginated_list('https://example.com/products', max_pages=5)
print(f"Total products crawled: {len(products)}")

IPFLY's residential proxies with unlimited concurrency make large-scale pagination navigation efficient. By distributing requests across a pool of more than 90 million residential IPs, a crawler can work through multiple pagination paths simultaneously without triggering rate limits.
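
As a sketch, routing requests through a rotating proxy gateway looks roughly like the snippet below. The gateway host, port, and credentials are placeholders rather than real IPFLY endpoints; substitute the values from your own proxy dashboard.

import requests

# Placeholder gateway address and credentials -- replace with your provider's values
PROXIES = {
    'http': 'http://username:password@proxy-gateway.example.com:8080',
    'https': 'http://username:password@proxy-gateway.example.com:8080',
}

def fetch_via_proxy(url):
    """Fetch a listing page through the configured proxy gateway."""
    response = requests.get(url, proxies=PROXIES, timeout=10)
    response.raise_for_status()
    return response.text

# Usage (placeholder URL)
page_html = fetch_via_proxy('https://example.com/products?page=1')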

Data Extraction and Parsing

Once the crawler has navigated to a list page, it must accurately extract the target data from each item.

def extract_product_data(product_element):
    """Extract structured data from product element"""
    try:
        # Extract product name
        name_elem = product_element.find('h3', class_='product-name')
        name = name_elem.text.strip() if name_elem else None
        
        # Extract price
        price_elem = product_element.find('span', class_='price')
        price = None
        if price_elem:
            price_text = price_elem.text.strip()
            # Remove currency symbols and convert to float
            price = float(price_text.replace('$', '').replace(',', ''))
        
        # Extract rating
        rating_elem = product_element.find('div', class_='rating')
        rating = None
        if rating_elem:
            rating = float(rating_elem.get('data-rating', 0))
        
        # Extract availability
        stock_elem = product_element.find('span', class_='stock-status')
        in_stock = stock_elem and 'in-stock' in stock_elem.get('class', [])
        
        # Extract image URL
        img_elem = product_element.find('img', class_='product-image')
        image_url = img_elem.get('src') if img_elem else None
        
        # Extract product URL
        link_elem = product_element.find('a', class_='product-link')
        product_url = link_elem.get('href') if link_elem else None
        
        return {
            'name': name,
            'price': price,
            'rating': rating,
            'in_stock': in_stock,
            'image_url': image_url,
            'product_url': product_url
        }
    except Exception as e:
        print(f"Error extracting product data: {e}")
        return None

# Extract data from all products
products_data = []
for product in products:
    data = extract_product_data(product)
    if data:
        products_data.append(data)

# Display results
import json
print(json.dumps(products_data[:3], indent=2))

Rate Limiting and Request Management

Websites defend against aggressive scraping with rate limiting and bot detection. Successful list crawling navigates these protections without triggering blocks.

import requests
import time
import random
from datetime import datetime

class RateLimiter:
    """Manage request rate limiting"""
    
    def __init__(self, requests_per_second=2):
        self.requests_per_second = requests_per_second
        self.min_interval = 1.0 / requests_per_second
        self.last_request_time = 0
    
    def wait(self):
        """Wait appropriate time before next request"""
        current_time = time.time()
        time_since_last = current_time - self.last_request_time
        
        if time_since_last < self.min_interval:
            sleep_time = self.min_interval - time_since_last
            # Add small random variation
            sleep_time += random.uniform(0, 0.5)
            time.sleep(sleep_time)
        
        self.last_request_time = time.time()

# Usage
rate_limiter = RateLimiter(requests_per_second=2)

for page in range(1, 11):
    rate_limiter.wait()
    response = requests.get(f'https://example.com/products?page={page}')
    print(f"[{datetime.now().strftime('%H:%M:%S')}] Fetched page {page}")

IPFLY's dynamic residential proxies address the concurrency challenge by rotating across a large IP pool. An operation can sustain high concurrency while each individual IP address generates only the modest request volume consistent with legitimate user behavior.

List Crawling for E-Commerce Intelligence

E-commerce is one of the most valuable applications of list crawling, enabling comprehensive competitive analysis and market insight.

Product Catalog Extraction

import requests
from bs4 import BeautifulSoup
import csv
import time
import random
from urllib.parse import urljoin

class ProductCatalogCrawler:
    """Crawl and extract complete product catalogs"""
    
    def __init__(self, base_url, output_file='products.csv'):
        self.base_url = base_url
        self.output_file = output_file
        self.products = []
    
    def crawl_category(self, category_url):
        """Crawl all products in a category"""
        page = 1
        
        while True:
            url = f"{category_url}?page={page}"
            print(f"Crawling {url}")
            
            try:
                response = requests.get(url, timeout=10)
                response.raise_for_status()
            except requests.RequestException as e:
                print(f"Error fetching {url}: {e}")
                break
            
            soup = BeautifulSoup(response.text, 'html.parser')
            products = soup.find_all('div', class_='product-card')
            
            if not products:
                break
            
            for product in products:
                product_data = self.extract_product(product)
                if product_data:
                    self.products.append(product_data)
            
            print(f"Extracted {len(products)} products from page {page}")
            
            # Check for next page
            if not soup.find('a', rel='next'):
                break
            
            page += 1
            time.sleep(random.uniform(2, 4))
        
        return len(self.products)
    
    def extract_product(self, element):
        """Extract product details"""
        try:
            name = element.find('h3').text.strip()
            
            price_elem = element.find('span', class_='price')
            price = price_elem.text.strip() if price_elem else 'N/A'
            
            link = element.find('a')
            url = urljoin(self.base_url, link['href']) if link else None
            
            # Extract SKU if available
            sku_elem = element.find('span', class_='sku')
            sku = sku_elem.text.strip() if sku_elem else None
            
            return {
                'name': name,
                'price': price,
                'url': url,
                'sku': sku
            }
        except Exception as e:
            print(f"Error extracting product: {e}")
            return None
    
    def save_to_csv(self):
        """Save extracted products to CSV"""
        if not self.products:
            print("No products to save")
            return
        
        with open(self.output_file, 'w', newline='', encoding='utf-8') as f:
            writer = csv.DictWriter(f, fieldnames=self.products[0].keys())
            writer.writeheader()
            writer.writerows(self.products)
        
        print(f"Saved {len(self.products)} products to {self.output_file}")

# Usage
crawler = ProductCatalogCrawler('https://example.com')
crawler.crawl_category('https://example.com/electronics')
crawler.save_to_csv()

Price and Promotion Tracking

import requests
from bs4 import BeautifulSoup
from datetime import datetime
import sqlite3
import time

class PriceTracker:
    """Track product prices over time"""
    
    def __init__(self, db_name='prices.db'):
        self.db_name = db_name
        self.init_database()
    
    def init_database(self):
        """Initialize SQLite database"""
        conn = sqlite3.connect(self.db_name)
        cursor = conn.cursor()
        
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS products (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                name TEXT,
                url TEXT UNIQUE,
                sku TEXT
            )
        ''')
        
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS prices (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                product_id INTEGER,
                price REAL,
                currency TEXT,
                timestamp DATETIME,
                on_sale BOOLEAN,
                FOREIGN KEY (product_id) REFERENCES products(id)
            )
        ''')
        
        conn.commit()
        conn.close()
    
    def track_product(self, url):
        """Track price for a product"""
        try:
            response = requests.get(url)
            soup = BeautifulSoup(response.text, 'html.parser')
            
            # Extract product details
            name = soup.find('h1', class_='product-title').text.strip()
            
            price_elem = soup.find('span', class_='current-price')
            price_text = price_elem.text.strip().replace('$', '')
            price = float(price_text)
            
            # Check if on sale
            sale_badge = soup.find('span', class_='sale-badge')
            on_sale = sale_badge is not None
            
            # Save to database
            self.save_price(url, name, price, 'USD', on_sale)
            
            print(f"Tracked: {name} - ${price} {'(ON SALE)' if on_sale else ''}")
            
        except Exception as e:
            print(f"Error tracking {url}: {e}")
    
    def save_price(self, url, name, price, currency, on_sale):
        """Save price data to database"""
        conn = sqlite3.connect(self.db_name)
        cursor = conn.cursor()
        
        # Insert or get product
        cursor.execute(
            'INSERT OR IGNORE INTO products (name, url) VALUES (?, ?)',
            (name, url)
        )
        
        cursor.execute('SELECT id FROM products WHERE url = ?', (url,))
        product_id = cursor.fetchone()[0]
        
        # Insert price record
        cursor.execute('''
            INSERT INTO prices (product_id, price, currency, timestamp, on_sale)
            VALUES (?, ?, ?, ?, ?)
        ''', (product_id, price, currency, datetime.now(), on_sale))
        
        conn.commit()
        conn.close()
    
    def get_price_history(self, url):
        """Get price history for a product"""
        conn = sqlite3.connect(self.db_name)
        cursor = conn.cursor()
        
        cursor.execute('''
            SELECT p.timestamp, p.price, p.on_sale
            FROM prices p
            JOIN products pr ON p.product_id = pr.id
            WHERE pr.url = ?
            ORDER BY p.timestamp DESC
        ''', (url,))
        
        history = cursor.fetchall()
        conn.close()
        
        return history

# Usage
tracker = PriceTracker()

products_to_track = [
    'https://example.com/product1',
    'https://example.com/product2'
]

for url in products_to_track:
    tracker.track_product(url)
    time.sleep(2)

Overcoming List Crawling Challenges

List crawling faces numerous technical and strategic challenges that demand sophisticated solutions for consistent success.

Handling Dynamic Content

Many modern websites load content dynamically with JavaScript. Plain HTTP requests do not execute JavaScript and therefore miss dynamically loaded data.

import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.options import Options

class DynamicListCrawler:
    """Crawl dynamically loaded lists using Selenium"""
    
    def __init__(self, headless=True):
        chrome_options = Options()
        if headless:
            chrome_options.add_argument('--headless')
        chrome_options.add_argument('--no-sandbox')
        chrome_options.add_argument('--disable-dev-shm-usage')
        
        self.driver = webdriver.Chrome(options=chrome_options)
        self.wait = WebDriverWait(self.driver, 10)
    
    def crawl_infinite_scroll(self, url, max_scrolls=10):
        """Crawl pages with infinite scroll"""
        self.driver.get(url)
        products = []
        
        for scroll in range(max_scrolls):
            # Wait for products to load
            self.wait.until(
                EC.presence_of_all_elements_located(
                    (By.CLASS_NAME, 'product-item')
                )
            )
            
            # Extract only products added since the last scroll,
            # so earlier items are not captured twice
            elements = self.driver.find_elements(By.CLASS_NAME, 'product-item')
            
            for element in elements[len(products):]:
                try:
                    name = element.find_element(By.CLASS_NAME, 'name').text
                    price = element.find_element(By.CLASS_NAME, 'price').text
                    products.append({'name': name, 'price': price})
                except Exception as e:
                    continue
            
            # Scroll to bottom
            self.driver.execute_script(
                'window.scrollTo(0, document.body.scrollHeight);'
            )
            
            # Wait for new content to load
            time.sleep(2)
            
            print(f"Scroll {scroll + 1}: {len(products)} products total")
        
        return products
    
    def close(self):
        """Close browser"""
        self.driver.quit()

# Usage
crawler = DynamicListCrawler(headless=True)
products = crawler.crawl_infinite_scroll('https://example.com/products')
crawler.close()

print(f"Crawled {len(products)} products")

Dealing with Anti-Scraping Measures

import requests
import time
import random
from fake_useragent import UserAgent

class StealthCrawler:
    """Implement stealth techniques to avoid detection"""
    
    def __init__(self):
        self.ua = UserAgent()
        self.session = requests.Session()
    
    def get_headers(self):
        """Generate realistic headers"""
        return {
            'User-Agent': self.ua.random,
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
            'Accept-Language': 'en-US,en;q=0.5',
            'Accept-Encoding': 'gzip, deflate, br',
            'DNT': '1',
            'Connection': 'keep-alive',
            'Upgrade-Insecure-Requests': '1'
        }
    
    def crawl_with_delays(self, urls):
        """Crawl with random delays"""
        results = []
        
        for url in urls:
            # Random delay between requests
            delay = random.uniform(2, 5)
            time.sleep(delay)
            
            try:
                response = self.session.get(
                    url,
                    headers=self.get_headers(),
                    timeout=10
                )
                
                if response.status_code == 200:
                    results.append(response.text)
                    print(f"✓ Crawled {url}")
                else:
                    print(f"✗ Failed {url}: Status {response.status_code}")
                    
            except Exception as e:
                print(f"✗ Error {url}: {e}")
        
        return results

# Usage with IPFLY proxies for enhanced stealth
proxies = {
    'http': 'http://username:password@proxy.ipfly.com:8080',
    'https': 'http://username:password@proxy.ipfly.com:8080'
}

crawler = StealthCrawler()
# Add proxy to session
crawler.session.proxies.update(proxies)

IPFLY's pool of more than 90 million residential IPs enables distributed crawling that appears as legitimate traffic from diverse geographic locations, bypassing anti-scraping measures that target known proxy ranges.

Scaling List Crawling Operations

Moving from small-scale experiments to a production system requires architectural consideration.

Distributed Crawling Architecture

from concurrent.futures import ThreadPoolExecutor, as_completed
import queue
import threading
import requests
from bs4 import BeautifulSoup

class DistributedCrawler:
    """Distribute crawling across multiple workers"""
    
    def __init__(self, max_workers=10):
        self.max_workers = max_workers
        self.url_queue = queue.Queue()
        self.results = []
        self.lock = threading.Lock()
    
    def add_urls(self, urls):
        """Add URLs to crawling queue"""
        for url in urls:
            self.url_queue.put(url)
    
    def crawl_url(self, url):
        """Crawl single URL"""
        try:
            response = requests.get(url, timeout=10)
            soup = BeautifulSoup(response.text, 'html.parser')
            
            products = soup.find_all('div', class_='product-item')
            extracted = []
            
            for product in products:
                name = product.find('h3').text.strip()
                price = product.find('span', class_='price').text.strip()
                extracted.append({'name': name, 'price': price})
            
            return extracted
            
        except Exception as e:
            print(f"Error crawling {url}: {e}")
            return []
    
    def start_crawling(self):
        """Start distributed crawling"""
        with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
            futures = []
            
            while not self.url_queue.empty():
                url = self.url_queue.get()
                future = executor.submit(self.crawl_url, url)
                futures.append(future)
            
            for future in as_completed(futures):
                result = future.result()
                with self.lock:
                    self.results.extend(result)
                    print(f"Processed batch: {len(result)} products")
        
        return self.results

# Usage
crawler = DistributedCrawler(max_workers=10)

# Generate URLs for multiple pages
urls = [f'https://example.com/products?page={i}' for i in range(1, 51)]
crawler.add_urls(urls)

results = crawler.start_crawling()
print(f"Total products crawled: {len(results)}")

IPFLY's unlimited concurrency support enables large-scale parallelization without detection. An operation can deploy hundreds of workers simultaneously, each using a different residential IP so that the traffic appears as distributed, legitimate activity.

Data Storage and Management

import sqlite3
import json
from datetime import datetime

class CrawlDataManager:
    """Manage crawled data storage"""
    
    def __init__(self, db_name='crawl_data.db'):
        self.db_name = db_name
        self.init_database()
    
    def init_database(self):
        """Initialize database schema"""
        conn = sqlite3.connect(self.db_name)
        cursor = conn.cursor()
        
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS crawl_sessions (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                start_time DATETIME,
                end_time DATETIME,
                urls_crawled INTEGER,
                items_extracted INTEGER
            )
        ''')
        
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS products (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                session_id INTEGER,
                name TEXT,
                price REAL,
                url TEXT,
                crawled_at DATETIME,
                raw_data TEXT,
                FOREIGN KEY (session_id) REFERENCES crawl_sessions(id)
            )
        ''')
        
        conn.commit()
        conn.close()
    
    def start_session(self):
        """Start new crawl session"""
        conn = sqlite3.connect(self.db_name)
        cursor = conn.cursor()
        
        cursor.execute(
            'INSERT INTO crawl_sessions (start_time) VALUES (?)',
            (datetime.now(),)
        )
        
        session_id = cursor.lastrowid
        conn.commit()
        conn.close()
        
        return session_id
    
    def save_products(self, session_id, products):
        """Save extracted products"""
        conn = sqlite3.connect(self.db_name)
        cursor = conn.cursor()
        
        for product in products:
            cursor.execute('''
                INSERT INTO products 
                (session_id, name, price, url, crawled_at, raw_data)
                VALUES (?, ?, ?, ?, ?, ?)
            ''', (
                session_id,
                product.get('name'),
                product.get('price'),
                product.get('url'),
                datetime.now(),
                json.dumps(product)
            ))
        
        conn.commit()
        conn.close()
    
    def end_session(self, session_id, urls_crawled, items_extracted):
        """End crawl session"""
        conn = sqlite3.connect(self.db_name)
        cursor = conn.cursor()
        
        cursor.execute('''
            UPDATE crawl_sessions
            SET end_time = ?, urls_crawled = ?, items_extracted = ?
            WHERE id = ?
        ''', (datetime.now(), urls_crawled, items_extracted, session_id))
        
        conn.commit()
        conn.close()

# Usage
manager = CrawlDataManager()
session_id = manager.start_session()

# Crawl products
urls = []      # URLs that were crawled (fill in from your crawl run)
products = []  # Your crawled products here

manager.save_products(session_id, products)
manager.end_session(session_id, len(urls), len(products))

Best Practices for List Crawling

Getting the most value from list crawling requires following operational best practices.

Responsible Usage Patterns

import requests
import time
import random

class ResponsibleCrawler:
    """Implement responsible crawling practices"""
    
    def __init__(self, requests_per_minute=30):
        self.requests_per_minute = requests_per_minute
        self.last_request_time = 0
    
    def respectful_request(self, url):
        """Make request with appropriate delays"""
        # Calculate delay
        delay = 60.0 / self.requests_per_minute
        current_time = time.time()
        time_since_last = current_time - self.last_request_time
        
        if time_since_last < delay:
            sleep_time = delay - time_since_last
            # Add random jitter
            sleep_time += random.uniform(0, 1)
            time.sleep(sleep_time)
        
        # Make request
        response = requests.get(url)
        self.last_request_time = time.time()
        
        return response
    
    def check_robots_txt(self, base_url):
        """Check robots.txt for crawling permissions"""
        robots_url = f"{base_url}/robots.txt"
        
        try:
            response = requests.get(robots_url)
            if response.status_code == 200:
                print("robots.txt content:")
                print(response.text)
                return response.text
        except Exception as e:
            print(f"Could not fetch robots.txt: {e}")
        
        return None

# Usage
crawler = ResponsibleCrawler(requests_per_minute=30)
crawler.check_robots_txt('https://example.com')
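
Printing robots.txt leaves interpretation to the operator. Python's built-in urllib.robotparser can also check a specific path programmatically; a minimal sketch against the same placeholder site:

from urllib import robotparser

# Parse robots.txt and test whether a path may be fetched (placeholder URLs)
rp = robotparser.RobotFileParser()
rp.set_url('https://example.com/robots.txt')
rp.read()

if rp.can_fetch('*', 'https://example.com/products'):
    print("robots.txt allows crawling /products")
else:
    print("robots.txt disallows crawling /products")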

Error Handling and Retry Logic

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
from bs4 import BeautifulSoup

class RobustCrawler:
    """Implement robust error handling"""
    
    def __init__(self):
        self.session = self.create_session()
    
    def create_session(self):
        """Create session with retry logic"""
        session = requests.Session()
        
        retry_strategy = Retry(
            total=3,
            backoff_factor=1,
            status_forcelist=[429, 500, 502, 503, 504],
            allowed_methods=["HEAD", "GET", "OPTIONS"]  # "method_whitelist" in older urllib3 releases
        )
        
        adapter = HTTPAdapter(max_retries=retry_strategy)
        session.mount("http://", adapter)
        session.mount("https://", adapter)
        
        return session
    
    def crawl_with_error_handling(self, url):
        """Crawl with comprehensive error handling"""
        try:
            response = self.session.get(url, timeout=10)
            response.raise_for_status()
            
            return self.parse_response(response)
            
        except requests.exceptions.HTTPError as e:
            print(f"HTTP error {url}: {e}")
        except requests.exceptions.ConnectionError as e:
            print(f"Connection error {url}: {e}")
        except requests.exceptions.Timeout as e:
            print(f"Timeout {url}: {e}")
        except Exception as e:
            print(f"Unexpected error {url}: {e}")
        
        return None
    
    def parse_response(self, response):
        """Parse response with error handling"""
        try:
            soup = BeautifulSoup(response.text, 'html.parser')
            products = soup.find_all('div', class_='product-item')
            
            results = []
            for product in products:
                try:
                    data = self.extract_product(product)
                    if data:
                        results.append(data)
                except Exception as e:
                    print(f"Error extracting product: {e}")
                    continue
            
            return results
            
        except Exception as e:
            print(f"Error parsing response: {e}")
            return []
    
    def extract_product(self, element):
        """Extract product with validation"""
        name = element.find('h3')
        price = element.find('span', class_='price')
        
        if not name or not price:
            return None
        
        return {
            'name': name.text.strip(),
            'price': price.text.strip()
        }

# Usage
crawler = RobustCrawler()
results = crawler.crawl_with_error_handling('https://example.com/products')
