
scrapling-fetch

Created 6 months ago

An MCP server that helps AI assistants access text content from websites that implement bot detection.


What is scrapling-fetch?

Access text content from bot-protected websites. It uses Scrapling to fetch HTML/markdown from sites with anti-automation measures.

Documentation

Scrapling Fetch MCP

An MCP server that helps AI assistants access text content from websites that implement bot detection, bridging the gap between what you can see in your browser and what the AI can access.

Intended Use

This tool is optimized for low-volume retrieval of documentation and reference materials (text/HTML only) from websites that implement bot detection. It has not been designed or tested for general-purpose site scraping or data harvesting.

Note: This project was developed in collaboration with Claude Sonnet 3.7, using LLM Context.

Installation

  1. Requirements:
  • Python 3.10+
  • uv package manager
  2. Install dependencies and the tool:
    uv tool install scrapling
    scrapling install
    uv tool install scrapling-fetch-mcp


Setup with Claude

Add this configuration to your Claude client's MCP server configuration:

{
  "mcpServers": {
    "Cyber-Chitta": {
      "command": "uvx",
      "args": ["scrapling-fetch-mcp"]
    }
  }
}

Available Tools

This package provides two distinct tools:

  1. s-fetch-page: Retrieves complete web pages with pagination support
  2. s-fetch-pattern: Extracts content matching regex patterns with surrounding context
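
An MCP client drives these tools through the standard `tools/call` request. The sketch below shows what such a call might look like for `s-fetch-page`; the `url` and `mode` argument names are assumptions based on the descriptions in this document (the protection modes are listed under Functionality Options below), so consult the tool schema the server advertises for the exact parameter names.

```
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "s-fetch-page",
    "arguments": {
      "url": "https://example.com/docs",
      "mode": "basic"
    }
  }
}
```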

## Example Usage

### Fetching a Complete Page

```
Human: Please fetch the documentation page at https://example.com/docs

Claude: I'll help you with that. Let me fetch the documentation.
[uses s-fetch-page on https://example.com/docs in basic mode]
Based on the documentation I retrieved, here's a summary...
```

### Extracting Specific Content with Pattern Matching

```
Human: Please find all mentions of "API keys" on the documentation page.

Claude: I'll search for that specific information.
[uses s-fetch-pattern on https://example.com/docs in basic mode, with search pattern API\s+keys? and 150 context characters]
I found several mentions of API keys in the documentation: ...
```

## Functionality Options
- **Protection Levels**:
  - `basic`: Fast retrieval (1-2 seconds) but lower success with heavily protected sites
  - `stealth`: Balanced protection (3-8 seconds) that works with most sites
  - `max-stealth`: Maximum protection (10+ seconds) for heavily protected sites
- **Content Targeting Options**:
  - **s-fetch-page**: Retrieve entire pages with pagination support (using `start_index` and `max_length`)
  - **s-fetch-pattern**: Extract specific content using regular expressions (with `search_pattern` and `context_chars`); see the sketch after this list
    - Results include position information for follow-up queries with `s-fetch-page`
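
To make the pattern options concrete, here is a rough sketch of the arguments for an `s-fetch-pattern` call mirroring the example above. Only `search_pattern` and `context_chars` are option names documented here; `url` and `mode` are assumed key names, and the `//` comments are annotations, not part of the JSON.

```
{
  "name": "s-fetch-pattern",
  "arguments": {
    "url": "https://example.com/docs",   // assumed argument name
    "mode": "basic",                     // assumed argument name; documented values: basic, stealth, max-stealth
    "search_pattern": "API\\s+keys?",    // regex, with the backslash escaped for JSON
    "context_chars": 150                 // characters of surrounding context to return
  }
}
```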

## Tips for Best Results
- Start with `basic` mode and only escalate to higher protection levels if needed
- For large documents, use the pagination parameters with `s-fetch-page` (see the sketch after this list)
- Use `s-fetch-pattern` when looking for specific information on large pages
- The AI will automatically adjust its approach based on the site's protection level
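
For the pagination tip, a follow-up sequence with `s-fetch-page` might look like the sketch below: fetch an initial chunk, then continue from where it ended. `start_index` and `max_length` are the documented pagination parameters; the `url`/`mode` key names and the concrete numbers are illustrative assumptions, and the `//` comments are annotations only.

```
// first request: start of the document
{
  "name": "s-fetch-page",
  "arguments": { "url": "https://example.com/docs", "mode": "basic", "start_index": 0, "max_length": 5000 }
}

// follow-up request: continue from the position reported by the previous result
{
  "name": "s-fetch-page",
  "arguments": { "url": "https://example.com/docs", "mode": "basic", "start_index": 5000, "max_length": 5000 }
}
```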

## Limitations
- **Designed only for text content**: Specifically for documentation, articles, and reference materials
- Not designed for high-volume scraping or data harvesting
- May not work with sites requiring authentication
- Performance varies by site complexity

Server Config

{
  "mcpServers": {
    "scrapling-fetch-server": {
      "command": "npx",
      "args": [
        "scrapling-fetch"
      ]
    }
  }
}

Links & Status

Repository: github.com
Hosted: No
Global: No
Official: Yes

Project Info

Created At: May 23, 2025
Updated At: Aug 07, 2025
Author: Cyber-Chitta
Category: community
License: Apache 2.0
Tags:
development documentation public