WebScraping.AI MCP Server
A Model Context Protocol (MCP) server implementation that integrates with WebScraping.AI for web data extraction capabilities.
Features
Question answering about web page content
Structured data extraction from web pages
HTML content retrieval with JavaScript rendering
Plain text extraction from web pages
CSS selector-based content extraction
Multiple proxy types (datacenter, residential) with country selection
JavaScript rendering using headless Chrome/Chromium
Global Configuration (for personal use across all projects):
Create a ~/.cursor/mcp.json file in your home directory with the same configuration format as above.
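For reference, a minimal ~/.cursor/mcp.json could look like the sketch below. The "webscraping-ai" key is just an illustrative server name; the command and environment variable match the ones used elsewhere in this document:
{
  "mcpServers": {
    "webscraping-ai": {
      "command": "npx",
      "args": ["-y", "webscraping-ai-mcp"],
      "env": {
        "WEBSCRAPING_AI_API_KEY": "your-api-key"
      }
    }
  }
}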
If you are using Windows and are running into issues, try using cmd /c "set WEBSCRAPING_AI_API_KEY=your-api-key && npx -y webscraping-ai-mcp" as the command.
This configuration will make the WebScraping.AI tools available to Cursor's AI agent automatically when relevant for web scraping tasks.
Available Tools
1. Question Tool (webscraping_ai_question)
Ask questions about web page content.
{
"name": "webscraping_ai_question",
"arguments": {
"url": "https://example.com",
"question": "What is the main topic of this page?",
"timeout": 30000,
"js": true,
"js_timeout": 2000,
"wait_for": ".content-loaded",
"proxy": "datacenter",
"country": "us"
}
}
Example response:
{
"content": [
{
"type": "text",
"text": "The main topic of this page is examples and documentation for HTML and web standards."
}
],
"isError": false
}
2. Fields Tool (webscraping_ai_fields)
Extract structured data from web pages based on instructions.
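A hypothetical invocation is sketched below, modeled on the Question tool request above; the exact shape of the fields argument (field names mapped to natural-language extraction instructions) is an assumption based on the WebScraping.AI Fields API, and the URL and field names are illustrative:
{
  "name": "webscraping_ai_fields",
  "arguments": {
    "url": "https://example.com/product",
    "fields": {
      "title": "The product title",
      "price": "The product price including currency"
    },
    "js": true,
    "proxy": "residential",
    "country": "us"
  }
}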
Common Options
The following options can be used with all scraping tools (an example combining several of them follows the list):
timeout: Maximum web page retrieval time in ms (15000 by default, maximum is 30000)
js: Execute on-page JavaScript using a headless browser (true by default)
js_timeout: Maximum JavaScript rendering time in ms (2000 by default)
wait_for: CSS selector to wait for before returning the page content
proxy: Type of proxy, datacenter or residential (residential by default)
country: Country of the proxy to use (US by default). Supported countries: us, gb, de, it, fr, ca, es, ru, jp, kr, in
custom_proxy: Your own proxy URL in "http://user:password@host:port" format
device: Type of device emulation. Supported values: desktop, mobile, tablet
error_on_404: Return error on 404 HTTP status on the target page (false by default)
error_on_redirect: Return error on redirect on the target page (false by default)
js_script: Custom JavaScript code to execute on the target page
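As an illustration of how these options combine, here is a sketch of a webscraping_ai_question request that emulates a mobile device, treats a 404 as an error, and runs a short custom script before extraction; all values are illustrative:
{
  "name": "webscraping_ai_question",
  "arguments": {
    "url": "https://example.com",
    "question": "What is the price of the featured item?",
    "device": "mobile",
    "error_on_404": true,
    "js_script": "document.querySelector('.show-more')?.click();"
  }
}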
Error Handling
The server provides robust error handling:
Automatic retries for transient errors
Rate limit handling with backoff
Detailed error messages
Network resilience
Example error response:
{
"content": [
{
"type": "text",
"text": "API Error: 429 Too Many Requests"
}
],
"isError": true
}
Integration with LLMs
This server implements the Model Context Protocol, making it compatible with any MCP-enabled LLM platforms. You can configure your LLM to use these tools for web scraping tasks.
Example: Configuring Claude with MCP
const Anthropic = require('@anthropic-ai/sdk');
const { Client } = require('@modelcontextprotocol/sdk/client/index.js');
const { StdioClientTransport } = require('@modelcontextprotocol/sdk/client/stdio.js');

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY
});

// Launch the MCP server as a subprocess and communicate over stdio
const transport = new StdioClientTransport({
  command: 'npx',
  args: ['-y', 'webscraping-ai-mcp'],
  env: {
    WEBSCRAPING_AI_API_KEY: 'your-api-key'
  }
});

const client = new Client({
  name: 'claude-client',
  version: '1.0.0'
});
await client.connect(transport);

// Now you can use Claude with WebScraping.AI tools:
// fetch the tool definitions from the MCP server and pass them to Claude,
// renaming the MCP inputSchema field to the input_schema field the Messages API expects
const { tools } = await client.listTools();
const response = await anthropic.messages.create({
  model: 'claude-3-5-sonnet-latest',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'What is the main topic of example.com?' }],
  tools: tools.map(({ name, description, inputSchema }) => ({
    name,
    description,
    input_schema: inputSchema
  }))
});
Development
git clone https://github.com/webscraping-ai/webscraping-ai-mcp-server.git
cd webscraping-ai-mcp-server
# Install dependencies
npm install
# Run tests
npm test
# Add your .env file
cp .env.example .env
# Start the inspector
npx @modelcontextprotocol/inspector node src/index.js