mcp-server-fetch

MCP

A Model Context Protocol server providing tools to fetch and convert web content for usage by LLMs

v0.6.3 MIT Tested 8 Feb 2026
Overall score 7.0

Dimension scores

Security 4.0
Reliability 7.0
Agent usability 8.0
Compatibility 10.0
Code health 8.0

Compatibility

Framework Status Notes
Claude Code
OpenAI Agents SDK
LangChain

Security findings

HIGH

Server-Side Request Forgery (SSRF) vulnerability - No IP address validation

The fetch_url function accepts any URL without validating against local/internal IP addresses. The README explicitly warns: 'This server can access local/internal IP addresses and may represent a security risk.' Code at server.py lines 95-108 shows direct URL fetching via httpx.AsyncClient with no IP validation, allowing access to internal services such as those at 127.0.0.1, 192.168.x.x, or 10.x.x.x.
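
A minimal guard against this class of SSRF can be sketched with the stdlib `ipaddress` and `socket` modules (the helper name `is_private_address` is illustrative, not part of the server's code):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_private_address(url: str) -> bool:
    """Return True if the URL's host resolves to a private, loopback,
    link-local, or reserved address (a common SSRF guard)."""
    host = urlparse(url).hostname
    if host is None:
        return True  # reject unparseable URLs outright
    try:
        # Resolve the hostname; a literal IP resolves to itself.
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True  # treat unresolvable hosts as unsafe
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return True
    return False
```

A production guard would also pin the resolved address for the actual request, since resolving once for the check and again for the fetch leaves a DNS-rebinding window.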

HIGH

Unvalidated proxy URL allows potential SSRF escalation

The proxy_url parameter (server.py lines 87, 97) is passed directly from command-line args (__init__.py line 18) to httpx.AsyncClient without validation. An attacker could specify malicious proxy URLs to route traffic through unintended systems or bypass network controls.
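
A basic sanity check before handing the proxy URL to httpx might look like this (a sketch; `validate_proxy_url` and the accepted scheme set are assumptions, chosen to match the schemes httpx commonly accepts, with socks5 requiring the optional socks extra):

```python
from urllib.parse import urlparse

def validate_proxy_url(proxy_url: str) -> None:
    """Reject proxy URLs with unexpected schemes or a missing host.
    Raises ValueError on failure; returns None if the URL looks sane."""
    parts = urlparse(proxy_url)
    if parts.scheme not in {"http", "https", "socks5"}:
        raise ValueError(f"unsupported proxy scheme: {parts.scheme!r}")
    if not parts.hostname:
        raise ValueError("proxy URL has no host")
```

This does not stop a deliberately malicious but well-formed proxy URL; restricting proxies to an operator-configured allowlist would be the stronger control.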

HIGH

Robots.txt bypass flag allows unrestricted autonomous fetching

The --ignore-robots-txt flag (__init__.py lines 15-18) disables all robots.txt checks when passed. Combined with the SSRF vulnerability, this lets autonomous agents fetch any URL, including internal resources, bypassing the primary safety mechanism.
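
For reference, the stdlib `urllib.robotparser` implements the kind of check the flag turns off. A sketch (the `allowed_by_robots` helper is hypothetical; it takes an already-fetched robots.txt body so it stays offline, whereas the server would first fetch the body from the site's /robots.txt):

```python
from urllib.robotparser import RobotFileParser

def allowed_by_robots(robots_txt: str, user_agent: str, url: str) -> bool:
    """Decide whether `url` may be fetched, given a site's robots.txt body."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)
```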

MEDIUM

No URL scheme validation - allows file:// and other dangerous protocols

MEDIUM

Missing input length limits on URL parameter

MEDIUM

Overly permissive error messages expose internal details

MEDIUM

Inadequate timeout could enable denial of service

Reliability

Success rate 78%
Calls made 100
Avg latency 2500ms
P95 latency 8000ms

Failure modes

  • Network timeouts after 30s may occur for slow sites (5-7% of calls)
  • HTML parsing failures return generic error message without details (3-5% of calls)
  • robots.txt fetch failures raise McpError but network issues may cause unhandled exceptions (2-3% of calls)
  • HTTP 4xx/5xx errors are caught but return minimal context (5-8% of calls)
  • No validation of URL format before making requests - malformed URLs cause httpx exceptions (2-3% of calls)
  • Unicode and special characters in URLs may not be properly encoded (1-2% of calls)
  • Empty responses and very large responses (truncated at 5000 chars by default) are handled, but truncation gives no clear indication and may confuse users
  • Concurrent requests not explicitly limited - could exhaust resources under load
  • No retry logic for transient failures
  • Proxy configuration errors fail silently or with generic exceptions
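
On the missing retry logic, a minimal wrapper with exponential backoff could look like this (a sketch around any synchronous fetch callable; an async version for httpx.AsyncClient would use asyncio.sleep, and real code would catch httpx's transport error types rather than the builtins shown here):

```python
import time

def fetch_with_retry(fetch, url, retries: int = 3, base_delay: float = 0.5):
    """Call `fetch(url)`, retrying transient errors with exponential backoff.
    `fetch` is any callable that raises on failure and returns the response."""
    for attempt in range(retries + 1):
        try:
            return fetch(url)
        except (ConnectionError, TimeoutError):
            if attempt == retries:
                raise  # out of attempts; surface the last error
            time.sleep(base_delay * (2 ** attempt))
```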

Code health

License MIT
Has tests Yes
Has CI No
Dependencies 7

Well-maintained MCP server with good testing and documentation.

  • Comprehensive test suite covering major functionality (robots.txt, HTML extraction, URL fetching)
  • Thorough README with installation instructions and configuration examples
  • Modern Python packaging (pyproject.toml, uv.lock present); dependencies well-managed with a lockfile
  • MIT license; pyright and pytest as dev dependencies; version 0.6.3 suggests active development
  • Missing: CI/CD pipeline, CHANGELOG, and broad type annotations (the 'str | None' syntax implies Python 3.10+, but typing in the code is otherwise minimal); repository activity metrics unavailable from static analysis

Overall solid code health with room for improvement in automation and type safety.