<img src={require('./img/graph1.png').default} alt="GraphQL MCP Server" width="900" height="450" />
<br/>

AI assistants like Claude are powerful — but they become truly unstoppable when they can talk directly to your APIs. The **GraphQL MCP Server** bridges that gap: point it at any GraphQL endpoint and it instantly exposes every query and mutation as an AI-callable tool. No manual wiring. No hardcoded queries. Just pure, schema-driven automation.

This blog walks through the architecture, configuration, and internals of this server — explaining **what each component actually does** so you can deploy and extend it with confidence.

Official MCP Documentation: https://modelcontextprotocol.io/docs

Reference Implementation (GraphQL MCP Server): https://github.com/nifetency/nife-mcp-graphql

---

## Why Build a GraphQL MCP Server

Modern AI workflows demand real-time data. A static Claude conversation can summarize, reason, and generate — but it cannot fetch your live inventory, create a Jira ticket, or query your user database without a tool layer.

The GraphQL MCP Server solves this by:

- Auto-discovering every query and mutation from any GraphQL schema
- Generating typed MCP tools that Claude (or any MCP client) can invoke
- Eliminating the need to write a custom tool for every API operation
- Staying in sync with your API automatically — no maintenance required
- Running in both `stdio` mode (for Claude Desktop) and `http` mode (for Docker/Kubernetes)

---

## Step 1: Understanding the Architecture

<img src={require('./img/graph2.png').default} alt="GraphQL MCP Architecture" width="900" height="450" />
<br/>

The server is built from three core components that work together.
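Before looking at each component, it helps to see what "executing a query" means at the wire level: GraphQL over HTTP is just a POST with a JSON body. The following is a minimal illustrative sketch (not the package's actual internals) of the normalized `{ data, errors }` behavior described below, using the `requests` library the package depends on; the endpoint and token are placeholders.

```python
import requests

def execute_query(endpoint, query, variables=None, access_token=None):
    """Send a GraphQL query over HTTP POST and normalize the response."""
    headers = {"Content-Type": "application/json"}
    if access_token:
        # Bearer-token auth, attached only when a token is provided
        headers["Authorization"] = f"Bearer {access_token}"
    try:
        resp = requests.post(
            endpoint,
            json={"query": query, "variables": variables or {}},
            headers=headers,
            timeout=30,
        )
        resp.raise_for_status()
        body = resp.json()
        # Always return the same { data, errors } shape to callers
        return {"data": body.get("data"), "errors": body.get("errors", [])}
    except requests.RequestException as exc:
        # Timeouts, connection errors, and HTTP errors become error entries
        return {"data": None, "errors": [{"message": str(exc)}]}
```

Every tool call the server handles ultimately reduces to a request of this shape; the components below decide *which* query string gets sent.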
### The GraphQL Client

```python
from typing import Optional

class GraphQLClient:
    def __init__(self, endpoint: str, access_token: Optional[str] = None):
        self.endpoint = endpoint
        self.access_token = access_token
```

**What it does:**

- Sends HTTP POST requests to the GraphQL endpoint
- Attaches `Authorization: Bearer <token>` headers when a token is provided
- Handles timeouts, connection errors, and HTTP errors gracefully
- Returns a normalized `{ data, errors }` dict on every call

**When to use it:**

- It is the execution layer — every query and mutation eventually flows through `GraphQLClient.execute_query()`

---

### The Schema Manager

```python
class SchemaManager:
    def load_schema(self) -> bool: ...
    def build_query(self, query_name, args, fields, custom_fields) -> str: ...
    def build_mutation(self, mutation_name, input_data, return_fields) -> str: ...
```

**What it does:**

- Fires a full GraphQL introspection query on startup
- Parses the `__schema` response and populates three caches: `queries_cache`, `mutations_cache`, and `types_cache`
- Dynamically builds valid GraphQL query strings at runtime — no templates needed
- Supports three field selection modes: `auto` (scalars only), `all` (every field), and `custom` (caller-specified)

**Why this matters:**

- Your API can have 300 queries. Without introspection, you would hand-write 300 tool definitions. With this manager, it takes zero lines of code.

---

### The MCP Server

```python
class GraphQLMCPServer:
    async def initialize(self) -> bool: ...
    def _generate_tools(self): ...
    async def handle_tool_call(self, tool_name, arguments) -> dict: ...
    async def run(self): ...
```

**What it does:**

- Orchestrates initialization, tool generation, and request routing
- Exposes a JSON-RPC 2.0 interface over `stdin`/`stdout` (stdio mode) or HTTP
- Routes incoming tool calls to the correct query or mutation handler
- Adds six built-in utility tools for schema exploration

---

## Step 2: Installation

```bash
pip install graphql-mcp-server
```

Alternative ready-to-use MCP server (PyPI): https://pypi.org/project/nife-restapi-mcp-server/

The package requires Python 3.11+ and depends on `requests`, `aiohttp`, and `python-dotenv`.

---

## Step 3: Configuration

<img src={require('./img/graph3.png').default} alt="GraphQL MCP Configuration Methods" width="900" height="450" />
<br/>

There are **four ways** to configure the server, applied in priority order — higher entries win:

| Priority | Method | Best For |
|----------|--------|----------|
| 1 | CLI arguments | One-off runs, testing |
| 2 | Environment variables | Docker, CI/CD, shell scripts |
| 3 | `.env` file | Local development |
| 4 | Defaults | Nothing required |

### Method 1 — CLI Arguments

```bash
graphql-mcp-server --endpoint https://api.example.com/graphql --token mytoken
```

All available flags:

```
--endpoint URL       GraphQL API endpoint (required if not set via env)
--token TOKEN        API access token for authentication
--mode stdio|http    Server mode (default: stdio)
--port PORT          HTTP port, only used in http mode (default: 8080)
--host HOST          HTTP host, only used in http mode (default: 0.0.0.0)
--log-level LEVEL    Logging level: DEBUG, INFO, WARNING, ERROR (default: INFO)
--env-file PATH      Path to a custom .env file
```

### Method 2 — Environment Variables

```bash
export GRAPHQL_ENDPOINT=https://api.example.com/graphql
export API_ACCESS_TOKEN=your_token_here
export MCP_MODE=stdio
graphql-mcp-server
```

### Method 3 — `.env` File

Create a `.env` in your working directory:

```env
GRAPHQL_ENDPOINT=https://api.example.com/graphql
API_ACCESS_TOKEN=your_token_here
MCP_MODE=stdio
LOG_LEVEL=INFO
```

Then simply run:

```bash
graphql-mcp-server
```

### Method 4 — Claude Desktop Config (most common for MCP use)

No `.env` file needed. Pass everything via the `env` block in `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "graphql": {
      "command": "graphql-mcp-server",
      "env": {
        "GRAPHQL_ENDPOINT": "https://api.example.com/graphql",
        "API_ACCESS_TOKEN": "your_token_here",
        "MCP_MODE": "stdio",
        "ENABLE_HTTP_ENDPOINT": "false"
      }
    }
  }
}
```

Claude Desktop injects the `env` block values directly into the process — this is the standard MCP pattern.

---

## Step 4: Environment Variable Reference

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `GRAPHQL_ENDPOINT` | Yes | — | GraphQL API URL |
| `API_ACCESS_TOKEN` | No | — | Bearer token for auth |
| `MCP_MODE` | No | `stdio` | `stdio` or `http` |
| `ENABLE_HTTP_ENDPOINT` | No | `true` | Enable HTTP health/metrics |
| `MCP_SERVER_PORT` | No | `8080` | HTTP server port |
| `MCP_SERVER_HOST` | No | `0.0.0.0` | HTTP server host |
| `LOG_LEVEL` | No | `INFO` | Logging verbosity |
| `QUERY_TIMEOUT` | No | `30` | Request timeout in seconds |
| `SCHEMA_CACHE_TTL` | No | `3600` | Schema cache TTL in seconds |

---

## Step 5: How Schema Introspection Works

When the server starts, it fires this introspection query against your API:

```graphql
query IntrospectionQuery {
  __schema {
    queryType { name }
    mutationType { name }
    types {
      kind
      name
      description
      fields(includeDeprecated: true) {
        name
        description
        type { kind name ofType { kind name } }
        args {
          name
          type { kind name ofType { kind name } }
          defaultValue
        }
      }
      inputFields {
        name
        type { kind name ofType { kind name } }
      }
    }
  }
}
```

**What it does:**

- Fetches every type, field, argument, and return type in the schema
- Builds `queries_cache`, `mutations_cache`, and `types_cache` internally
- Resolves nested `NON_NULL` and `LIST` wrappers to get the real base type name

**When to use it:**

- This runs automatically on server start. You never call it manually.

---

## Step 6: Dynamic Tool Generation

<img src={require('./img/graph4.png').default} alt="GraphQL MCP Tool Generation" width="900" height="450" />
<br/>

Once the schema is loaded, tools are generated like this:

```python
for query in queries:
    tool_name = f"query_{query['name']}"
    self.tools[tool_name] = {
        "name": tool_name,
        "description": f"Execute query: {query['name']} - {query['description']}",
        "inputSchema": {
            "type": "object",
            "properties": {
                "args": {"type": "object", "default": {}},
                "fields": {
                    "type": "string",
                    "enum": ["auto", "all", "custom"],
                    "default": "auto",
                },
                "custom_fields": {
                    "type": "array",
                    "items": {"type": "string"},
                },
            },
        },
    }
```

**What it does:**

- Every query in your schema becomes a `query_<name>` MCP tool
- Every mutation becomes a `mutation_<name>` MCP tool
- Tool descriptions are pulled directly from your schema's `description` fields
- Input schemas are typed and validated

**Why it's powerful:**

- A 300-query API generates 300 tools automatically at startup

---

## Step 7: Built-in Utility Tools

Beyond your schema's own operations, the server adds six exploration tools:

- `list_available_queries` — list all available queries, with optional search filter
- `list_available_mutations` — list all available mutations
- `get_schema_info` — get stats or details about a specific type
- `get_query_signature` — get the human-readable signature of any query
- `execute_custom_query` — run a raw GraphQL query string with variables
- `health_check` — verify the server status, schema load state, and tool count

**Use case:**

- Drop Claude into a new project, ask it to `list_available_queries`, and it can immediately tell you what the API supports — no documentation needed.
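Routing between the generated `query_*`/`mutation_*` tools and these built-in utilities comes down to a name-prefix check. The sketch below is a simplified illustration, not the package's actual `handle_tool_call` implementation; the `handlers` dict and its lambdas are hypothetical stand-ins for the Schema Manager and GraphQL client calls.

```python
def route_tool_call(tool_name: str, arguments: dict, handlers: dict):
    """Dispatch an MCP tool call by its name prefix (simplified)."""
    if tool_name.startswith("query_"):
        # e.g. query_getUserById -> the schema's getUserById query
        return handlers["query"](tool_name[len("query_"):], arguments)
    if tool_name.startswith("mutation_"):
        return handlers["mutation"](tool_name[len("mutation_"):], arguments)
    if tool_name in handlers["utility"]:
        # one of the six built-in exploration tools
        return handlers["utility"][tool_name](arguments)
    return {"error": f"Unknown tool: {tool_name}"}

# Hypothetical handlers, for illustration only
handlers = {
    "query": lambda name, args: {"executed_query": name, "args": args},
    "mutation": lambda name, args: {"executed_mutation": name, "args": args},
    "utility": {"health_check": lambda args: {"status": "ok"}},
}
```

The prefix convention is what makes the scheme collision-free: generated tool names are always namespaced, so a schema query named `health_check` would surface as `query_health_check` rather than shadowing the utility tool.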
---

## Step 8: Operating Modes

### stdio Mode — for Claude Desktop and MCP Clients

```bash
graphql-mcp-server --endpoint https://api.example.com/graphql --mode stdio
```

**What it does:**

- Communicates via `stdin`/`stdout`
- No HTTP server is started
- This is the default mode

**When to use it:**

- Local development with Claude Desktop, Cursor, or any MCP-compatible client

### HTTP Mode — for Docker and Kubernetes

```bash
graphql-mcp-server --endpoint https://api.example.com/graphql --mode http --port 8080
```

**What it does:**

- Starts an `aiohttp` HTTP server
- Keeps the process alive with `asyncio.Event().wait()`
- Exposes four HTTP endpoints:

```
GET /health   → server health and schema status
GET /metrics  → uptime, tool counts, schema stats
GET /schema   → schema summary
GET /tools    → all generated tools with type labels
```

---

## Step 9: Docker Deployment

```bash
docker build -t graphql-mcp-server .
docker run -p 8080:8080 \
  -e GRAPHQL_ENDPOINT=https://api.example.com/graphql \
  -e API_ACCESS_TOKEN=your_token \
  -e MCP_MODE=http \
  graphql-mcp-server
```

Or with Docker Compose:

```bash
cp .env.example .env
docker-compose up
```

⚠️ Important: Always use `MCP_MODE=http` in Docker. The `stdio` mode waits for stdin input and will cause the container to exit immediately when run detached.

---

## Real Scenario

```bash
graphql-mcp-server --endpoint https://myapi.com/graphql --mode stdio
```

→ Server starts and introspects the schema

```
✓ GraphQL schema loaded successfully
  - 42 queries
  - 18 mutations
  - 97 types
✓ Server ready with 66 tools!
```

Claude now calls `list_available_queries` → discovers `getUserById`, `listOrders`, `getProductInventory`

```graphql
query GetUserById {
  getUserById(id: "123") {
    id
    name
    email
    createdAt
  }
}
```

→ Schema Manager auto-builds and executes this query, returning live data to Claude

---

## Best Practices

- Set `SCHEMA_CACHE_TTL` to avoid re-introspecting on every restart
- Use `fields: "auto"` (scalar-only) by default to keep responses lean
- Use `fields: "custom"` when you need nested object fields
- Always use HTTP mode in containerized deployments
- Use `get_query_signature` before executing unfamiliar queries
- Use `execute_custom_query` for one-off complex queries needing full GQL control

---

## Troubleshooting

**`[ERROR] GraphQL endpoint not configured`**

Set `GRAPHQL_ENDPOINT` via any method above:

```bash
graphql-mcp-server --endpoint https://your-api.com/graphql
```

**`Introspection response has no 'data' field`**

Check that your endpoint URL includes the full `/graphql` path. A common mistake is pointing at `https://api.example.com` instead of `https://api.example.com/graphql`.

**Docker container exits immediately**

Switch to HTTP mode:

```bash
docker run -e MCP_MODE=http -e GRAPHQL_ENDPOINT=... graphql-mcp-server
```

**Port already in use**

Change the port:

```bash
graphql-mcp-server --mode http --port 9090
```

---

## Conclusion

The GraphQL MCP Server turns any GraphQL API into a fully AI-callable tool layer with zero manual configuration. By leveraging schema introspection, it stays in sync with your API automatically — no stale tool definitions, no missed operations, no maintenance burden.

Whether you are building an AI assistant that manages your SaaS platform, a Claude workflow that reads and writes to your database, or a DevOps bot that queries your infrastructure APIs, this server gives you a production-ready foundation to build on.

Reference: https://github.com/modelcontextprotocol/specification