## What is MCP?
The Model Context Protocol (MCP) is an open protocol created by Anthropic that standardizes how AI applications connect to external data sources and tools. Think of it as the "USB-C of AI" — a universal connector that lets any AI model talk to any tool through a consistent interface.
Before MCP, every AI tool had to build custom integrations for every service. Want Claude to query your database? Build a custom integration. Want it to read your Notion docs? Another custom integration. MCP solves this by defining a standard protocol: build one MCP server for your database, and every MCP-compatible AI tool can use it.
## MCP Architecture at a Glance
- MCP Host: The AI application (Claude Code, Claude Desktop, Cursor) that wants to use tools
- MCP Client: Built into the host, manages connections to MCP servers
- MCP Server: A lightweight service that exposes tools, resources, or prompts to the AI
- Transport: How client and server communicate (stdio for local, HTTP/SSE for remote)
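Whatever the transport, the wire format underneath is the same: JSON-RPC 2.0. A simplified sketch of the requests a client issues during a session — the method names (`initialize`, `tools/list`, `tools/call`) come from the MCP spec, while the tool name and arguments shown are hypothetical:

```typescript
// Sketch of the JSON-RPC 2.0 requests behind an MCP session (simplified).
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

// 1. Handshake: the client announces itself and negotiates a protocol version.
const initialize: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    capabilities: {},
    clientInfo: { name: "example-host", version: "0.1.0" },
  },
};

// 2. Discovery: the client asks what tools the server exposes.
const listTools: JsonRpcRequest = { jsonrpc: "2.0", id: 2, method: "tools/list" };

// 3. Invocation: the client calls a tool with arguments matching its input schema.
const callTool: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: { name: "query_database", arguments: { sql: "SELECT 1" } },
};

console.log([initialize, listTools, callTool].map((r) => r.method).join(" -> "));
// → initialize -> tools/list -> tools/call
```

The SDKs hide these messages entirely — you define tools and handlers, and serialization, routing, and the lifecycle handshake happen for you.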
## What MCP Servers Expose
An MCP server can expose three types of capabilities:
| Capability | Description | Example |
|---|---|---|
| Tools | Functions the AI can call with parameters | query_database(sql), create_issue(title, body) |
| Resources | Data the AI can read (like files or records) | Database schemas, config files, API docs |
| Prompts | Pre-built prompt templates for common tasks | analyze_table(table_name), review_pr(pr_number) |
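Concretely, each capability is just structured metadata the server advertises: tools carry a JSON Schema describing their inputs, resources a URI and MIME type, prompts a name plus arguments. A rough sketch of those shapes — simplified from the spec, with hypothetical example entries:

```typescript
// Simplified shapes for the three capability types (not the full spec).
type Tool = { name: string; description: string; inputSchema: object };
type Resource = { uri: string; name: string; mimeType?: string };
type Prompt = { name: string; arguments?: { name: string; required?: boolean }[] };

// Hypothetical entries a server might advertise:
const tools: Tool[] = [{
  name: "query_database",
  description: "Run a read-only SQL query",
  inputSchema: {
    type: "object",
    properties: { sql: { type: "string" } },
    required: ["sql"],
  },
}];

const resources: Resource[] = [{
  uri: "file:///app/config.json",
  name: "App config",
  mimeType: "application/json",
}];

const prompts: Prompt[] = [{
  name: "analyze_table",
  arguments: [{ name: "table_name", required: true }],
}];
```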
## Popular MCP Servers
| Server | What It Does | Use Case |
|---|---|---|
| filesystem | Read/write files with access controls | Safely expose specific directories to AI |
| postgres / sqlite | Query databases with read-only or read-write access | Let AI explore and query your database |
| github | Create issues, PRs, read repos, manage branches | AI-powered project management and code review |
| slack | Read channels, send messages, search history | AI assistant that answers questions from Slack context |
| puppeteer | Browser automation — navigate, click, screenshot | AI-driven testing, web scraping, visual verification |
| memory | Persistent key-value knowledge store | AI remembers context across sessions |
| fetch | Make HTTP requests to APIs | Query external services, webhooks |
## Connecting MCP to Claude Code
MCP servers are configured in your Claude Code settings or project configuration. Here is how to set up common MCP servers:
```json
// .claude/settings.json — MCP server configuration
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost:5432/mydb"
      }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y", "@modelcontextprotocol/server-filesystem",
        "/path/to/allowed/directory"
      ]
    },
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```
## Building a Custom MCP Server
When existing MCP servers do not cover your needs, you can build your own. Here is a complete example of an MCP server that queries your application's database and provides domain-specific tools.
```typescript
// custom-mcp-server/src/index.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import pg from "pg";

const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
});

const server = new McpServer({
  name: "acme-internal",
  version: "1.0.0",
});

// Tool: Query users by various criteria
server.tool(
  "search_users",
  "Search for users by name, email, or role",
  {
    query: z.string().describe("Search term for name or email"),
    role: z.enum(["admin", "editor", "viewer"]).optional()
      .describe("Filter by user role"),
    limit: z.number().default(10).describe("Max results to return"),
  },
  async ({ query, role, limit }) => {
    let sql = "SELECT id, name, email, role, created_at FROM users WHERE 1=1";
    const params: any[] = [];
    if (query) {
      params.push(`%${query}%`);
      sql += ` AND (name ILIKE $${params.length} OR email ILIKE $${params.length})`;
    }
    if (role) {
      params.push(role);
      sql += ` AND role = $${params.length}`;
    }
    params.push(limit);
    sql += ` LIMIT $${params.length}`;
    const result = await pool.query(sql, params);
    return {
      content: [{
        type: "text",
        text: JSON.stringify(result.rows, null, 2),
      }],
    };
  }
);

// Tool: Get application metrics
server.tool(
  "get_metrics",
  "Get application metrics for a date range",
  {
    metric: z.enum(["signups", "revenue", "active_users", "churn"])
      .describe("Which metric to retrieve"),
    startDate: z.string().describe("Start date (YYYY-MM-DD)"),
    endDate: z.string().describe("End date (YYYY-MM-DD)"),
  },
  async ({ metric, startDate, endDate }) => {
    const queries: Record<string, string> = {
      signups: "SELECT DATE(created_at) as date, COUNT(*) as value FROM users WHERE created_at BETWEEN $1 AND $2 GROUP BY DATE(created_at) ORDER BY date",
      revenue: "SELECT DATE(created_at) as date, SUM(amount) as value FROM payments WHERE created_at BETWEEN $1 AND $2 GROUP BY DATE(created_at) ORDER BY date",
      active_users: "SELECT DATE(last_active) as date, COUNT(*) as value FROM users WHERE last_active BETWEEN $1 AND $2 GROUP BY DATE(last_active) ORDER BY date",
      churn: "SELECT DATE(cancelled_at) as date, COUNT(*) as value FROM subscriptions WHERE cancelled_at BETWEEN $1 AND $2 GROUP BY DATE(cancelled_at) ORDER BY date",
    };
    const result = await pool.query(queries[metric], [startDate, endDate]);
    return {
      content: [{
        type: "text",
        text: JSON.stringify({
          metric,
          period: `${startDate} to ${endDate}`,
          data: result.rows,
        }, null, 2),
      }],
    };
  }
);

// Resource: Expose database schema
server.resource(
  "schema",
  "database://schema",
  async () => {
    const result = await pool.query(`
      SELECT table_name, column_name, data_type, is_nullable
      FROM information_schema.columns
      WHERE table_schema = 'public'
      ORDER BY table_name, ordinal_position
    `);
    return {
      contents: [{
        uri: "database://schema",
        mimeType: "application/json",
        text: JSON.stringify(result.rows, null, 2),
      }],
    };
  }
);

// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);
```
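One detail in `search_users` worth calling out: filters are appended using positional placeholders (`$1`, `$2`, …) while the values go into a params array, so user input never lands in the SQL string itself. The same pattern, isolated as a hypothetical standalone helper:

```typescript
// Build a parameterized query the way search_users does: push the value,
// then reference it by its 1-based position — never interpolate input into SQL.
function buildSearchQuery(query?: string, role?: string, limit = 10) {
  let sql = "SELECT id, name, email, role FROM users WHERE 1=1";
  const params: unknown[] = [];
  if (query) {
    params.push(`%${query}%`);
    sql += ` AND (name ILIKE $${params.length} OR email ILIKE $${params.length})`;
  }
  if (role) {
    params.push(role);
    sql += ` AND role = $${params.length}`;
  }
  params.push(limit);
  sql += ` LIMIT $${params.length}`;
  return { sql, params };
}

const built = buildSearchQuery("alice", "admin", 5);
// built.params → ["%alice%", "admin", 5]
// built.sql ends with: AND (name ILIKE $1 OR email ILIKE $1) AND role = $2 LIMIT $3
```

This matters doubly for MCP servers: the "user" supplying arguments is an AI model, so treat every tool argument as untrusted input.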
```json
// package.json for the custom MCP server
{
  "name": "acme-mcp-server",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.0.0",
    "pg": "^8.13.0",
    "zod": "^3.23.0"
  },
  "devDependencies": {
    "@types/pg": "^8.11.0",
    "typescript": "^5.6.0"
  }
}
```
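After `npm run build`, the custom server plugs into the same `mcpServers` configuration shown earlier — the path below is a placeholder for wherever your compiled entry point lives:

```json
{
  "mcpServers": {
    "acme-internal": {
      "command": "node",
      "args": ["/path/to/custom-mcp-server/dist/index.js"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost:5432/mydb"
      }
    }
  }
}
```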
## When to Build a Custom MCP Server vs. Use an Existing One
| Build Custom When... | Use Existing When... |
|---|---|
| You need domain-specific queries with business logic | You just need raw database access |
| You want to enforce access controls or data masking | The existing server's security model is sufficient |
| You integrate with internal APIs not covered by existing servers | The service is a common one like GitHub, Slack, or PostgreSQL |
| You want to compose multiple data sources into unified tools | Simple, single-service access is enough |
## MCP is the Future of AI Tooling
MCP is rapidly becoming the standard way AI tools connect to the world. As the ecosystem matures, expect MCP servers for every major service, database, and API. Learning to build and use MCP servers now puts you ahead of the curve. Start with existing servers for your common tools, then build custom servers for your domain-specific needs.
## Summary
MCP standardizes how AI connects to external tools and data. Instead of building custom integrations, you configure MCP servers that any AI tool can use. Start with the popular servers (postgres, github, filesystem), then build custom servers when you need domain-specific functionality. The protocol is simple — define tools with input schemas, implement handlers, and connect via stdio or HTTP. This infrastructure layer is what transforms AI from a code generator into a development partner with real-world awareness.