# MCP Python SDK
The Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. This Python SDK implements the full MCP specification, making it easy to:
- Build MCP clients that can connect to any MCP server
- Create MCP servers that expose resources, prompts and tools
- Use standard transports like stdio and SSE
- Handle all MCP protocol messages and lifecycle events
## Installation

We recommend using uv to manage your Python projects.
If you haven't created a uv-managed project yet, create one:
```bash
uv init mcp-server-demo
cd mcp-server-demo
```
Then add MCP to your project dependencies:
```bash
uv add "mcp[cli]"
```
Alternatively, for projects using pip for dependencies:
```bash
pip install "mcp[cli]"
```
To run the mcp command with uv:
```bash
uv run mcp
```
## Quickstart

Let's create a simple MCP server that exposes a calculator tool and some data:
```python
# server.py
from mcp.server.fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("Demo")


# Add an addition tool
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b


# Add a dynamic greeting resource
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Get a personalized greeting"""
    return f"Hello, {name}!"
```
You can install this server in Claude Desktop and interact with it right away by running:
```bash
mcp install server.py
```
Alternatively, you can test it with the MCP Inspector:
```bash
mcp dev server.py
```
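You can also exercise the server programmatically with the SDK's client (covered in more detail under Writing MCP Clients below). Here is a minimal sketch, assuming the quickstart code above is saved as `server.py`; exact return types may vary by SDK version:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the quickstart server over stdio via the mcp CLI
server_params = StdioServerParameters(command="mcp", args=["run", "server.py"])


async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Call the add tool and read the greeting resource
            result = await session.call_tool("add", arguments={"a": 2, "b": 3})
            greeting = await session.read_resource("greeting://World")
            print(result, greeting)


asyncio.run(main())
```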
## What is MCP?

The Model Context Protocol (MCP) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:

- Expose data through Resources (roughly like GET endpoints; they load information into the LLM's context)
- Provide functionality through Tools (roughly like POST endpoints; they execute code or otherwise produce a side effect)
- Define interaction patterns through Prompts (reusable templates for LLM interactions)
- And more!
## Core Concepts

### Server

The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:
```python
# Add lifespan support for startup/shutdown with strong typing
from contextlib import asynccontextmanager
from collections.abc import AsyncIterator
from dataclasses import dataclass

from fake_database import Database  # Replace with your actual DB type

from mcp.server.fastmcp import Context, FastMCP

# Create a named server
mcp = FastMCP("My App")

# Specify dependencies for deployment and development
mcp = FastMCP("My App", dependencies=["pandas", "numpy"])


@dataclass
class AppContext:
    db: Database


@asynccontextmanager
async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
    """Manage application lifecycle with type-safe context"""
    # Initialize on startup
    db = await Database.connect()
    try:
        yield AppContext(db=db)
    finally:
        # Cleanup on shutdown
        await db.disconnect()


# Pass lifespan to server
mcp = FastMCP("My App", lifespan=app_lifespan)


# Access type-safe lifespan context in tools
@mcp.tool()
def query_db(ctx: Context) -> str:
    """Tool that uses initialized resources"""
    db = ctx.request_context.lifespan_context.db
    return db.query()
```
### Resources

Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API: they provide data but shouldn't perform significant computation or have side effects:
frommcp.server.fastmcpimportFastMCPmcp=FastMCP("My App") @mcp.resource("config://app")defget_config() ->str: """Static configuration data"""return"App configuration here"@mcp.resource("users://{user_id}/profile")defget_user_profile(user_id: str) ->str: """Dynamic user data"""returnf"Profile data for user {user_id}"
### Tools

Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects:
```python
import httpx

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My App")


@mcp.tool()
def calculate_bmi(weight_kg: float, height_m: float) -> float:
    """Calculate BMI given weight in kg and height in meters"""
    return weight_kg / (height_m**2)


@mcp.tool()
async def fetch_weather(city: str) -> str:
    """Fetch current weather for a city"""
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.weather.com/{city}")
        return response.text
```
### Prompts

Prompts are reusable templates that help LLMs interact with your server effectively:
```python
from mcp.server.fastmcp import FastMCP
from mcp.server.fastmcp.prompts import base

mcp = FastMCP("My App")


@mcp.prompt()
def review_code(code: str) -> str:
    return f"Please review this code:\n\n{code}"


@mcp.prompt()
def debug_error(error: str) -> list[base.Message]:
    return [
        base.UserMessage("I'm seeing this error:"),
        base.UserMessage(error),
        base.AssistantMessage("I'll help debug that. What have you tried so far?"),
    ]
```
### Images

FastMCP provides an `Image` class that automatically handles image data:
```python
import io

from PIL import Image as PILImage

from mcp.server.fastmcp import FastMCP, Image

mcp = FastMCP("My App")


@mcp.tool()
def create_thumbnail(image_path: str) -> Image:
    """Create a thumbnail from an image"""
    img = PILImage.open(image_path)
    img.thumbnail((100, 100))
    # Encode as PNG; img.tobytes() would yield raw pixel data, not a valid PNG
    buffer = io.BytesIO()
    img.save(buffer, format="PNG")
    return Image(data=buffer.getvalue(), format="png")
```
### Context

The Context object gives your tools and resources access to MCP capabilities:
```python
from mcp.server.fastmcp import FastMCP, Context

mcp = FastMCP("My App")


@mcp.tool()
async def long_task(files: list[str], ctx: Context) -> str:
    """Process multiple files with progress tracking"""
    for i, file in enumerate(files):
        await ctx.info(f"Processing {file}")
        await ctx.report_progress(i, len(files))
        data, mime_type = await ctx.read_resource(f"file://{file}")
    return "Processing complete"
```
## Running Your Server

### Development Mode

The fastest way to test and debug your server is with the MCP Inspector:
```bash
mcp dev server.py

# Add dependencies
mcp dev server.py --with pandas --with numpy

# Mount local code
mcp dev server.py --with-editable .
```
### Claude Desktop Integration

Once your server is ready, install it in Claude Desktop:
```bash
mcp install server.py

# Custom name
mcp install server.py --name "My Analytics Server"

# Environment variables
mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://...
mcp install server.py -f .env
```
### Direct Execution

For advanced scenarios like custom deployments:
frommcp.server.fastmcpimportFastMCPmcp=FastMCP("My App") if__name__=="__main__": mcp.run()
Run it with:
```bash
python server.py
# or
mcp run server.py
```
### Mounting to an Existing ASGI Server

You can mount the SSE server to an existing ASGI server using the `sse_app` method. This allows you to integrate the SSE server with other ASGI applications.
```python
from starlette.applications import Starlette
from starlette.routing import Mount, Host

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My App")

# Mount the SSE server to the existing ASGI server
app = Starlette(
    routes=[
        Mount('/', app=mcp.sse_app()),
    ]
)

# or dynamically mount as host
app.router.routes.append(Host('mcp.acme.corp', app=mcp.sse_app()))
```
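To serve the mounted app, point any ASGI server at it; uvicorn is used here purely as an example, assuming the code above is saved as `server.py`:

```bash
uvicorn server:app --host 127.0.0.1 --port 8000
```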
For more information on mounting applications in Starlette, see the Starlette documentation.
## Examples

### Echo Server

A simple server demonstrating resources, tools, and prompts:
frommcp.server.fastmcpimportFastMCPmcp=FastMCP("Echo") @mcp.resource("echo://{message}")defecho_resource(message: str) ->str: """Echo a message as a resource"""returnf"Resource echo: {message}"@mcp.tool()defecho_tool(message: str) ->str: """Echo a message as a tool"""returnf"Tool echo: {message}"@mcp.prompt()defecho_prompt(message: str) ->str: """Create an echo prompt"""returnf"Please process this message: {message}"
### SQLite Explorer

A more complex example showing database integration:
```python
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("SQLite Explorer")


@mcp.resource("schema://main")
def get_schema() -> str:
    """Provide the database schema as a resource"""
    conn = sqlite3.connect("database.db")
    schema = conn.execute("SELECT sql FROM sqlite_master WHERE type='table'").fetchall()
    return "\n".join(sql[0] for sql in schema if sql[0])


@mcp.tool()
def query_data(sql: str) -> str:
    """Execute SQL queries safely"""
    conn = sqlite3.connect("database.db")
    try:
        result = conn.execute(sql).fetchall()
        return "\n".join(str(row) for row in result)
    except Exception as e:
        return f"Error: {str(e)}"
```
## Advanced Usage

### Low-Level Server

For more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API:
```python
from contextlib import asynccontextmanager
from collections.abc import AsyncIterator

from fake_database import Database  # Replace with your actual DB type

from mcp.server import Server


@asynccontextmanager
async def server_lifespan(server: Server) -> AsyncIterator[dict]:
    """Manage server startup and shutdown lifecycle."""
    # Initialize resources on startup
    db = await Database.connect()
    try:
        yield {"db": db}
    finally:
        # Clean up on shutdown
        await db.disconnect()


# Pass lifespan to server
server = Server("example-server", lifespan=server_lifespan)


# Access lifespan context in handlers
@server.call_tool()
async def query_db(name: str, arguments: dict) -> list:
    ctx = server.request_context
    db = ctx.lifespan_context["db"]
    return await db.query(arguments["query"])
```
The lifespan API provides:
- A way to initialize resources when the server starts and clean them up when it stops
- Access to initialized resources through the request context in handlers
- Type-safe context passing between lifespan and request handlers
A complete stdio server example using the low-level API:

```python
import mcp.server.stdio
import mcp.types as types
from mcp.server.lowlevel import NotificationOptions, Server
from mcp.server.models import InitializationOptions

# Create a server instance
server = Server("example-server")


@server.list_prompts()
async def handle_list_prompts() -> list[types.Prompt]:
    return [
        types.Prompt(
            name="example-prompt",
            description="An example prompt template",
            arguments=[
                types.PromptArgument(
                    name="arg1", description="Example argument", required=True
                )
            ],
        )
    ]


@server.get_prompt()
async def handle_get_prompt(
    name: str, arguments: dict[str, str] | None
) -> types.GetPromptResult:
    if name != "example-prompt":
        raise ValueError(f"Unknown prompt: {name}")

    return types.GetPromptResult(
        description="Example prompt",
        messages=[
            types.PromptMessage(
                role="user",
                content=types.TextContent(type="text", text="Example prompt text"),
            )
        ],
    )


async def run():
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="example",
                server_version="0.1.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                ),
            ),
        )


if __name__ == "__main__":
    import asyncio

    asyncio.run(run())
```
### Writing MCP Clients

The SDK provides a high-level client interface for connecting to MCP servers:
```python
from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="python",  # Executable
    args=["example_server.py"],  # Optional command line arguments
    env=None,  # Optional environment variables
)


# Optional: create a sampling callback
async def handle_sampling_message(
    message: types.CreateMessageRequestParams,
) -> types.CreateMessageResult:
    return types.CreateMessageResult(
        role="assistant",
        content=types.TextContent(
            type="text",
            text="Hello, world! from model",
        ),
        model="gpt-3.5-turbo",
        stopReason="endTurn",
    )


async def run():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(
            read, write, sampling_callback=handle_sampling_message
        ) as session:
            # Initialize the connection
            await session.initialize()

            # List available prompts
            prompts = await session.list_prompts()

            # Get a prompt
            prompt = await session.get_prompt(
                "example-prompt", arguments={"arg1": "value"}
            )

            # List available resources
            resources = await session.list_resources()

            # List available tools
            tools = await session.list_tools()

            # Read a resource
            content, mime_type = await session.read_resource("file://some/path")

            # Call a tool
            result = await session.call_tool("tool-name", arguments={"arg1": "value"})


if __name__ == "__main__":
    import asyncio

    asyncio.run(run())
```
### MCP Primitives

The MCP protocol defines three core primitives that servers can implement:
| Primitive | Control | Description | Example Use |
| --- | --- | --- | --- |
| Prompts | User-controlled | Interactive templates invoked by user choice | Slash commands, menu options |
| Resources | Application-controlled | Contextual data managed by the client application | File contents, API responses |
| Tools | Model-controlled | Functions exposed to the LLM to take actions | API calls, data updates |
### Server Capabilities

MCP servers declare capabilities during initialization:
| Capability | Feature Flag | Description |
| --- | --- | --- |
| `prompts` | `listChanged` | Prompt template management |
| `resources` | `subscribe`, `listChanged` | Resource exposure and updates |
| `tools` | `listChanged` | Tool discovery and execution |
| `logging` | - | Server logging configuration |
| `completion` | - | Argument completion suggestions |
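On the client side, the capabilities a server declared are returned by `initialize()`. A rough sketch, reusing `example_server.py` from the client example above; the exact shape of the result object is worth checking against your SDK version:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(command="python", args=["example_server.py"])


async def show_capabilities():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # initialize() returns the server's InitializeResult,
            # including the capabilities it declared
            init_result = await session.initialize()
            print(init_result.capabilities)


asyncio.run(show_capabilities())
```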
## Documentation

- Model Context Protocol documentation
- Model Context Protocol specification
- Officially supported servers
## Contributing

We are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the contributing guide to get started.
## License

This project is licensed under the MIT License - see the LICENSE file for details.