AI Integration
Padrone provides three ways to expose your CLI to AI assistants and external services:
- Model Context Protocol (MCP) (experimental) — Standard protocol supported by Claude, Cursor, Windsurf, and other AI tools. Works over HTTP or stdio.
- REST Server (experimental) — HTTP endpoints with OpenAPI docs. Each command becomes a route.
- Vercel AI SDK — Programmatic integration for building AI-powered applications.
Model Context Protocol (MCP) (experimental)
Experimental: This API is experimental and may change in future releases.
The Model Context Protocol is an open standard that lets AI assistants discover and use your CLI commands as tools. Padrone implements the 2025-11-25 MCP spec with Streamable HTTP and stdio transports.
Quick Start
Every Padrone program has a built-in mcp command:
```sh
# Start an MCP server over HTTP (default)
myapp mcp

# Start over stdio (for local tool integration)
myapp mcp stdio

# Custom port and host
myapp mcp --port 8080 --host 0.0.0.0
```

How It Works
When you run myapp mcp, Padrone:
- Collects all non-hidden commands that have an action or schema
- Exposes each as an MCP tool with a JSON Schema derived from your Zod definitions
- Handles the JSON-RPC protocol (initialize, tools/list, tools/call, ping, etc.)
For example, a CLI with greet and deploy commands becomes two MCP tools that AI assistants can discover and call.
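For illustration, a tools/list exchange for such a CLI might look like the sketch below. The descriptions are made up for this example; the message shapes follow the MCP JSON-RPC protocol, and the actual inputSchema contents come from your Zod definitions.

Request:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
```

Response (abridged):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      { "name": "greet", "description": "Greet someone by name", "inputSchema": { "type": "object" } },
      { "name": "deploy", "description": "Deploy the app", "inputSchema": { "type": "object" } }
    ]
  }
}
```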
Programmatic Usage
You can also start the MCP server from code:
```ts
import { createPadrone } from 'padrone';
import * as z from 'zod/v4';

const program = createPadrone('myapp')
  .configure({ version: '1.0.0' })
  .command('greet', (c) =>
    c
      .arguments(z.object({ name: z.string().describe('Name to greet') }), { positional: ['name'] })
      .action((args) => `Hello, ${args.name}!`)
  );

// Start MCP server programmatically
await program.mcp({ port: 3000, host: '127.0.0.1' });
```

Configuration
The .mcp() method and mcp command accept these options:
| Option | Type | Default | Description |
|---|---|---|---|
| transport | 'http' \| 'stdio' | 'http' | Transport mode |
| port | number | 3000 | HTTP port |
| host | string | '127.0.0.1' | HTTP host |
| basePath | string | '/mcp' | HTTP endpoint path |
| name | string | program name | Server name |
| version | string | program version | Server version |
| cors | string \| false | '*' | CORS allowed origin, or false to disable |
Transports
Streamable HTTP (default) — Starts an HTTP server. Responds with application/json or text/event-stream (SSE) based on the client’s Accept header, per the MCP spec. Includes session management with MCP-Session-Id headers.
stdio — Communicates over stdin/stdout with newline-delimited JSON. Use this when the AI tool launches your CLI as a subprocess (e.g., Claude Desktop, mcp-cli).
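For example, to have Claude Desktop launch your CLI as a stdio MCP server, an entry along these lines in its claude_desktop_config.json typically works (the "myapp" key and command path are placeholders for your own install):

```json
{
  "mcpServers": {
    "myapp": {
      "command": "myapp",
      "args": ["mcp", "stdio"]
    }
  }
}
```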
Disabling MCP
To disable the built-in mcp command in cli():
```ts
program.cli({ mcp: false });
```

Tool Naming
Commands are exposed as MCP tools using dot-separated names: nested.sub for a subcommand sub under nested. This follows the MCP tool naming spec ([A-Za-z0-9_\-\.]).
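As a sketch of that rule, a command path maps to a tool name like this (toToolName is a hypothetical helper for illustration, not part of Padrone's API):

```ts
// Hypothetical helper (not Padrone's API) showing the dot-separated naming rule.
const MCP_TOOL_NAME = /^[A-Za-z0-9_\-.]+$/;

function toToolName(path: string[]): string {
  const name = path.join('.');
  if (!MCP_TOOL_NAME.test(name)) {
    throw new Error(`Invalid MCP tool name: ${name}`);
  }
  return name;
}

console.log(toToolName(['nested', 'sub'])); // → "nested.sub"
```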
Tips for AI Readability
Use .describe() on your Zod fields and .configure({ description }) on commands — these become the tool descriptions that AI models read to understand your CLI.
REST Server (experimental)
Experimental: This API is experimental and may change in future releases.
Padrone can expose your CLI as a REST API with automatic OpenAPI documentation. Each command becomes an HTTP endpoint.
Quick Start
Every Padrone program has a built-in serve command:
```sh
# Start a REST server (default port 3000)
myapp serve

# Custom port and host
myapp serve --port 8080 --host 0.0.0.0

# Custom base path
myapp serve --base-path /api/
```

How It Works
When you run myapp serve, Padrone:
- Collects all non-hidden commands that have an action or schema
- Maps each to a URL path (e.g., users list → /users/list)
- For each request, converts query params (GET) or JSON body (POST) to CLI flags and calls eval()
- Returns structured JSON responses
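To make the mapping concrete, here is a small illustrative sketch. These helpers are not Padrone's actual internals; they just show the shape of the two transformations described above:

```ts
// Illustrative only — not Padrone's internals.

// users list → /users/list
function commandToRoute(path: string[]): string {
  return '/' + path.join('/');
}

// Query params (GET) or a JSON body (POST) become CLI-style flags for eval().
function toFlags(params: Record<string, string>): string[] {
  return Object.entries(params).flatMap(([key, value]) => [`--${key}`, value]);
}

console.log(commandToRoute(['users', 'list'])); // → "/users/list"
console.log(toFlags({ name: 'Alice' })); // → ["--name", "Alice"]
```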
Mutation Commands
Commands configured with mutation: true only accept POST requests. This is useful for commands that create, update, or delete data:
```ts
const program = createPadrone('api')
  .command('users', (c) =>
    c
      .command('list', (c) => c.action(() => db.users.findMany()))
      .command('create', (c) =>
        c
          .configure({ mutation: true })
          .arguments(z.object({ name: z.string(), email: z.string() }))
          .action((args) => db.users.create(args))
      )
  );

await program.serve({ port: 3000 });
// GET  /users/list                  → 200 OK
// POST /users/create { "name": "Alice" } → 200 OK
// GET  /users/create?name=Alice    → 405 Method Not Allowed
```

The mutation flag also affects MCP (sets annotations.destructiveHint) and Vercel AI SDK (defaults needsApproval to true).
Programmatic Usage
```ts
await program.serve({
  port: 3000,
  host: '127.0.0.1',
  basePath: '/api/',
  cors: 'https://example.com',
});
```

Configuration
| Option | Type | Default | Description |
|---|---|---|---|
| port | number | 3000 | HTTP port |
| host | string | '127.0.0.1' | HTTP host |
| basePath | string | '/' | Base path prefix for all routes |
| cors | string \| false | '*' | CORS allowed origin, or false to disable |
| builtins | object | all true | Toggle built-in endpoints (health, help, schema, docs) |
| onRequest | function | — | Hook to run before each request (auth, rate-limiting) |
| onError | function | — | Custom error response handler |
Built-in Endpoints
| Endpoint | Description |
|---|---|
| GET /_health | Returns { status: "ok" } |
| GET /_help | Program help (JSON or markdown based on Accept header) |
| GET /_help/:path | Command-specific help |
| GET /_schema | JSON Schema map of all commands |
| GET /_schema/:path | JSON Schema for a single command |
| GET /_docs | Interactive API documentation (powered by Scalar) |
| GET /_openapi | Raw OpenAPI 3.1.0 JSON spec |
Response Format
Success (200):
```json
{ "ok": true, "result": <action return value> }
```

Validation error (400):

```json
{ "ok": false, "error": "validation", "issues": [{ "path": ["name"], "message": "Required" }] }
```

Not found (404):

```json
{ "ok": false, "error": "not_found", "message": "Command not found: users update" }
```

Disabling Serve
To disable the built-in serve command in cli():
```ts
program.cli({ serve: false });
```

Vercel AI SDK
Padrone provides first-class support for the Vercel AI SDK, allowing you to expose your CLI commands as tools that AI models can use.
Overview
The .tool() method converts your Padrone program into a Vercel AI SDK compatible tool. This lets AI assistants:
- Understand your CLI’s capabilities through the schema
- Execute commands with proper type validation
- Receive structured responses
Basic Setup
```ts
import { streamText } from 'ai';
import { createPadrone } from 'padrone';
import * as z from 'zod/v4';

// Define your CLI
const weatherCli = createPadrone('weather')
  .command('current', (c) =>
    c
      .configure({ description: 'Get current weather for a city' })
      .arguments(
        z.object({
          city: z.string().describe('City name'),
          units: z.enum(['celsius', 'fahrenheit']).default('celsius'),
        }),
        { positional: ['city'] }
      )
      .action(async (args) => {
        // Fetch weather data...
        return {
          city: args.city,
          temperature: 22,
          units: args.units,
          condition: 'Sunny',
        };
      })
  )
  .command('forecast', (c) =>
    c
      .configure({ description: 'Get weather forecast' })
      .arguments(
        z.object({
          city: z.string().describe('City name'),
          days: z.number().default(3).describe('Number of days'),
        }),
        { positional: ['city'] }
      )
      .action(async (args) => {
        return {
          city: args.city,
          forecast: [
            { day: 'Mon', temp: 22 },
            { day: 'Tue', temp: 24 },
            { day: 'Wed', temp: 20 },
          ].slice(0, args.days),
        };
      })
  );

// Convert to AI tool
const weatherTool = weatherCli.tool();
```

Using with AI Models
Pass the tool to any Vercel AI SDK function:
```ts
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = await streamText({
  model: anthropic('claude-sonnet-4-20250514'),
  prompt: "What's the weather like in London?",
  tools: {
    weather: weatherTool,
  },
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```

The AI model will:
- Understand the available commands from the tool schema
- Choose the appropriate command (weather current)
- Provide the required args (city: 'London')
- Execute the command and use the response
Return Values
Your action handlers should return data that the AI can use:
```ts
.action(async (args) => {
  // Return structured data for the AI
  return {
    status: 'success',
    data: { /* ... */ },
  };
})
```

The return value is passed back to the AI model, allowing it to incorporate the results into its response.
Multiple Tools
You can provide multiple Padrone CLIs as separate tools:
```ts
const weatherCli = createPadrone('weather').command(/* ... */);
const calendarCli = createPadrone('calendar').command(/* ... */);
const notesCli = createPadrone('notes').command(/* ... */);

const result = await streamText({
  model: yourModel,
  prompt: 'Check the weather in Paris and add it to my calendar',
  tools: {
    weather: weatherCli.tool(),
    calendar: calendarCli.tool(),
    notes: notesCli.tool(),
  },
});
```

Tool Schema
The .tool() method generates a JSON schema from your Zod definitions. The descriptions you provide with .describe() help the AI understand how to use each argument:
```ts
z.object({
  city: z.string().describe('The name of the city to get weather for'),
  units: z
    .enum(['celsius', 'fahrenheit'])
    .default('celsius')
    .describe('Temperature units (celsius or fahrenheit)'),
})
```

Good descriptions improve AI accuracy when selecting and using your tools.
Error Handling
Handle errors gracefully so the AI can respond appropriately:
```ts
.action(async (args) => {
  try {
    const data = await fetchWeather(args.city);
    return { success: true, data };
  } catch (error) {
    return {
      success: false,
      error: `Could not fetch weather for ${args.city}`,
    };
  }
})
```

Real-World Example
Here’s a complete example of a task management CLI exposed as an AI tool:
```ts
import { streamText } from 'ai';
import { createPadrone } from 'padrone';
import * as z from 'zod/v4';

const tasks = createPadrone('tasks')
  .command('add', (c) =>
    c
      .configure({ description: 'Add a new task' })
      .arguments(
        z.object({
          title: z.string().describe('Task title'),
          priority: z.enum(['low', 'medium', 'high']).default('medium'),
          dueDate: z.string().optional().describe('Due date (YYYY-MM-DD)'),
        }),
        { positional: ['title'] }
      )
      .action((args) => {
        // Save task to database...
        return { id: 'task-1', ...args, status: 'created' };
      })
  )
  .command('list', (c) =>
    c
      .configure({ description: 'List all tasks' })
      .arguments(
        z.object({
          status: z.enum(['all', 'pending', 'completed']).default('all'),
        })
      )
      .action((args) => {
        // Fetch from database...
        return {
          tasks: [
            { id: '1', title: 'Buy groceries', status: 'pending' },
            { id: '2', title: 'Call mom', status: 'completed' },
          ],
        };
      })
  )
  .command('complete', (c) =>
    c
      .configure({ description: 'Mark a task as completed' })
      .arguments(
        z.object({
          id: z.string().describe('Task ID'),
        }),
        { positional: ['id'] }
      )
      .action((args) => {
        return { id: args.id, status: 'completed' };
      })
  );

// Use with AI
const result = await streamText({
  model: yourModel,
  prompt: 'Add a high priority task to buy milk, then show me all my tasks',
  tools: {
    tasks: tasks.tool(),
  },
  maxSteps: 5, // Allow multiple tool calls
});
```

Compatibility
Padrone’s AI integration requires:
- Vercel AI SDK 5.x or 6.x (peer dependency)
- Zod 3.25+ or 4.x
Install the AI SDK if you haven’t already:
```sh
npm install ai @ai-sdk/anthropic
# or your preferred AI provider
```