
AI Integration

Padrone provides three ways to expose your CLI to AI assistants and external services:

  1. Model Context Protocol (MCP) (experimental) — Standard protocol supported by Claude, Cursor, Windsurf, and other AI tools. Works over HTTP or stdio.
  2. REST Server (experimental) — HTTP endpoints with OpenAPI docs. Each command becomes a route.
  3. Vercel AI SDK — Programmatic integration for building AI-powered applications.

Model Context Protocol (MCP) (experimental)


Experimental: This API is experimental and may change in future releases.

The Model Context Protocol is an open standard that lets AI assistants discover and use your CLI commands as tools. Padrone implements the 2025-11-25 MCP spec with Streamable HTTP and stdio transports.

Every Padrone program has a built-in mcp command:

# Start an MCP server over HTTP (default)
myapp mcp
# Start over stdio (for local tool integration)
myapp mcp stdio
# Custom port and host
myapp mcp --port 8080 --host 0.0.0.0
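
For stdio, the AI tool usually launches your CLI itself. As an illustrative example (assuming myapp is installed on your PATH), Claude Desktop is configured via its claude_desktop_config.json:

```json
{
  "mcpServers": {
    "myapp": {
      "command": "myapp",
      "args": ["mcp", "stdio"]
    }
  }
}
```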

When you run myapp mcp, Padrone:

  1. Collects all non-hidden commands that have an action or schema
  2. Exposes each as an MCP tool with a JSON Schema derived from your Zod definitions
  3. Handles the JSON-RPC protocol (initialize, tools/list, tools/call, ping, etc.)

For example, a CLI with greet and deploy commands becomes two MCP tools that AI assistants can discover and call.
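
For illustration, a tools/list exchange might look roughly like this — the client sends the first message and the server replies with the second. The message shapes follow the MCP JSON-RPC protocol; the descriptions and schema contents shown here are placeholders that depend on your Zod definitions:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "greet",
        "description": "Greet someone by name",
        "inputSchema": {
          "type": "object",
          "properties": { "name": { "type": "string" } },
          "required": ["name"]
        }
      },
      {
        "name": "deploy",
        "description": "Deploy the app",
        "inputSchema": { "type": "object" }
      }
    ]
  }
}
```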

You can also start the MCP server from code:

import { createPadrone } from 'padrone';
import * as z from 'zod/v4';

const program = createPadrone('myapp')
  .configure({ version: '1.0.0' })
  .command('greet', (c) =>
    c
      .arguments(z.object({ name: z.string().describe('Name to greet') }), { positional: ['name'] })
      .action((args) => `Hello, ${args.name}!`)
  );

// Start MCP server programmatically
await program.mcp({ port: 3000, host: '127.0.0.1' });

The .mcp() method and mcp command accept these options:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `transport` | `'http' \| 'stdio'` | `'http'` | Transport mode |
| `port` | `number` | `3000` | HTTP port |
| `host` | `string` | `'127.0.0.1'` | HTTP host |
| `basePath` | `string` | `'/mcp'` | HTTP endpoint path |
| `name` | `string` | program name | Server name |
| `version` | `string` | program version | Server version |
| `cors` | `string \| false` | `'*'` | CORS allowed origin, or `false` to disable |

Streamable HTTP (default) — Starts an HTTP server. Responds with application/json or text/event-stream (SSE) based on the client’s Accept header, per the MCP spec. Includes session management with MCP-Session-Id headers.

stdio — Communicates over stdin/stdout with newline-delimited JSON. Use this when the AI tool launches your CLI as a subprocess (e.g., Claude Desktop, mcp-cli).
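
The stdio framing can be sketched as follows — one JSON message per line. The function names here are illustrative, not part of Padrone's API:

```typescript
// Newline-delimited JSON-RPC framing, as used by the stdio transport:
// each message is serialized to a single line terminated by '\n'.
interface JsonRpcMessage {
  jsonrpc: '2.0';
  id?: number | string;
  method?: string;
  params?: unknown;
  result?: unknown;
}

function encodeMessage(msg: JsonRpcMessage): string {
  // One message per line on stdout
  return JSON.stringify(msg) + '\n';
}

function decodeMessages(chunk: string): JsonRpcMessage[] {
  // Split a stdin chunk into individual messages, skipping blank lines
  return chunk
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as JsonRpcMessage);
}
```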

To disable the built-in mcp command in cli():

program.cli({ mcp: false });

Commands are exposed as MCP tools using dot-separated names: nested.sub for a subcommand sub under nested. This follows the MCP tool naming spec ([A-Za-z0-9_\-\.]).
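
The mapping from command path to tool name can be sketched like this — the joining and validation logic is inferred from the description above, not taken from Padrone's source:

```typescript
// MCP tool names may only contain letters, digits, '_', '-', and '.'
const MCP_TOOL_NAME = /^[A-Za-z0-9_.-]+$/;

function toToolName(path: string[]): string {
  // Nested command paths join with dots: ['nested', 'sub'] -> 'nested.sub'
  const name = path.join('.');
  if (!MCP_TOOL_NAME.test(name)) {
    throw new Error(`Invalid MCP tool name: ${name}`);
  }
  return name;
}
```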

Use .describe() on your Zod fields and .configure({ description }) on commands — these become the tool descriptions that AI models read to understand your CLI.


REST Server (experimental)

Experimental: This API is experimental and may change in future releases.

Padrone can expose your CLI as a REST API with automatic OpenAPI documentation. Each command becomes an HTTP endpoint.

Every Padrone program has a built-in serve command:

# Start a REST server (default port 3000)
myapp serve
# Custom port and host
myapp serve --port 8080 --host 0.0.0.0
# Custom base path
myapp serve --base-path /api/

When you run myapp serve, Padrone:

  1. Collects all non-hidden commands that have an action or schema
  2. Maps each to a URL path (e.g., users list becomes /users/list)
  3. For each request, converts query params (GET) or JSON body (POST) to CLI flags and calls eval()
  4. Returns structured JSON responses
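
Step 3 can be sketched as follows. This is a hedged illustration of the basic idea — the real conversion may handle arrays, booleans, and repeated keys differently:

```typescript
// Convert GET query parameters into CLI-style flags before eval() runs,
// e.g. ?name=Alice&units=celsius -> ['--name', 'Alice', '--units', 'celsius']
function queryToFlags(query: URLSearchParams): string[] {
  const flags: string[] = [];
  for (const [key, value] of query) {
    flags.push(`--${key}`, value);
  }
  return flags;
}
```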

Commands configured with mutation: true only accept POST requests. This is useful for commands that create, update, or delete data:

const program = createPadrone('api')
  .command('users', (c) =>
    c
      .command('list', (c) =>
        c.action(() => db.users.findMany())
      )
      .command('create', (c) =>
        c
          .configure({ mutation: true })
          .arguments(z.object({ name: z.string(), email: z.string() }))
          .action((args) => db.users.create(args))
      )
  );

await program.serve({ port: 3000 });

// GET /users/list → 200 OK
// POST /users/create { "name": "Alice", "email": "alice@example.com" } → 200 OK
// GET /users/create?name=Alice → 405 Method Not Allowed

The mutation flag also affects MCP (sets annotations.destructiveHint) and Vercel AI SDK (defaults needsApproval to true).

await program.serve({
  port: 3000,
  host: '127.0.0.1',
  basePath: '/api/',
  cors: 'https://example.com',
});
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `port` | `number` | `3000` | HTTP port |
| `host` | `string` | `'127.0.0.1'` | HTTP host |
| `basePath` | `string` | `'/'` | Base path prefix for all routes |
| `cors` | `string \| false` | `'*'` | CORS allowed origin, or `false` to disable |
| `builtins` | `object` | all `true` | Toggle built-in endpoints (health, help, schema, docs) |
| `onRequest` | `function` | — | Hook to run before each request (auth, rate-limiting) |
| `onError` | `function` | — | Custom error response handler |
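
As a sketch of what an onRequest auth hook could look like — the exact hook signature is an assumption here, modeled as a function that returns a response object to short-circuit the request, or undefined to let it through:

```typescript
// Hypothetical bearer-token check for an onRequest-style hook.
// The request/response shapes below are illustrative, not Padrone's API.
type MinimalRequest = { headers: Record<string, string | undefined> };
type MinimalResponse = { status: number; body: unknown };

function bearerAuth(token: string) {
  return (req: MinimalRequest): MinimalResponse | undefined => {
    const header = req.headers['authorization'] ?? '';
    if (header === `Bearer ${token}`) return undefined; // allow request
    return { status: 401, body: { ok: false, error: 'unauthorized' } };
  };
}
```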
| Endpoint | Description |
| --- | --- |
| `GET /_health` | Returns `{ status: "ok" }` |
| `GET /_help` | Program help (JSON or markdown based on `Accept` header) |
| `GET /_help/:path` | Command-specific help |
| `GET /_schema` | JSON Schema map of all commands |
| `GET /_schema/:path` | JSON Schema for a single command |
| `GET /_docs` | Interactive API documentation (powered by Scalar) |
| `GET /_openapi` | Raw OpenAPI 3.1.0 JSON spec |

Success (200):

{ "ok": true, "result": <action return value> }

Validation error (400):

{ "ok": false, "error": "validation", "issues": [{ "path": ["name"], "message": "Required" }] }

Not found (404):

{ "ok": false, "error": "not_found", "message": "Command not found: users update" }

To disable the built-in serve command in cli():

program.cli({ serve: false });

Vercel AI SDK

Padrone provides first-class support for the Vercel AI SDK, allowing you to expose your CLI commands as tools that AI models can use.

The .tool() method converts your Padrone program into a Vercel AI SDK compatible tool. This lets AI assistants:

  • Understand your CLI’s capabilities through the schema
  • Execute commands with proper type validation
  • Receive structured responses

import { createPadrone } from 'padrone';
import * as z from 'zod/v4';

// Define your CLI
const weatherCli = createPadrone('weather')
  .command('current', (c) =>
    c
      .configure({ description: 'Get current weather for a city' })
      .arguments(
        z.object({
          city: z.string().describe('City name'),
          units: z.enum(['celsius', 'fahrenheit']).default('celsius'),
        }),
        { positional: ['city'] }
      )
      .action(async (args) => {
        // Fetch weather data...
        return {
          city: args.city,
          temperature: 22,
          units: args.units,
          condition: 'Sunny',
        };
      })
  )
  .command('forecast', (c) =>
    c
      .configure({ description: 'Get weather forecast' })
      .arguments(
        z.object({
          city: z.string().describe('City name'),
          days: z.number().default(3).describe('Number of days'),
        }),
        { positional: ['city'] }
      )
      .action(async (args) => {
        return {
          city: args.city,
          forecast: [
            { day: 'Mon', temp: 22 },
            { day: 'Tue', temp: 24 },
            { day: 'Wed', temp: 20 },
          ].slice(0, args.days),
        };
      })
  );

// Convert to AI tool
const weatherTool = weatherCli.tool();

Pass the tool to any Vercel AI SDK function:

import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = await streamText({
  model: anthropic('claude-sonnet-4-20250514'),
  prompt: "What's the weather like in London?",
  tools: {
    weather: weatherTool,
  },
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

The AI model will:

  1. Understand the available commands from the tool schema
  2. Choose the appropriate command (weather current)
  3. Provide the required args (city: 'London')
  4. Execute the command and use the response

Your action handlers should return data that the AI can use:

.action(async (args) => {
  // Return structured data for the AI
  return {
    status: 'success',
    data: { /* ... */ },
  };
})

The return value is passed back to the AI model, allowing it to incorporate the results into its response.

You can provide multiple Padrone CLIs as separate tools:

const weatherCli = createPadrone('weather').command(/* ... */);
const calendarCli = createPadrone('calendar').command(/* ... */);
const notesCli = createPadrone('notes').command(/* ... */);

const result = await streamText({
  model: yourModel,
  prompt: "Check the weather in Paris and add it to my calendar",
  tools: {
    weather: weatherCli.tool(),
    calendar: calendarCli.tool(),
    notes: notesCli.tool(),
  },
});

The .tool() method generates a JSON schema from your Zod definitions. The descriptions you provide with .describe() help the AI understand how to use each argument:

z.object({
  city: z.string().describe('The name of the city to get weather for'),
  units: z.enum(['celsius', 'fahrenheit'])
    .default('celsius')
    .describe('Temperature units (celsius or fahrenheit)'),
})

Good descriptions improve AI accuracy when selecting and using your tools.
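
For the Zod object above, the generated JSON Schema would look approximately like this (the exact output depends on Padrone's schema conversion, so treat this as a sketch):

```json
{
  "type": "object",
  "properties": {
    "city": {
      "type": "string",
      "description": "The name of the city to get weather for"
    },
    "units": {
      "type": "string",
      "enum": ["celsius", "fahrenheit"],
      "default": "celsius",
      "description": "Temperature units (celsius or fahrenheit)"
    }
  },
  "required": ["city"]
}
```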

Handle errors gracefully so the AI can respond appropriately:

.action(async (args) => {
  try {
    const data = await fetchWeather(args.city);
    return { success: true, data };
  } catch (error) {
    return {
      success: false,
      error: `Could not fetch weather for ${args.city}`,
    };
  }
})

Here’s a complete example of a task management CLI exposed as an AI tool:

import { streamText } from 'ai';
import { createPadrone } from 'padrone';
import * as z from 'zod/v4';

const tasks = createPadrone('tasks')
  .command('add', (c) =>
    c
      .configure({ description: 'Add a new task' })
      .arguments(
        z.object({
          title: z.string().describe('Task title'),
          priority: z.enum(['low', 'medium', 'high']).default('medium'),
          dueDate: z.string().optional().describe('Due date (YYYY-MM-DD)'),
        }),
        { positional: ['title'] }
      )
      .action((args) => {
        // Save task to database...
        return { id: 'task-1', ...args, status: 'created' };
      })
  )
  .command('list', (c) =>
    c
      .configure({ description: 'List all tasks' })
      .arguments(
        z.object({
          status: z.enum(['all', 'pending', 'completed']).default('all'),
        })
      )
      .action((args) => {
        // Fetch from database...
        return {
          tasks: [
            { id: '1', title: 'Buy groceries', status: 'pending' },
            { id: '2', title: 'Call mom', status: 'completed' },
          ],
        };
      })
  )
  .command('complete', (c) =>
    c
      .configure({ description: 'Mark a task as completed' })
      .arguments(
        z.object({
          id: z.string().describe('Task ID'),
        }),
        { positional: ['id'] }
      )
      .action((args) => {
        return { id: args.id, status: 'completed' };
      })
  );

// Use with AI
const result = await streamText({
  model: yourModel,
  prompt: "Add a high priority task to buy milk, then show me all my tasks",
  tools: {
    tasks: tasks.tool(),
  },
  maxSteps: 5, // Allow multiple tool calls
});

Padrone’s AI integration requires:

  • Vercel AI SDK 5.x or 6.x (peer dependency)
  • Zod 3.25+ or 4.x

Install the AI SDK if you haven’t already:

npm install ai @ai-sdk/anthropic
# or your preferred AI provider