# Testing

Padrone ships a `testCli()` utility that captures all I/O and provides a clean interface for assertions. It works with any test framework: `bun:test`, vitest, jest, `node:test`, and so on.

Import from `padrone/test`:

```ts
import { testCli } from 'padrone/test';
```

A basic test:

```ts
import { test, expect } from 'bun:test';
import { testCli } from 'padrone/test';
import { program } from './cli';

test('greet command', async () => {
  const result = await testCli(program).run('greet World');
  expect(result.args).toEqual({ name: 'World' });
  expect(result.result).toBe('Hello, World!');
});
```

`testCli()` returns a fluent builder with these methods:

| Method | Description |
| --- | --- |
| `.args(input)` | Set the CLI input string |
| `.env(vars)` | Set environment variables |
| `.prompt(answers)` | Provide mock answers for interactive prompts |
| `.config(files)` | Provide mock config file contents |
| `.stdin(data)` | Provide mock stdin data (piped input) |
| `.run(input?)` | Execute the command and return the result |
| `.repl(inputs)` | Run a REPL session with a sequence of inputs |

All builder methods are chainable, and `.run()` / `.repl()` return a `Promise`.

`.run()` returns a `TestCliResult`:

| Property | Type | Description |
| --- | --- | --- |
| `command` | `PadroneCommand` | The matched command |
| `args` | `unknown` | Validated arguments (`undefined` if validation failed) |
| `result` | `unknown` | Action handler return value |
| `issues` | `Array \| undefined` | Validation issues, if any |
| `stdout` | `unknown[]` | All values passed to `runtime.output()` |
| `stderr` | `string[]` | All strings passed to `runtime.error()` |
| `error` | `unknown` | The thrown error, if the command threw |
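As a sketch of consuming this shape, a small helper can check for failure once so individual tests can assert on `args` and `result` directly. The `TestCliResult` interface below is transcribed from the table above (with `command` left as `unknown` to stay self-contained), and the `expectSuccess` helper name is an illustration, not part of padrone:

```typescript
// Transcribed from the TestCliResult table above; `command` is left as
// `unknown` here to keep the sketch self-contained.
interface TestCliResult {
  command: unknown;
  args: unknown;
  result: unknown;
  issues: Array<{ message: string }> | undefined;
  stdout: unknown[];
  stderr: string[];
  error: unknown;
}

// Hypothetical helper: throw with a readable message unless the run
// succeeded, then hand back args/result for further assertions.
function expectSuccess(r: TestCliResult): { args: unknown; result: unknown } {
  if (r.issues?.length) {
    throw new Error(
      `validation failed: ${r.issues.map((i) => i.message).join('; ')}`,
    );
  }
  if (r.error !== undefined) {
    throw new Error(`command threw: ${String(r.error)}`);
  }
  return { args: r.args, result: r.result };
}
```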

Pass the full command path as the input string:

```ts
test('deploy to staging', async () => {
  const result = await testCli(program).run('deploy --env staging');
  expect(result.command.name).toBe('deploy');
  expect(result.args).toEqual({ env: 'staging' });
});

test('nested command', async () => {
  const result = await testCli(program).run('db migrate up --steps 3');
  expect(result.command.name).toBe('up');
  expect(result.args).toEqual({ steps: 3 });
});
```
Use `.env()` to set environment variables for a run:

```ts
test('reads API key from env', async () => {
  const result = await testCli(program)
    .env({ API_KEY: 'secret-123' })
    .run('deploy --env production');
  expect(result.args).toMatchObject({ apiKey: 'secret-123' });
});
```

Provide mock answers keyed by field name:

```ts
test('prompts for missing fields', async () => {
  const result = await testCli(program)
    .args('init')
    .prompt({ name: 'myapp', template: 'react' })
    .run();
  expect(result.args).toEqual({ name: 'myapp', template: 'react' });
});
```

To test config file loading, use `padroneConfig()` with a custom `loadConfig` function that returns mock data:

```ts
import { z } from 'zod';
import { createPadrone, padroneConfig } from 'padrone';

const program = createPadrone('myapp')
  .extend(padroneConfig({
    files: 'app.config.json',
    loadConfig: () => ({ port: 9090, host: 'example.com' }),
  }))
  .command('serve', (c) =>
    c
      .arguments(z.object({ port: z.number(), host: z.string() }))
      .action((args) => args)
  );

test('loads config file values', async () => {
  const result = await testCli(program).run('serve');
  expect(result.args).toMatchObject({ port: 9090 });
});
```
Use `.stdin()` to provide mock piped input:

```ts
test('reads piped input', async () => {
  const result = await testCli(program)
    .stdin('line1\nline2\nline3')
    .run('process');
  expect(result.result).toEqual(['line1', 'line2', 'line3']);
});
```
When validation fails, `result.issues` is populated and `result.args` is `undefined`:

```ts
test('rejects invalid port', async () => {
  const result = await testCli(program).run('serve --port abc');
  expect(result.issues).toBeDefined();
  expect(result.issues![0].message).toContain('Expected number');
});
```
When the command throws, the error is captured on `result.error`:

```ts
test('fails on production without --force', async () => {
  const result = await testCli(program).run('deploy --env production');
  expect(result.error).toBeDefined();
  expect(result.stderr).toContain('Production deploys require --force');
});
```

Captured output is stored as arrays (`stdout` for normal output, `stderr` for errors):

```ts
test('prints progress', async () => {
  const result = await testCli(program).run('build');
  expect(result.stdout).toContain('Building...');
  expect(result.stdout).toContain('Done!');
  expect(result.stderr).toHaveLength(0);
});
```

Use `.repl()` to simulate a sequence of REPL inputs:

```ts
test('REPL session', async () => {
  const { results, stdout, stderr } = await testCli(program)
    .repl(['greet World', 'add --a=2 --b=3']);
  expect(results).toHaveLength(2);
  expect(results[0].result).toBe('Hello, World!');
  expect(results[1].result).toBe(5);
});
```

The `TestReplResult` contains:

| Property | Type | Description |
| --- | --- | --- |
| `results` | `Array` | One entry per executed command |
| `stdout` | `unknown[]` | All output from the session |
| `stderr` | `string[]` | All errors from the session |

You can pass the input directly to `.run()` instead of using `.args()`:

```ts
// These are equivalent:
await testCli(program).args('serve --port 8080').run();
await testCli(program).run('serve --port 8080');
```

Provide context via the second argument to `.run()`:

```ts
test('uses context', async () => {
  const db = createMockDb();
  const result = await testCli(program).run('users list', {
    context: { db },
  });
  expect(result.result).toEqual(['alice', 'bob']);
});
```

Chain multiple builder methods for complex scenarios:

```ts
test('full integration', async () => {
  const result = await testCli(program)
    .args('deploy --env staging')
    .env({ API_KEY: 'test-key', CI: 'true' })
    .run();
  expect(result.result).toEqual({ deployed: true, region: 'us-east-1' });
});
```