# Testing Guide
Rally uses Node.js’s built-in test runner (`node:test`) and strict assertions (`node:assert/strict`) for all testing. This aligns with Rally’s philosophy of using production-quality dependencies and avoiding external test frameworks.
## Test Philosophy

Rally’s testing strategy follows these core principles:
- Error paths before happy paths — Every test suite starts with error cases
- Assume every input is wrong — Test invalid inputs, missing arguments, malformed data
- Exit codes matter — Tests verify stderr output AND exit codes
- Isolation — Every test runs in a clean environment with no shared state
- No mocking libraries — Use `node:test`’s built-in `mock` module
## Running Tests

### Run all tests

```sh
npm test
```

This runs three steps:
- JSX pre-build — Compiles `.jsx` UI components to `.js` via esbuild
- Non-UI tests — Runs all unit/integration tests in `test/*.test.js`
- UI tests — Runs Ink component tests in `test/ui/*.test.js`
### Run E2E tests

```sh
npm run test:e2e
```

End-to-end tests validate the full CLI workflow with real git operations.
### Run all tests (unit + E2E)

```sh
npm run test:all
```

### Run a single test file

```sh
node --test test/setup.test.js
```

### Run with coverage (Node 20+)

```sh
node --test --experimental-test-coverage ./test/*.test.js
```

### Watch mode (Node 20+)

```sh
node --test --watch ./test/*.test.js
```
## Test Structure

### Test Organization
Section titled “Test Organization”Tests are organized to mirror the module structure:
- `lib/setup.js` → `test/setup.test.js`
- `lib/onboard.js` → `test/onboard.test.js`
- `lib/dispatch.js` → `test/dispatch.test.js`
- `lib/config.js` → `test/config.test.js`
- `lib/worktree.js` → `test/worktree.test.js`
- `lib/ui/components/StatusMessage.jsx` → `test/ui/StatusMessage.test.js`

### Test Categories
- Unit tests (`test/*.test.js`) — Test individual modules and functions
- UI tests (`test/ui/*.test.js`) — Test Ink React components
- E2E tests (`test/e2e/*.test.js`) — Test the full CLI as a subprocess, organized by:
  - `cli/` — CLI command behavior (help, status, sessions, onboard)
  - `journeys/actions/` — User actions (clean, remove, continue, view-log, open-browser)
  - `journeys/navigation/` — Dashboard navigation (selection, help, refresh)
  - `journeys/lifecycle/` — Dispatch lifecycle (complete, cancel)
  - `journeys/display/` — UI display logic (empty-state, truncation, status-icons, column-widths)
  - `journeys/dispatch/` — Dispatch workflows (issue)
## Test Framework Stack

| Component | Tool | Purpose |
|---|---|---|
| Test runner | `node:test` | Built-in Node.js test runner (Node 18+) |
| Assertions | `node:assert/strict` | Strict equality, deep equality, throws checks |
| Mocking | `node:test` mock module | Mock `fs`, `child_process`, environment |
| UI testing | ink-testing-library | Render Ink components, query output, simulate input |
| Fixtures | `fs.mkdtempSync()` | Temporary directories for git operations |
## Writing Tests

### Basic Test Structure

All tests use ESM imports:
```js
import { test, describe, mock } from 'node:test';
import assert from 'node:assert/strict';
import { setup } from '../lib/setup.js';

describe('setup', () => {
  test('creates Rally directories when missing', () => {
    // Test implementation
  });

  test('returns true when setup was needed', () => {
    // Test implementation
  });
});
```
### Mocking Child Processes

Rally shells out to `git`, `gh`, and `npx`. The project uses dependency injection for testability—functions accept optional `_execFileSync` parameters:
```js
import { test, mock } from 'node:test';
import assert from 'node:assert/strict';
import { dispatch } from '../lib/dispatch.js';

test('dispatch: fetch issue metadata', (t) => {
  // Create a mock for execFileSync
  const mockExecFileSync = mock.fn((cmd, args) => {
    if (cmd === 'gh' && args[0] === 'issue') {
      return JSON.stringify({
        number: 42,
        title: 'Add user authentication',
        labels: ['enhancement']
      });
    }
    throw new Error(`Unexpected command: ${cmd}`);
  });

  // Pass mock via dependency injection
  const result = dispatch(options, { _execFileSync: mockExecFileSync });

  assert.equal(mockExecFileSync.mock.callCount(), 1);
});
```
### Mocking Filesystem

Config operations and symlink creation use `fs` operations. Tests mock `fs` methods:
```js
import { test, mock } from 'node:test';
import assert from 'node:assert/strict';
import fs from 'node:fs';
import { readConfig } from '../lib/config.js';

test('config: read missing config file', (t) => {
  mock.method(fs, 'readFileSync', (path) => {
    const err = new Error('ENOENT');
    err.code = 'ENOENT';
    throw err;
  });

  assert.throws(() => readConfig(), { code: 'ENOENT' });
});
```
### Temporary Directories for Git Tests

Integration tests that need real git repositories use temporary directories:
```js
import { test } from 'node:test';
import fs from 'node:fs';
import path from 'node:path';
import os from 'node:os';
import { execFileSync } from 'node:child_process';

test('worktree: create and remove', (t) => {
  const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'rally-test-'));

  t.after(() => {
    fs.rmSync(tmpDir, { recursive: true, force: true });
  });

  // Initialize a real git repo
  execFileSync('git', ['init'], { cwd: tmpDir });
  execFileSync('git', ['config', 'user.name', 'Test'], { cwd: tmpDir });
  execFileSync('git', ['config', 'user.email', 'test@example.com'], { cwd: tmpDir });

  // Test worktree operations
});
```
### Environment Helpers

Rally provides test helpers for managing temporary environments:
```js
import { test } from 'node:test';
import { withTempRallyHome } from './helpers/temp-env.js';

test('creates config in RALLY_HOME', (t) => {
  const tempDir = withTempRallyHome(t);
  // tempDir is now set as RALLY_HOME and will be cleaned up automatically
});
```
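The helper itself can be a thin wrapper over the patterns above. A hypothetical sketch (this is not Rally's actual `withTempRallyHome` implementation, just one way such a helper can be built):

```javascript
import fs from 'node:fs';
import os from 'node:os';
import path from 'node:path';

// Hypothetical sketch: create an isolated temp dir, point RALLY_HOME at it,
// and register cleanup on the node:test context so it runs after the test.
function withTempRallyHome(t) {
  const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'rally-home-'));
  const original = process.env.RALLY_HOME;

  process.env.RALLY_HOME = tempDir;

  t.after(() => {
    fs.rmSync(tempDir, { recursive: true, force: true });
    if (original === undefined) delete process.env.RALLY_HOME;
    else process.env.RALLY_HOME = original;
  });

  return tempDir;
}
```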
## Testing Ink Components

### Basic Component Testing
```js
import { describe, it, afterEach } from 'node:test';
import assert from 'node:assert/strict';
import React from 'react';
import { render, cleanup } from 'ink-testing-library';
import StatusMessage from '../../lib/ui/components/StatusMessage.js';

afterEach(() => {
  cleanup();
});

describe('StatusMessage', () => {
  it('renders success with green checkmark', () => {
    const { lastFrame } = render(
      React.createElement(StatusMessage, { type: 'success' }, 'Done')
    );
    const output = lastFrame();
    assert.ok(output.includes('✓'), 'should include ✓ icon');
    assert.ok(output.includes('Done'), 'should include children text');
  });
});
```
### Testing Interactive Components

For components with keyboard navigation, use `stdin` to simulate keypresses:
```js
test('Dashboard: arrow keys navigate', async () => {
  const { lastFrame, stdin } = render(
    React.createElement(Dashboard, { dispatches: mockDispatches })
  );

  stdin.write('\x1B[B'); // Down arrow (ANSI escape sequence)
  await new Promise(resolve => setTimeout(resolve, 100)); // Wait for re-render

  // Assert expected selection change
});
```
## Dependency Injection

Rally functions accept injectable dependencies via underscore-prefixed parameters for testing:
```js
await dispatchIssue({
  number: 42,
  _exec: mock.fn(() => JSON.stringify({ title: 'Fix bug' })),
});
```

This avoids mocking global modules and keeps tests isolated.
## Coverage Goals

- Minimum coverage: 80% across all modules
- Priority: Error paths over happy paths
- High-priority modules (`config.js`, `symlink.js`, `exclude.js`) target 90%+
### Measuring Coverage

```sh
node --test --experimental-test-coverage ./test/*.test.js
```

Output shows line, branch, and function coverage per file.
## CI Integration

All tests run on every PR via GitHub Actions (`.github/workflows/test.yml`):
- All tests must pass — Zero tolerance for failing tests on PR branches
- Coverage must be ≥80% — PRs that drop coverage below threshold are blocked
- Platform — Tests run on Linux (Ubuntu) in CI; Windows/macOS testing is manual QA
## Common Patterns

### Test Naming

Use descriptive names that explain the behavior or error condition:
```js
// Good
test('dispatch: error when issue not found', () => { /* ... */ });
test('config: parse valid YAML with all keys', () => { /* ... */ });

// Bad
test('test1', () => { /* ... */ });
test('dispatch works', () => { /* ... */ });
```
### Cleanup in `t.after()`

Always clean up resources to avoid polluting subsequent test runs:
```js
test('example test', (t) => {
  const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'rally-test-'));

  t.after(() => {
    fs.rmSync(tmpDir, { recursive: true, force: true });
  });

  // Test code...
});
```
### Environment Restoration

When mutating global state, restore it manually:
```js
test('platform detection', (t) => {
  const originalPlatform = process.platform;
  Object.defineProperty(process, 'platform', { value: 'win32' });

  t.after(() => {
    Object.defineProperty(process, 'platform', { value: originalPlatform });
  });

  // Test code...
});
```
## TDD Workflow

For every new command or module:
1. Write error case tests first — List all failure modes
2. Run tests (all fail) — Red phase
3. Implement minimal code to pass one test — Green phase
4. Refactor — Clean up implementation
5. Repeat for next error case
6. Write happy path tests last — After error handling is solid
Philosophy: Test the unhappy path first. Assume every input is wrong. Verify exit codes and stderr, not just stdout. Break things on purpose so they don’t break by accident.