This commit is contained in:
alex wiesner
2026-03-13 13:28:20 +00:00
parent 95974224f8
commit cb208a73c4
62 changed files with 1105 additions and 3490 deletions

View File

@@ -0,0 +1,29 @@
---
name: brainstorming
description: Planner-led discovery workflow for clarifying problem shape, options, and decision-ready direction
permalink: opencode-config/skills/brainstorming/skill
---
# Brainstorming
Use this skill when requests are unclear, options are broad, or design tradeoffs are unresolved.
## Workflow
1. Clarify objective, constraints, and non-goals.
2. Generate multiple viable approaches (not one-path thinking).
3. Compare options by risk, complexity, verification cost, and reversibility.
4. Identify unknowns that need research before execution.
5. Converge on a recommended direction with explicit rationale.
## Planner Ownership
- Keep brainstorming in planning mode; do not start implementation.
- Use subagents for independent research lanes when needed.
- Translate outcomes into memory-backed planning artifacts (`plans/<slug>`, findings/risks).
## Output
- Short options table (approach, pros, cons, risks).
- Recommended path and why.
- Open questions that block approval.

View File

@@ -12,18 +12,21 @@ Use this skill when you need to add or revise an agent definition in this repo.
- **Agents** define runtime behavior and permissions in `agents/*.md`.
- **Skills** are reusable instruction modules under `skills/*/SKILL.md`.
- Do not treat agent creation as skill creation; each has different files, checks, and ownership.
## Source of Truth
1. Agent definition file: `agents/<agent-name>.md`
2. Operating roster and workflow contract: `AGENTS.md`
3. Runtime overrides and provider policy: `opencode.jsonc`
4. Workflow entrypoints: `commands/*.md`
Notes:
- This repo uses two primary agents: `planner` and `builder`.
- Dispatch permissions live in the primary agent that owns the subagent, not in a central dispatcher.
- `planner` may dispatch only `researcher`, `explorer`, and `reviewer`.
- `builder` may dispatch only `coder`, `tester`, `reviewer`, and `librarian`.
## Agent File Conventions
@@ -31,36 +34,40 @@ For `agents/<agent-name>.md`:
- Use frontmatter first, then concise role instructions.
- Keep tone imperative and operational.
- Include only permissions and behavior needed for the role.
- Define an explicit `model` for every agent and keep it on a GitHub Copilot model.
- Use only explicit `allow` or `deny` permissions; do not use `ask`.
- Include only the tools and permissions needed for the role.
- Keep instructions aligned with the planner -> builder contract in `AGENTS.md`.
Typical frontmatter fields in this repo include:
- `description`
- `mode`
- `model`
- `temperature`
- `steps`
- `tools`
- `permission`
- `permalink`
Note: `agents/lead.md` is the only `mode: primary` agent. New agents should normally mirror a comparable subagent and use `mode: subagent` with an explicit `model`.
Mirror nearby agent files instead of inventing new metadata patterns.
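A minimal frontmatter sketch using the fields listed above; the specific values (description text, model id, temperature, tool and permission entries) are illustrative assumptions, not repo defaults:

```yaml
---
description: Investigates failing tests and reports root causes   # illustrative
mode: subagent
model: github-copilot/gpt-4.1        # placeholder model id
temperature: 0.1
tools:
  read: true
  bash: true
permission:
  edit: deny                         # explicit allow/deny only; never `ask`
permalink: opencode-config/agents/example-agent
---
```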
## Practical Workflow (Create or Update)
1. Inspect the relevant primary agent file and at least one comparable peer in `agents/*.md`.
2. Create or edit `agents/<agent-name>.md` with matching local structure.
3. If the agent is a subagent, update the owning primary agent's `permission.task` allowlist.
4. Update `AGENTS.md` so the roster, responsibilities, and workflow rules stay synchronized.
5. Review `commands/*.md` if the new agent changes how `/init`, `/plan`, `/build`, or `/continue` should behave.
6. Review `opencode.jsonc` for conflicting overrides, disable flags, or provider drift.
## Manual Verification Checklist (No Validation Script)
Run this checklist before claiming completion:
- [ ] `agents/<agent-name>.md` exists and frontmatter is valid and consistent with peers.
- [ ] Agent instructions clearly define role, scope, escalation rules, and constraints.
- [ ] The owning primary agent includes the correct `permission.task` rule for the subagent.
- [ ] `AGENTS.md` roster row exists and matches the agent name, role, and model.
- [ ] `commands/*.md` and `opencode.jsonc` still reflect the intended workflow.
- [ ] Terminology stays consistent: agents in `agents/*.md`, skills in `skills/*/SKILL.md`.

View File

@@ -42,6 +42,7 @@ permalink: opencode-config/skills/<skill-name>/skill
- Lead with when to load and the core workflow.
- Prefer short checklists over long prose.
- Include only repo-relevant guidance.
- Keep the planner/builder operating model in mind when a skill touches workflow behavior.
## Companion Notes (`*.md` in skill folder)
@@ -62,3 +63,22 @@ Add companion markdown files only when detail would bloat `SKILL.md` (examples,
- file exists at `skills/<name>/SKILL.md`
- folder name == frontmatter `name`
- no OpenAI/Codex-only artifacts introduced
7. If the skill changes agent workflow or command behavior:
- Update the **Skills** table, **Agent Skill-Loading Contract**, and **TDD Default Policy** in `AGENTS.md`.
- Confirm `commands/*.md` and any affected `agents/*.md` prompts stay aligned.
- If the skill involves parallelization, verify it enforces safe-parallelization rules (no parallel mutation on shared files, APIs, schemas, or verification steps).
- If the skill involves code changes, verify it references the TDD default policy and its narrow exceptions.
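The structural checks above (file exists, folder name matches frontmatter `name`) can be sketched as a small shell helper; the function name and message wording are illustrative:

```shell
# Hypothetical helper for the structural skill checks: SKILL.md exists and
# the folder name matches the frontmatter `name` field.
check_skill() {
  dir="$1"                                      # e.g. skills/my-skill
  file="$dir/SKILL.md"
  [ -f "$file" ] || { echo "missing: $file"; return 1; }
  folder=$(basename "$dir")
  fm_name=$(sed -n 's/^name:[[:space:]]*//p' "$file" | head -n 1)
  if [ "$folder" = "$fm_name" ]; then
    echo "ok: $dir"
  else
    echo "name mismatch: folder=$folder frontmatter=$fm_name"
    return 1
  fi
}
```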
## Language/Ecosystem Skill Pattern
When adding a new language or ecosystem skill (e.g., `rust-development`, `go-development`), follow this template:
1. **Name**: `<language>-development` (kebab-case).
2. **Load trigger**: presence of the language's project file(s) or source files as primary source.
3. **Defaults table**: one row per concern — package manager, linter/formatter, test runner, type checker (if applicable).
4. **Core workflow**: numbered steps for bootstrap, lint, format, test, add-deps, and any lock/check step.
5. **Conventions**: 3-5 bullets on config file preferences, execution patterns, and version pinning.
6. **Docker integration**: one paragraph on base image and cache strategy.
7. **Red flags**: 3-5 bullets on common anti-patterns.
8. **AGENTS.md updates**: add the skill to the **Ecosystem Skills** table and add load triggers for `planner`, `builder`, `coder`, and `tester` in the **Agent Skill-Loading Contract**.
9. **Agent prompt updates**: add the skill trigger to `agents/planner.md`, `agents/builder.md`, `agents/coder.md`, and `agents/tester.md`.

View File

@@ -1,50 +1,38 @@
---
name: dispatching-parallel-agents
description: Safely parallelize independent lanes with isolation checks, explicit ownership, and single-agent integration
permalink: opencode-config/skills/dispatching-parallel-agents/skill
---
# Dispatching Parallel Agents
Use this skill before parallel fan-out.
## Isolation Test (Required)
Parallel lanes are allowed only when lanes do not share:
- files under active mutation
- APIs or schemas being changed
- verification steps that must run in sequence
If any of these overlap, run sequentially.
## Workflow
1. Define lanes with explicit scope, inputs, and outputs.
2. Assign a single integrator (usually builder) for merge and final validation.
3. Require each lane to provide verification evidence, not just code output.
4. Integrate in dependency order.
5. Run final end-to-end verification after integration.
## Planner/Builder Expectations
- Planner: design parallel lanes only when isolation is demonstrable.
- Builder: load this skill before fan-out and enforce lane boundaries strictly.
## Red Flags
- Two lanes editing the same contract.
- Shared test fixtures causing non-deterministic outcomes.
- Missing integrator ownership.

View File

@@ -1,69 +0,0 @@
---
name: doc-coverage
description: Documentation coverage checklist and update procedures — load when completing
a feature or change set
permalink: opencode-config/skills/doc-coverage/skill
---
## When to Use
Load this skill when a feature or change set is nearing completion. Documentation is a **completion gate** — a task is not done until docs are handled.
## Coverage Checklist
For every completed change set, verify documentation coverage:
### 1. README
- [ ] Does the README reflect the current state of the project?
- [ ] Are new features, commands, or configuration options documented?
- [ ] Are removed features cleaned up from the README?
### 2. Docs directory (`docs/*`)
- [ ] Are there relevant docs files that need updating?
- [ ] Do new features need their own doc page?
- [ ] Are API changes reflected in API documentation?
### 3. Instruction File
Check `AGENTS.md` as the single instruction source:
- Does `AGENTS.md` exist and contain current project info?
- If legacy tool-specific instruction files exist, has their durable repo guidance been consolidated into `AGENTS.md`?
- Does the instruction file contain:
- Project purpose and overview
- Tech stack and architecture
- Coding conventions
- Build/test/lint commands
- Project structure
- Is the instruction file in sync with current project state?
**Anti-patterns:**
- Mirrored instruction files requiring parallel maintenance
- Instruction file is stale or empty
- Instruction file duplicates basic-memory project note content (plans, research)
### 4. Inline documentation
- [ ] Are complex functions/components documented with comments explaining **why**, not **what**?
- [ ] Are public APIs documented with parameter descriptions?
## Update Procedure
1. Review the list of changed files and their purpose.
2. Identify which documentation files are affected.
3. Read the current state of each affected doc file.
4. Update docs to reflect the implemented changes — keep descriptions accurate and concise.
5. If a change removes functionality, remove or update the corresponding documentation.
6. If creating a new feature, add documentation in the most appropriate location.
## Anti-patterns
- **Never leave stale docs.** If you changed behavior, the docs must change too.
- **Never create placeholder docs.** "TODO: document this" is not documentation.
- **Never duplicate content across doc files.** Link to the canonical source instead.
- **Never wait for the user to ask.** If docs need updating, update them proactively as part of the change set.
## Delegation
- The **librarian** subagent is the specialist for documentation work.
- Lead should delegate doc coverage review to librarian after coder completes implementation.
- Librarian reads the changes, identifies doc gaps, and writes/updates documentation.

View File

@@ -0,0 +1,37 @@
---
name: docker-container-management
description: Reusable Docker container workflow for build, test, and dev tasks in containerized repos
permalink: opencode-config/skills/docker-container-management/skill
---
# Docker Container Management
Load this skill when a repo uses Docker/docker-compose for builds, tests, or local dev, or when a task involves containerized workflows.
## Core Workflow
1. **Detect** — look for `Dockerfile`, `docker-compose.yml`/`compose.yml`, or `.devcontainer/` in the repo root.
2. **Prefer compose** — use `docker compose` (v2 CLI) over raw `docker run` when a compose file exists.
3. **Ephemeral containers** — default to `--rm` for one-off commands. Avoid leaving stopped containers behind.
4. **Named volumes over bind-mounts** for caches (e.g., package manager caches). Use bind-mounts only for source code.
5. **No host-path writes outside the repo** — all volume mounts must target paths inside the repo root or named volumes. This preserves `external_directory: deny`.
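Steps 1-3 can be sketched as a helper that prefers compose and falls back to an ephemeral `docker run`; the `app` service name and `alpine` image are placeholder assumptions:

```shell
# Prefer `docker compose run --rm` when a compose file exists (steps 1-2);
# otherwise fall back to an ephemeral `docker run` (step 3).
run_in_container() {
  if [ -f docker-compose.yml ] || [ -f compose.yml ]; then
    docker compose run --rm app sh -c "$*"
  else
    docker run --rm -v "$(pwd):/app" -w /app alpine sh -c "$*"
  fi
}
```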
## Path and Volume Constraints
- Mount the repo root as the container workdir: `-v "$(pwd):/app" -w /app`.
- Never mount host paths outside the repository (e.g., `~/.ssh`, `/var/run/docker.sock`) unless the plan explicitly approves it with a stated reason.
- If root-owned artifacts appear after container runs, document cleanup steps (see `main/knowledge/worktree-cleanup-after-docker-owned-artifacts`).
## Agent Guidance
- **planner**: Use Docker during planning for context gathering and inspection (e.g., `docker compose config`, `docker ps`, `docker image ls`, `docker network ls`, checking container health or logs). Do not run builds, installs, tests, deployments, or any implementation-level commands — those belong to builder/tester/coder.
- **builder/coder**: Run builds and install steps inside containers. Prefer `docker compose run --rm <service> <cmd>` for one-off tasks.
- **tester**: Run test suites inside the same container environment used by CI. Capture container exit codes as verification evidence.
- **coder**: When writing Dockerfiles or compose files, keep layers minimal, pin base image tags, and use multi-stage builds when the final image ships.
## Red Flags
- `docker run` without `--rm` in automation scripts.
- Bind-mounting sensitive host paths (`/etc`, `~/.config`, `/var/run/docker.sock`).
- Building images without a `.dockerignore`.
- Using `latest` tag for base images in production Dockerfiles.
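An illustrative compose fragment showing the patterns above (pinned base image, source bind-mounted from inside the repo, named volume for the cache); the service, image, and volume names are assumptions:

```yaml
services:
  app:
    image: python:3.12-slim        # pinned tag, never `latest`
    working_dir: /app
    volumes:
      - .:/app                     # source code: bind mount inside the repo
      - pkg-cache:/root/.cache     # package cache: named volume
volumes:
  pkg-cache:
```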

View File

@@ -1,64 +0,0 @@
---
name: executing-plans
description: Execute an approved implementation plan from basic-memory with task tracking,
verification, and blocker handling
permalink: opencode-config/skills/executing-plans/skill
---
# Executing Plans
## Overview
Use this skill when a plan already exists in local basic-memory `plans/` notes and the goal is to execute it safely and completely.
Core workflow:
- Read the plan
- Critically review before starting
- Create or update the plan note checklist in basic-memory
- Execute tasks one by one
- Run the verifications specified by the plan
- Stop on blockers instead of guessing
## Step 1: Load and Review the Plan
1. Read the target note from basic-memory project `plans/` (for example, `plans/<feature-name>`).
2. Review the plan critically before coding.
3. Identify gaps, contradictions, unclear steps, or missing prerequisites.
4. If concerns exist, raise them before implementation.
5. If the plan is sound, create/update the plan note checklist in basic-memory to mirror executable tasks.
## Step 2: Execute Tasks Sequentially
For each task in order:
1. Mark one task as `in_progress`.
2. Follow the plan steps exactly for that task.
3. Run the verifications specified for that task (tests/checks/manual verification).
4. If verification passes, mark task `completed` and continue.
5. Keep only one active task at a time unless the plan explicitly allows parallel work.
## Step 3: Complete the Branch Workflow
After all tasks are completed and verified:
- Use `git-workflow` for branch finish options (merge, PR, keep for later, or discard with confirmation).
- Record implementation outcomes back to the relevant basic-memory `plans/` note when requested.
## Blocker Rules (Stop Conditions)
Stop execution immediately and ask for clarification when:
- A blocker prevents progress (missing dependency, failing prerequisite, unavailable environment)
- A plan instruction is unclear or conflicts with other instructions
- Plan gaps prevent safe implementation
- Required verification repeatedly fails
Do not guess through blockers.
## Worktree and Branch Safety
- Follow worktree-first conventions: execute implementation from the feature worktree, not the primary tree on a base branch.
- Never start implementation directly on `main`/`master` (or the repository's active base branch) without explicit user consent.
## Related Skills
- `subagent-driven-development` — use when the work should be split across specialized agents
- `writing-plans` — use when the plan must be created or rewritten before execution
- `git-workflow` — use to complete branch/PR flow after implementation

View File

@@ -1,128 +0,0 @@
---
name: git-workflow
description: Procedures for git commits, worktrees, branches, and GitHub PRs — load
before any git operation
permalink: opencode-config/skills/git-workflow/skill
---
## Git Commit Procedure
1. Run `git status` to see all untracked and modified files.
2. Run `git diff` (staged + unstaged) to review changes that will be committed.
3. Run `git log --oneline -5` to see recent commit message style.
4. Draft a Conventional Commit message (`feat:`, `fix:`, `chore:`, `refactor:`, `docs:`, `test:`):
- Focus on **why**, not **what**.
- 1-2 sentences max.
- Match the repository's existing style.
5. Check for secrets: do NOT commit `.env`, credentials, or key files.
6. The managed per-repo basic-memory project directory is `<repo>/.memory/`; do not edit managed `.memory/*` files directly. Older repo-local memory artifacts (including `.memory.legacy/` and other legacy workflow contents) are non-authoritative; do not edit them unless explicitly migrating historical content into basic-memory.
7. Stage relevant files: `git add <files>` (not blindly `git add .`).
8. Commit: `git commit -m "<message>"`.
9. Run `git status` after commit to verify success.
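The procedure above, condensed into a hypothetical helper covering steps 1-4 and 7-9; reviewing the diff and checking for secrets (steps 2 and 5) still require human judgment:

```shell
# Hypothetical condensation of the commit procedure. The function name is
# an assumption; it stages only the named file, never a blind `git add .`.
commit_change() {
  file="$1"; msg="$2"
  git status --short
  git diff -- "$file"                  # review the change before staging
  git log --oneline -5 2>/dev/null     # match existing message style
  git add "$file"                      # stage only the relevant file
  git commit -m "$msg"
  git status --short                   # verify the commit succeeded
}
```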
## Git Worktree Procedure
### When to use worktrees:
- Always use a worktree for new feature work to keep `main` clean.
- **One worktree per independent workstream.** If implementing multiple unrelated features, create separate worktrees for each.
### Deciding on worktree count:
- **1 worktree**: Single feature, or 2 tightly-coupled features sharing state/files.
- **2+ worktrees**: Features that touch different domains, have different risk profiles, or could ship independently. Each gets its own worktree, branch, and PR.
### Creating a worktree for a new feature:
```bash
# From project root
mkdir -p .worktrees
git check-ignore -q .worktrees || { printf "Add .worktrees/ to .gitignore before continuing.\n"; exit 1; }
git worktree add .worktrees/<feature-name> -b <branch-name>
```
Before starting feature implementation in the new worktree:
- Verify `.worktrees/` is the project-local location and ignored by git before creating or reusing worktrees.
- Run project-declared setup/config scripts if the worktree needs dependencies or generated files.
- Run a baseline verification (project-declared check/test/lint scripts) to confirm the branch is clean before making changes.
- If baseline verification fails, stop and diagnose the environment or branch state before coding.
### Creating multiple worktrees for independent workstreams:
```bash
# From project root — create all worktrees upfront
git worktree add .worktrees/<workstream-1> -b feat/<workstream-1>
git worktree add .worktrees/<workstream-2> -b feat/<workstream-2>
```
### Working in a worktree:
- All file edits, test runs, and dev server starts must use the worktree path.
- Example: `workdir="/path/to/project/.worktrees/my-feature"` for all bash commands.
- **Each coder invocation must target a specific worktree** — never mix worktrees in one coder dispatch.
### After implementation and tests pass: choose one finish path
1. **Merge locally**
- Merge `<branch-name>` into your tracked/current base branch, then remove the feature worktree.
- If the branch is fully integrated and no longer needed, delete it.
2. **Push and open PR**
- Push `<branch-name>` and create a PR to the base branch using the GitHub PR procedure below.
- Keep the worktree until review/merge is complete, then remove the worktree and delete the merged branch.
3. **Keep branch/worktree for later**
- Leave branch and worktree in place when work is paused or awaiting input.
- Record the next step and expected resume point so cleanup is not forgotten.
4. **Discard work (destructive)**
- Only for work you explicitly want to throw away.
- Require typed confirmation before running destructive commands: `Type exactly: DISCARD <branch-name>`.
- After confirmation, remove the worktree and force-delete the unmerged branch with `git branch -D <branch-name>`; this cannot be undone from git alone once commits are unreachable.
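Finish path 4 can be sketched as a guard function; the function name and abort message are illustrative assumptions:

```shell
# Sketch of the destructive discard path with the typed-confirmation guard.
discard_worktree() {
  branch="$1"; worktree=".worktrees/$2"
  printf 'Type exactly: DISCARD %s\n' "$branch"
  read -r answer
  if [ "$answer" != "DISCARD $branch" ]; then
    echo "aborted: confirmation did not match"
    return 1
  fi
  git worktree remove --force "$worktree"
  git branch -D "$branch"        # unmerged delete; unrecoverable from git alone
}
```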
### Example local-merge cleanup flow:
```bash
# From the primary working tree
git checkout <base-branch>
git merge <branch-name>
git worktree remove .worktrees/<feature-name>
git branch -d <branch-name> # optional cleanup when fully merged
```
### Completing multiple worktrees (independent PRs):
Complete and merge each worktree independently. If workstream-2 depends on workstream-1, merge workstream-1 first, then rebase workstream-2 before merging.
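A sketch of that dependent finish order, assuming the branch and directory layout from the earlier examples (`feat/*` branches under `.worktrees/`); the function name is illustrative:

```shell
# Merge workstream-1 first, rebase workstream-2 on the updated base,
# then merge it and clean up worktrees and branches.
finish_dependent() {
  base="$1"; ws1="$2"; ws2="$3"
  git checkout "$base"
  git merge --no-edit "feat/$ws1"
  git -C ".worktrees/$ws2" rebase "$base"    # replay ws2 on top of merged ws1
  git merge --no-edit "feat/$ws2"
  git worktree remove ".worktrees/$ws1"
  git worktree remove ".worktrees/$ws2"
  git branch -d "feat/$ws1" "feat/$ws2"
}
```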
## GitHub PR Procedure
### Push and create PR:
```bash
# Push branch
git push -u origin <branch-name>
# Create PR with heredoc body
gh pr create --title "<title>" --body "$(cat <<'EOF'
## Summary
- <bullet 1>
- <bullet 2>
## Changes
- <file/area>: <what changed>
## Testing
- <how it was validated>
EOF
)"
```
### Check CI status:
```bash
gh run list # List recent workflow runs
gh run view <run-id> # View specific run details
gh pr checks <pr-number> # Check statuses on a PR
```
### Issue operations:
```bash
gh issue list # List open issues
gh issue view <number> # View specific issue
gh issue comment <number> -b "<comment>" # Add comment
```
## Safety Rules
- **Never `git push --force` to `main`/`master`** unless the user explicitly confirms.
- **Never skip hooks** (`--no-verify`) unless the user explicitly requests it.
- **Never `git commit --amend`** unless: (1) explicitly requested OR pre-commit hook auto-modified files, (2) HEAD was created in this session, AND (3) commit has NOT been pushed to remote.
- If commit fails due to pre-commit hook, fix the issue and create a NEW commit.

View File

@@ -0,0 +1,45 @@
---
name: javascript-typescript-development
description: JS/TS ecosystem defaults and workflows using bun for runtime/packaging and biome for linting/formatting
permalink: opencode-config/skills/javascript-typescript-development/skill
---
# JavaScript / TypeScript Development
Load this skill when a repo or lane involves JS/TS code (presence of `package.json`, `tsconfig.json`, or `.ts`/`.tsx`/`.js`/`.jsx` files as primary source).
## Defaults
| Concern | Tool | Notes |
| --- | --- | --- |
| Runtime + package manager | `bun` | Replaces node+npm/yarn/pnpm for most tasks |
| Linting + formatting | `biome` | Replaces eslint+prettier |
| Test runner | `bun test` | Built-in; use vitest/jest only if repo already configures them |
| Type checking | `tsc --noEmit` | Always run before completion claims |
## Core Workflow
1. **Bootstrap** — `bun install` to install dependencies.
2. **Lint** — `bunx biome check .` before committing.
3. **Format** — `bunx biome format . --write` (or `--check` in CI).
4. **Test** — `bun test` with the repo's existing config. Follow TDD default policy.
5. **Add dependencies** — `bun add <pkg>` (runtime) or `bun add -D <pkg>` (dev).
6. **Type check** — `bunx tsc --noEmit` for TS repos.
## Conventions
- Prefer `biome.json` for lint/format config. Do not add `.eslintrc` or `.prettierrc` unless the repo already uses them.
- Use `bun run <script>` to invoke `package.json` scripts.
- Prefer ES modules (`"type": "module"` in `package.json`).
- Pin Node/Bun version via `.node-version` or `package.json` `engines` when deploying.
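A minimal `biome.json` sketch matching the first convention above; the specific settings are illustrative assumptions, not repo policy:

```json
{
  "formatter": { "enabled": true, "indentStyle": "space" },
  "linter": { "enabled": true, "rules": { "recommended": true } }
}
```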
## Docker Integration
When the repo runs JS/TS inside Docker, use `oven/bun` as the base image. Mount a named volume for `node_modules` or use `bun install --frozen-lockfile` in CI builds.
## Red Flags
- Using `npm`/`yarn`/`pnpm` when `bun` is available and the project uses it.
- Running `eslint` or `prettier` when `biome` is configured.
- Missing `bun.lockb` after dependency changes.
- Skipping `tsc --noEmit` in TypeScript repos.

View File

@@ -0,0 +1,44 @@
---
name: python-development
description: Python ecosystem defaults and workflows using uv for packaging and ruff for linting/formatting
permalink: opencode-config/skills/python-development/skill
---
# Python Development
Load this skill when a repo or lane involves Python code (presence of `pyproject.toml`, `setup.py`, `requirements*.txt`, or `.py` files as primary source).
## Defaults
| Concern | Tool | Notes |
| --- | --- | --- |
| Package/venv management | `uv` | Replaces pip, pip-tools, and virtualenv |
| Linting + formatting | `ruff` | Replaces flake8, isort, black |
| Test runner | `pytest` | Unless repo already uses another runner |
| Type checking | `pyright` or `mypy` | Use whichever the repo already configures |
## Core Workflow
1. **Bootstrap** — `uv sync` (or `uv pip install -e ".[dev]"`) to create/refresh the venv.
2. **Lint** — `ruff check .` then `ruff format --check .` before committing.
3. **Test** — `pytest` with the repo's existing config. Follow TDD default policy.
4. **Add dependencies** — `uv add <pkg>` (runtime) or `uv add --dev <pkg>` (dev). Do not edit `pyproject.toml` dependency arrays by hand.
5. **Lock** — `uv lock` after dependency changes.
## Conventions
- Prefer `pyproject.toml` over `setup.py`/`setup.cfg` for new projects.
- Keep `ruff` config in `pyproject.toml` under `[tool.ruff]`.
- Use `uv run <cmd>` to execute tools inside the managed venv without activating it.
- Pin Python version via `.python-version` or `pyproject.toml` `requires-python`.
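A minimal `pyproject.toml` fragment showing the `[tool.ruff]` placement and version pin from the conventions above; the project name and values are illustrative assumptions:

```toml
[project]
name = "example-app"            # placeholder name
requires-python = ">=3.12"

[tool.ruff]
line-length = 100
target-version = "py312"
```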
## Docker Integration
When the repo runs Python inside Docker, install dependencies with `uv pip install` inside the container. Mount a named volume for the uv cache to speed up rebuilds.
## Red Flags
- Using `pip install` directly instead of `uv`.
- Running `black` or `isort` when `ruff` is configured.
- Missing `uv.lock` after dependency changes.
- Editing dependency arrays in `pyproject.toml` by hand instead of using `uv add`.

View File

@@ -1,53 +0,0 @@
---
name: receiving-code-review
description: Evaluate review feedback technically before acting; fix correct items and push back on incorrect ones with codebase evidence
permalink: opencode-config/skills/receiving-code-review/skill
---
# Receiving Code Review Feedback
## Core Workflow
When feedback arrives, follow this order:
1. **Read fully** before reacting.
2. **Understand the actual requirement** (restate it or ask a clarifying question).
3. **Verify against codebase reality** (current behavior, constraints, tests, compatibility).
4. **Decide whether feedback is correct for this codebase**.
5. **Then act**: implement the fix, or push back with technical reasoning.
## Guardrails
- Do not use performative praise or agreement.
- Do not promise fixes before verification.
- If any feedback is unclear, clarify first instead of partially implementing items you do understand.
## Processing Multi-Item Feedback
Apply items in this order:
1. Clarifications first.
2. Blocking/security issues.
3. Simpler items.
4. Complex items.
Test each change as you go.
## When to Push Back
Push back when a suggestion is incorrect for this codebase, breaks existing behavior, or ignores known constraints.
Push back style:
- Keep it technical and specific.
- Reference concrete code/tests/constraints.
- Propose a safer alternative when possible.
## When Feedback Is Correct
Implement the fix and report the concrete change.
Keep acknowledgments factual and concise; let verified code changes demonstrate agreement.
## Bottom Line
Technical correctness comes first: verify, decide, then fix or push back.

View File

@@ -1,61 +0,0 @@
---
name: requesting-code-review
description: Request a reviewer pass after each task or feature and before merge to catch issues early
permalink: opencode-config/skills/requesting-code-review/skill
---
# Requesting Code Review
Request a `reviewer` agent pass before changes move forward or merge.
## Core Workflow
Request review:
- After a completed task in a multi-task implementation
- After finishing a feature slice
- Before opening or merging a PR
Include all required context in the request:
- What was implemented
- Requirements or plan source (for example, `plans/<note-name>` in basic-memory)
- Brief summary of behavior and design choices
- Actual diff context (commit range and/or key changed files)
## How to Run It
1. Gather concrete diff context for the exact review scope:
```bash
BASE_REF=$(git rev-parse --abbrev-ref '@{upstream}' 2>/dev/null || echo origin/main)
BASE_SHA=$(git merge-base HEAD "$BASE_REF")
HEAD_SHA=$(git rev-parse HEAD)
git diff --stat "$BASE_SHA..$HEAD_SHA"
git diff "$BASE_SHA..$HEAD_SHA"
```
2. Dispatch `reviewer` with a focused request using:
- exact implemented scope
- the relevant `plans/` note or requirement text
- a concise summary
- the concrete diff range (`BASE_SHA..HEAD_SHA`) and any key files
Use `reviewer.md` as a request template.
3. Triage feedback before continuing:
- Fix critical issues immediately
- Address important issues before merge
- Track minor issues intentionally
- If feedback appears incorrect, reply with code/test evidence and request clarification
## Red Flags
Never:
- Skip review because a change seems small
- Continue with unresolved critical issues
- Request review without plan/requirement context
- Request review without concrete diff scope
## Related Skills
- `verification-before-completion`
- `git-workflow`

View File

@@ -1,49 +0,0 @@
---
title: reviewer-request-template
type: note
permalink: opencode-config/skills/requesting-code-review/reviewer-template
---
# Reviewer Request Template
Use this when dispatching the `reviewer` agent.
## What Was Implemented
<what-was-implemented>
## Requirements / Plan
- Plan note: `plans/<note-name>`
- Requirements summary:
- <requirement-1>
- <requirement-2>
## Summary
<brief summary of behavior/design choices>
## Diff Context
- Base: <base-sha>
- Head: <head-sha>
- Range: `<base-sha>..<head-sha>`
- Key files:
- <path-1>
- <path-2>
```bash
git diff --stat <base-sha>..<head-sha>
git diff <base-sha>..<head-sha>
```
## Reviewer Output Requested
1. Strengths
2. Issues by severity:
- Critical (must fix)
- Important (should fix before merge)
- Minor (nice to have)
3. Merge readiness verdict with short reasoning
For each issue include file:line, why it matters, and suggested fix.


@@ -1,96 +0,0 @@
---
name: subagent-driven-development
description: Execute a plan by dispatching one coder task at a time with ordered spec and quality gates
permalink: opencode-config/skills/subagent-driven-development/skill
---
# Subagent-Driven Development
Use this skill to execute an existing plan from `plans/*` by delegating **one implementation task at a time** to a fresh `coder`, then running ordered quality gates before moving to the next task.
## Core Workflow Value
1. Dispatch a fresh `coder` for exactly one task.
2. Run **spec-compliance review first**.
3. Run **code-quality review second**.
4. Run `tester` functional validation for user-visible behavior.
5. Only then mark the task done and continue.
Do not run multiple coder implementations in parallel for the same branch/worktree.
## When to Use
Use when you already have a concrete plan note (usually under `plans/`) and want controlled, high-signal execution with clear review loops.
Prefer this over ad-hoc execution when tasks are independent enough to complete sequentially and verify individually.
## Execution Loop (Per Task)
1. **Load task from plan**
- Read `plans/<plan-note>` once.
- Extract the exact task text and acceptance criteria.
- Prepare any architectural/context notes the coder needs.
2. **Dispatch coder with full context (no rediscovery)**
- Paste the **full task text** directly into the coder delegation.
- Paste relevant context (paths, constraints, discovered values, dependencies).
- Do not ask coder to rediscover the plan.
3. **Handle coder status explicitly**
- `DONE`: proceed to spec-compliance review.
- `PARTIAL`: resolve stated gaps, then re-dispatch remaining scope.
- `BLOCKED`: unblock (context/scope/approach) before retrying.
4. **Reviewer pass 1 — spec compliance (required first)**
- `reviewer` checks implementation against task requirements only:
- missing requirements
- extra/unrequested scope
- requirement misinterpretations
- If issues exist, send fixes back to `coder`, then re-run this pass.
5. **Reviewer pass 2 — code quality (only after spec pass)**
- `reviewer` checks maintainability and correctness quality:
- clarity of structure/responsibilities
- consistency with local conventions
- risk hotspots and change quality
- If issues exist, send fixes to `coder`, then re-run this pass.
6. **Tester pass — functional verification**
- `tester` validates behavior through real execution paths per local quality pipeline.
- If tester fails, return to `coder`, then re-run reviewer/tester as needed.
7. **Record and continue**
- Update the relevant `plans/*` checklist/note with concise implementation outcomes.
- Move to the next task only when all gates for the current task pass.
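The per-task gate order above can be sketched as a loop. This is an illustrative sketch only: `dispatch` is a hypothetical helper standing in for agent delegation, not a real API in this config.

```typescript
// Illustrative sketch of the per-task gate order. Each gate is re-run until it
// passes; a later gate never runs before every earlier gate has reported DONE.
type Status = 'DONE' | 'PARTIAL' | 'BLOCKED';
type Gate = 'coder' | 'reviewer:spec' | 'reviewer:quality' | 'tester';

function runTask(dispatch: (gate: Gate) => Status): Gate[] {
  const ran: Gate[] = [];
  const gates: Gate[] = ['coder', 'reviewer:spec', 'reviewer:quality', 'tester'];
  for (const gate of gates) {
    let status: Status;
    do {
      ran.push(gate);
      status = dispatch(gate);
    } while (status !== 'DONE'); // PARTIAL/BLOCKED loop back to the same gate
  }
  return ran; // the task is complete only after every gate reported DONE
}

// When every gate passes on the first try, each gate is dispatched exactly once.
const order = runTask(() => 'DONE');
```

In practice, `PARTIAL` and `BLOCKED` trigger gap resolution or unblocking before re-dispatch rather than a blind retry, but the ordering constraint is the same.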
## Dispatch Guidance
When delegating to `coder`, include:
- Task name and goal
- Full task text from the plan
- Exact constraints ("only this feature", target files, forbidden scope)
- Discovered values that must be used verbatim
- Required output format/status expectations
Keep delegations narrow. One coder dispatch should correspond to one task outcome.
## Red Flags
Never:
- skip spec-compliance review
- run code-quality review before spec-compliance passes
- mark a task done with open reviewer/tester findings
- make the coder re-read the entire plan for context already available
- batch multiple independent tasks into one coder implementation dispatch
## Local Integration
- Plan source of truth: basic-memory notes under `plans/`
- Implementation agent: `coder`
- Review agent: `reviewer` (spec pass, then quality pass)
- Functional validation agent: `tester`
- Overall gate order: `coder` → `reviewer(spec)` → `reviewer(quality)` → `tester`
This skill complements the repository's mandatory review/test pipeline by enforcing per-task execution discipline and ordered review loops.


@@ -1,92 +1,36 @@
---
name: systematic-debugging
description: Diagnose failures with a hypothesis-first workflow, evidence capture, and escalation rules aligned to planner/builder
permalink: opencode-config/skills/systematic-debugging/skill
---
# Systematic Debugging
## Overview
Use this skill when tests fail, behavior regresses, or the root cause is unclear.
Random fix attempts create churn and often introduce new issues.
## Workflow
**Core principle:** always identify root cause before attempting fixes.
1. Define the failure precisely (expected vs actual, where observed, reproducible command).
2. Capture a baseline with the smallest reliable repro.
3. List 1-3 concrete hypotheses and rank by likelihood.
4. Test one hypothesis at a time with targeted evidence collection.
5. Isolate the minimal root cause before proposing fixes.
6. Verify the fix with focused checks, then relevant regression checks.
## Evidence Requirements
- Record failing and passing commands.
- Keep key logs/errors tied to each hypothesis.
- Note why rejected hypotheses were ruled out.
## Planner/Builder Alignment
- Planner: use findings to shape bounded implementation tasks and verification oracles.
- Builder: if contradictions or hidden dependencies emerge, escalate back to planner.
- After two failed verification attempts, stop, record root cause evidence, and escalate.
## Output
- Root cause statement.
- Fix strategy linked to evidence.
- Verification results proving the issue is resolved and not regressed.


@@ -1,68 +0,0 @@
---
title: condition-based-waiting
type: note
permalink: opencode-config/skills/systematic-debugging/condition-based-waiting
---
# Condition-Based Waiting
## Overview
Arbitrary sleep durations create flaky tests and race conditions.
**Core principle:** wait for the condition that proves readiness, not a guessed delay.
## When to Use
Use this when:
- Tests rely on `sleep` or fixed `setTimeout` delays
- Asynchronous operations complete at variable speeds
- Tests pass locally but fail in CI or under load
Avoid arbitrary waits except when explicitly validating timing behavior (for example, debounce intervals), and document why timing-based waiting is necessary.
## Core Pattern
```ts
// ❌ Timing guess
await new Promise((r) => setTimeout(r, 100));
// ✅ Condition wait
await waitFor(() => getState() === 'ready', 'state ready');
```
## Generic Helper
```ts
async function waitFor<T>(
condition: () => T | false | undefined | null,
description: string,
timeoutMs = 5000,
pollMs = 10
): Promise<T> {
const started = Date.now();
while (true) {
const result = condition();
if (result) return result;
if (Date.now() - started > timeoutMs) {
throw new Error(`Timeout waiting for ${description} after ${timeoutMs}ms`);
}
await new Promise((r) => setTimeout(r, pollMs));
}
}
```
## Practical Guidance
- Keep polling intervals modest (for example, 10ms) to avoid hot loops.
- Always include a timeout and actionable error message.
- Query fresh state inside the loop; do not cache stale values outside it.
## Common Mistakes
- Polling too aggressively (high CPU, little benefit)
- Waiting forever without timeout
- Mixing arbitrary delays and condition checks without rationale


@@ -1,64 +0,0 @@
---
title: defense-in-depth
type: note
permalink: opencode-config/skills/systematic-debugging/defense-in-depth
---
# Defense in Depth
## Overview
A single validation check can be bypassed by alternate paths, refactors, or test setup differences.
**Core principle:** add validation at multiple layers so one missed check does not recreate the same failure.
## Layered Validation Model
### Layer 1: Entry Validation
Reject obviously invalid input at boundaries (CLI/API/public methods).
### Layer 2: Business-Logic Validation
Re-validate assumptions where operations are performed.
### Layer 3: Environment Guards
Block dangerous operations in sensitive contexts (for example, test/runtime safety guards).
### Layer 4: Diagnostic Context
Emit enough structured debug information to support future root-cause analysis.
## Applying the Pattern
1. Trace real data flow from entry to failure.
2. Mark all checkpoints where invalid state could be detected.
3. Add targeted validation at each relevant layer.
4. Verify each layer can catch invalid input independently.
## Example Shape
```ts
function createWorkspace(path: string) {
// Layer 1: entry
if (!path || path.trim() === '') {
throw new Error('path is required');
}
// Layer 2: operation-specific
if (!isPathAllowed(path)) {
throw new Error(`path not allowed: ${path}`);
}
}
async function dangerousOperation(path: string) {
// Layer 3: environment guard
if (process.env.NODE_ENV === 'test' && !isSafeTestPath(path)) {
throw new Error(`refusing unsafe path in test mode: ${path}`);
}
// Layer 4: diagnostic context
console.error('operation context', { path, cwd: process.cwd(), stack: new Error().stack });
}
```
## Key Outcome
Root-cause fixes prevent recurrence at the origin. Layered validation reduces the chance that adjacent paths can reintroduce the same class of bug.


@@ -1,66 +0,0 @@
---
title: root-cause-tracing
type: note
permalink: opencode-config/skills/systematic-debugging/root-cause-tracing
---
# Root-Cause Tracing
## Overview
Many bugs appear deep in a stack trace, but the origin is often earlier in the call chain.
**Core principle:** trace backward to the original trigger, then fix at the source.
## When to Use
Use this when:
- The symptom appears far from where bad input was introduced
- The call chain spans multiple layers or components
- You can see failure but cannot yet explain origin
## Tracing Process
1. **Capture the symptom clearly**
- Exact error text, stack frame, and context.
2. **Find immediate failure point**
- Identify the exact operation that throws or misbehaves.
3. **Walk one frame up**
- Determine who called it and with which values.
4. **Repeat until source**
- Continue tracing callers and values backward until you find where invalid state/data originated.
5. **Fix at source**
- Correct the earliest trigger rather than patching downstream symptoms.
## Instrumentation Tips
When manual tracing is hard, add targeted instrumentation before the risky operation:
```ts
const stack = new Error().stack;
console.error('debug context', {
input,
cwd: process.cwd(),
envMode: process.env.NODE_ENV,
stack,
});
```
Guidelines:
- Log before failure-prone operations, not after.
- Include values that influence behavior.
- Capture stack traces for call-path evidence.
## Common Mistake
**Mistake:** fixing where the error appears because it is visible.
**Better:** trace backward and fix where incorrect state is first introduced.
## Pair with Layered Defenses
After fixing the source, apply layered validation from `defense-in-depth.md` so similar failures are blocked earlier in the future.


@@ -1,77 +1,36 @@
---
name: test-driven-development
description: Apply red-green-refactor by default for code changes, with narrowly defined exceptions and explicit alternate verification
permalink: opencode-config/skills/test-driven-development/skill
---
# Test-Driven Development
## When to Use
Use this skill for all code changes unless a narrow exception applies.
If the work introduces or changes production behavior, TDD applies.
## Default Cycle
1. Red: add or identify a test that fails for the target behavior.
2. Green: implement the minimal code change to make the test pass.
3. Refactor: improve structure while keeping tests green.
4. Re-run focused and relevant regression tests.
## Narrow Exceptions
Allowed exceptions only:
- docs-only changes
- config-only changes
- pure refactors with provably unchanged behavior
- repos without a reliable automated test harness
When using an exception, state:
- why TDD was not practical
- what alternative verification was used
## Role Expectations
- Planner: specify tasks and verification that preserve red-green-refactor intent.
- Builder/Coder: follow TDD during implementation or explicitly invoke a valid exception.
- Tester/Reviewer: verify that TDD evidence (or justified exception) is present.


@@ -1,83 +0,0 @@
---
title: testing-anti-patterns
type: note
permalink: opencode-config/skills/test-driven-development/testing-anti-patterns
---
# Testing Anti-Patterns
Use this reference when writing/changing tests, introducing mocks, or considering test-only production APIs.
## Core Principle
Test real behavior, not mock behavior.
Mocks are isolation tools, not the subject under test.
## Anti-Pattern 1: Testing mock existence instead of behavior
**Problem:** Assertions only prove a mock rendered or was called, not that business behavior is correct.
**Fix:** Assert observable behavior of the unit/system under test. If possible, avoid mocking the component being validated.
Gate check before assertions on mocked elements:
- Am I validating system behavior or only that a mock exists?
- If only mock existence, rewrite the test.
## Anti-Pattern 2: Adding test-only methods to production code
**Problem:** Production classes gain methods used only by tests (cleanup hooks, debug helpers), polluting real APIs.
**Fix:** Move test-only setup/cleanup into test utilities or fixtures.
Gate check before adding a production method:
- Is this method needed in production behavior?
- Is this resource lifecycle actually owned by this class?
- If not, keep it out of production code.
## Anti-Pattern 3: Mocking without understanding dependencies
**Problem:** High-level mocks remove side effects the test depends on, causing false positives/negatives.
**Fix:** Understand dependency flow first, then mock the lowest-cost external boundary while preserving needed behavior.
Gate check before adding a mock:
1. What side effects does the real method perform?
2. Which side effects does this test rely on?
3. Can I mock a lower-level boundary instead?
If unsure, run against real implementation first, then add minimal mocking.
## Anti-Pattern 4: Incomplete mock structures
**Problem:** Mocks include only fields used immediately, omitting fields consumed downstream.
**Fix:** Mirror complete response/object shapes used in real flows.
Gate check for mocked data:
- Does this mock match the real schema/shape fully enough for downstream consumers?
- If uncertain, include the full documented structure.
## Anti-Pattern 5: Treating tests as a follow-up phase
**Problem:** "Implementation complete, tests later" breaks TDD and reduces confidence.
**Fix:** Keep tests inside the implementation loop:
1. Write failing test
2. Implement minimum code
3. Re-run tests
4. Refactor safely
## Quick Red Flags
- Assertions target `*-mock` markers rather than behavior outcomes
- Methods exist only for tests in production classes
- Mock setup dominates test logic
- You cannot explain why each mock is necessary
- Tests are written only after code "already works"
## Bottom Line
If a test does not fail first for the intended reason, it is not validating the behavior change reliably.
Keep TDD strict: failing test first, then minimal code.


@@ -1,100 +0,0 @@
---
name: tmux-session
description: Manage persistent terminal sessions in tmux for long-running processes, dev servers, and interactive tools — load when a task needs a background process or interactive shell
permalink: opencode-config/skills/tmux-session/skill
---
## When to Use This Skill
Load this skill when a task requires:
- Running a **dev server or watcher** that must stay alive across multiple tool calls (e.g. `npm run dev`, `cargo watch`, `pytest --watch`)
- An **interactive REPL or debugger** that needs to persist state between commands
- Running a process **in the background** while working in the main pane
- **Parallel worktree work** where each feature branch gets its own named window
Do NOT use tmux for one-shot commands that complete and exit — use `bash` directly for those.
## Naming Convention
All opencode-managed sessions and windows use the `oc-` prefix:
| Resource | Name pattern | Example |
|---|---|---|
| Named session | `oc-<project>` | `oc-myapp` |
| Named window | `oc-<feature>` | `oc-auth-refactor` |
| Background process window | `oc-bg-<process>` | `oc-bg-dev-server` |
## Starting a Persistent Session
```bash
# Check if already inside tmux
echo $TMUX
# Start a new named session (detached) for a long-running process
tmux new-session -d -s oc-bg-dev-server "npm run dev"
# Or in a new window within the current session
tmux new-window -n oc-bg-dev-server "npm run dev"
```
## Sending Commands to a Running Session
```bash
# Send a command to a named session
tmux send-keys -t oc-bg-dev-server "npm run build" Enter
# Read the last N lines of output from a pane
tmux capture-pane -t oc-bg-dev-server -p | tail -20
```
## Checking if a Session/Window Exists
```bash
# Check for a named session
tmux has-session -t oc-bg-dev-server 2>/dev/null && echo "running" || echo "not running"
# List all oc- prefixed windows in current session
tmux list-windows -F "#{window_name}" | grep "^oc-"
```
## Worktree + Window Workflow
When working across multiple git worktrees, open each in its own tmux window:
```bash
# Create worktree and open it in a dedicated window
git worktree add .worktrees/auth-refactor -b auth-refactor
tmux new-window -n oc-auth-refactor -c .worktrees/auth-refactor
```
Switch between worktrees by switching windows:
```bash
tmux select-window -t oc-auth-refactor
```
## Cleanup
Always clean up sessions and windows when done:
```bash
# Kill a specific window
tmux kill-window -t oc-bg-dev-server
# Kill a detached session
tmux kill-session -t oc-bg-dev-server
# Kill all oc- prefixed windows in current session
tmux list-windows -F "#{window_name}" | grep "^oc-" | xargs -I{} tmux kill-window -t {}
```
## Checking Process Output
Before assuming a background process is healthy, capture its recent output:
```bash
# Capture last 30 lines of a pane
tmux capture-pane -t oc-bg-dev-server -p -S -30
# Check if process is still running (exit code 0 = alive)
tmux has-session -t oc-bg-dev-server 2>/dev/null
```


@@ -1,47 +1,34 @@
---
name: verification-before-completion
description: Require evidence-backed verification before completion claims or final handoff
permalink: opencode-config/skills/verification-before-completion/skill
---
# Verification Before Completion
Use this skill before declaring work done, handing off, or approving readiness.
## Verification Checklist
1. Re-state the promised outcome and scope boundaries.
2. Run the smallest reliable checks that prove requirements are met.
3. Run broader regression checks required by project workflow.
4. Confirm no known failures are being ignored.
5. Report residual risk, if any, explicitly.
## Evidence Standard
- Include exact commands run.
- Include pass/fail outcomes and key output.
- Tie evidence to each acceptance condition.
## Role Expectations
- Builder and tester: no completion claim without concrete verification evidence.
- Reviewer: reject completion claims that are not evidence-backed.
- Coder: point to verification evidence from assigned lane before signaling done.
## If Verification Fails
- Do not claim partial completion as final.
- Return to debugging or implementation with updated hypotheses.


@@ -1,201 +0,0 @@
---
name: work-decomposition
description: Procedure for decomposing multi-feature requests into independent workstreams — load when user requests 3+ features or features span independent domains
permalink: opencode-config/skills/work-decomposition/skill
---
## When to Load
Load this skill when any of these conditions are true:
- User requests **3 or more distinct features** in a single message or session.
- Requested features span **independent domains** (e.g., frontend-only + backend API + new service).
- Requested features have **mixed risk profiles** (e.g., UI tweak + encryption + new auth surface).
This skill supplements planning. Load it **before the PLAN phase** and follow the procedure below.
## Decomposition Procedure
### Step 1: Identify Features
List each distinct feature the user requested. A feature is a user-visible capability or behavior change.
Rules:
- If a request contains sub-features that can be independently shipped and tested, count them separately.
- "Add temperature display" and "add optimize button" are two features, even if both touch the same page.
- "Encrypted API key storage" and "wire recommendations to use stored keys" are two features — storage is infrastructure, wiring is application logic.
### Step 2: Assess Independence
For each pair of features, evaluate:
- Do they share modified files?
- Does one depend on the other's output/data?
- Do they touch the same data models or APIs?
- Could one ship to production without the other?
Group features that share hard dependencies into the same **workstream**. Features with no shared dependencies are **independent workstreams**.
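The pairwise independence check above amounts to connected-component grouping: any hard dependency merges two features into the same workstream. A minimal sketch, with hypothetical feature names:

```typescript
// Group features into workstreams: any hard dependency (shared files, data
// coupling, ship-together constraint) merges two features into one workstream.
function groupWorkstreams(
  features: string[],
  dependencies: [string, string][]
): string[][] {
  // Union-find over feature names.
  const parent = new Map<string, string>(features.map((f) => [f, f]));
  const find = (f: string): string => {
    while (parent.get(f) !== f) f = parent.get(f)!;
    return f;
  };
  for (const [a, b] of dependencies) parent.set(find(a), find(b)); // union
  const groups = new Map<string, string[]>();
  for (const f of features) {
    const root = find(f);
    groups.set(root, [...(groups.get(root) ?? []), f]);
  }
  return [...groups.values()];
}

// Example: key storage and recommendation wiring share a hard dependency;
// the optimize button is independent.
groupWorkstreams(
  ['optimize-button', 'key-storage', 'recommendations'],
  [['key-storage', 'recommendations']]
);
// → [['optimize-button'], ['key-storage', 'recommendations']]
```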
### Step 3: Classify Risk Profile
For each workstream, classify its highest-risk feature:
| Risk | Triggers | Quality Pipeline |
|------|----------|-----------------|
| **Low** | Frontend-only, config changes, copy/UI tweaks | Tier 3 (fast) |
| **Medium** | New API endpoints, data model changes, third-party integrations | Tier 2 (standard) |
| **High** | Auth/security changes, encryption, new service surfaces (MCP, webhooks, SSO), data migration, secret handling | Tier 1 (full) + human checkpoint |
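The risk-to-tier mapping above can be expressed as a lookup over a workstream's classified features. Trigger classification itself is judgment-based; this sketch only encodes the tier selection rule.

```typescript
// Sketch: a workstream runs the pipeline tier of its single highest-risk feature.
type Risk = 'low' | 'medium' | 'high';

const TIER_BY_RISK: Record<Risk, 1 | 2 | 3> = { low: 3, medium: 2, high: 1 };

function pipelineTier(risks: Risk[]): 1 | 2 | 3 {
  const order: Risk[] = ['high', 'medium', 'low'];
  const highest = order.find((r) => risks.includes(r)) ?? 'low';
  return TIER_BY_RISK[highest];
}

pipelineTier(['low', 'high']); // → 1 (any high-risk feature forces the full pipeline)
```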
### Step 4: Allocate Workstreams → Worktrees
Each independent workstream gets its own:
- **Worktree**: `.worktrees/<workstream-name>`
- **Branch**: `feat/<workstream-name>`
- **PR**: Separate pull request for independent review
- **Quality pipeline**: Independent coder → reviewer → tester cycle
Exception: Two low-risk features that touch the same area (e.g., two UI tweaks in the same component) may share a worktree if they can be implemented and committed sequentially.
### Step 5: Order Workstreams
- If workstreams have dependencies, order them so dependent work starts after its prerequisite is merged or at least reviewed.
- If independent, dispatch in parallel (multiple coders simultaneously).
- Prefer shipping lower-risk workstreams first — they unblock value sooner and reduce in-flight complexity.
### Step 6: Present Decomposition to User
**Before proceeding to implementation**, present the decomposition to the user:
```
Proposed workstreams:
1. [workstream-name] (risk: low/medium/high)
Features: [list]
Worktree: .worktrees/[name]
Branch: feat/[name]
Estimated pipeline: Tier [1/2/3]
2. [workstream-name] (risk: low/medium/high)
...
Execution order: [1] → [2] (or [1] and [2] in parallel)
Human checkpoints: [list any high-risk decisions needing approval]
```
Wait for user approval before proceeding. If the user adjusts grouping, update accordingly.
## Human Checkpoint Triggers
The Lead **MUST** stop and ask the user for explicit approval before dispatching coder work when **ANY** of these conditions are met:
### Mandatory Checkpoints
1. **Security-sensitive design**: Encryption approach, auth model/flow, secret storage mechanism, token management, permission model changes.
2. **Architectural ambiguity**: Multiple valid approaches with materially different tradeoffs that aren't resolvable from codebase context alone (e.g., MCP SDK vs REST endpoints, embedded vs external service, SQL vs NoSQL for new data).
3. **Vision-dependent features**: Features where the user's intended UX, behavior model, or product direction isn't fully specified (e.g., "improve recommendations" — improve how? what inputs? what output format?).
4. **New external dependencies**: Adding a new service, SDK, or infrastructure component not already in the project.
5. **Data model changes with migration impact**: New models or schema changes that affect existing production data.
### Checkpoint Format
When triggering a checkpoint, present:
- The specific design decision that needs input
- 2-3 concrete options with pros/cons/tradeoffs
- Your recommendation and rationale
- What you'll do if the user doesn't respond (safe default)
### What Is NOT a Checkpoint
Do not interrupt the user for:
- Implementation details (naming, file organization, code patterns)
- Choices fully determined by existing codebase conventions
- Decisions already covered by prior user answers or cached guidance in basic-memory project `decisions/` notes
## Coder Dispatch Rules
### One Feature Per Coder — No Exceptions
Each coder invocation must implement **exactly one feature** or a tightly-coupled subset of one feature. This is a hard rule, not a guideline.
### Why This Matters
- Focused prompts produce higher-quality implementations
- Each feature goes through its own review/test cycle independently
- Failures in one feature don't block others
- Commits stay atomic and revertable
### Parallel Dispatch
If features are independent (different files, no shared state), dispatch multiple coder invocations **simultaneously in the same message**. This is faster than sequential single-feature dispatches.
### Coder Prompt Requirements
Each coder dispatch MUST include:
1. **Single feature** description with acceptance criteria
2. **Specific file paths and edit points** from discovery (not vague references)
3. **Discovered values verbatim**: i18n keys, API signatures, component names, existing patterns
4. **Worktree path** for all file operations
5. **Active plan note** for the task (for example: a basic-memory project `plans/<feature>` note)
6. **Quality tier** so coder understands the expected rigor
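A dispatch prompt satisfying all six requirements might look like this (the feature, paths, keys, and note names are illustrative):
```
Feature: temperature display (1 of 1)
Acceptance: current temperature renders in the dashboard header; unit follows user settings
Files: src/components/DashboardHeader.tsx (add display), src/api/weather.ts (existing getTemperature(unit))
Discovered values: i18n key `dashboard.temperature.label`; API returns `{ value: number, unit: "C" | "F" }`
Worktree: .worktrees/optimize-and-temp
Plan note: plans/temperature-display
Quality tier: 2
```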
### Anti-patterns — Never Do These
- ❌ Sending 2+ unrelated features to one coder invocation
- ❌ Saying "implement the phased plan" without specifying which single feature
- ❌ Including features the critic said to defer or drop
- ❌ Embedding unresolved blockers as "constraints" for the coder to figure out
- ❌ Proceeding past a RESOLVE verdict without actually resolving the blockers
## Commit Strategy
Within each workstream/worktree:
- **One commit per feature** after it passes review/test
- Conventional Commits format: `feat: <what and why>` (use `fix:`, `refactor:`, etc. where appropriate)
- If a workstream contains 2 closely-related features, commit them separately (not as one giant diff)
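A minimal sketch of this flow with git worktrees; the repo, workstream, and feature names are placeholders, and the demo repo is created in a temp directory so the commands are self-contained:

```shell
# Illustrative only: workstream and feature names are made up.
set -eu
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email "demo@example.com"
git config user.name "Demo"
git commit -q --allow-empty -m "chore: initial commit"

# One worktree and branch per workstream:
git worktree add .worktrees/optimize-and-temp -b feat/optimize-and-temp
cd .worktrees/optimize-and-temp

# One commit per feature, after each passes review/test:
echo "button" > optimize_button.txt
git add optimize_button.txt
git commit -q -m "feat: add optimize button to reduce redundant fetches"

echo "temp" > temperature.txt
git add temperature.txt
git commit -q -m "feat: show temperature with user-preferred unit"

git log --oneline
```

Each feature lands as its own revertable commit on the workstream branch, which is what makes the per-feature review/test cycle enforceable.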
## Decomposition Decision Examples
### Example: 4 features requested (like session ses_3328)
```
User requests: optimize button, temperature display, AI recommendations with key storage, MCP server
Analysis:
- Optimize button: frontend-only, low risk, independent
- Temperature: backend+frontend, medium risk, independent
- AI recommendations + key storage: backend, HIGH risk (encryption, secrets), independent
- MCP server: new service surface, HIGH risk (auth, architecture), independent
Decomposition:
Workstream 1: optimize-and-temp (low/medium risk, Tier 2)
- .worktrees/optimize-and-temp, feat/optimize-and-temp
- Coder A: optimize button → review → test → commit
- Coder B: temperature display → review → test → commit
- PR #1
Workstream 2: ai-recommendations (HIGH risk, Tier 1)
- .worktrees/ai-recommendations, feat/ai-recommendations
- Human checkpoint: encryption approach + key storage design
- Coder C: encrypted key CRUD → security review → test → commit
- Coder D: wire recommendations → review → test → commit
- PR #2
Workstream 3: mcp-server (HIGH risk, Tier 1) — or DEFERRED per critic
- Human checkpoint: MCP SDK vs REST, auth model
- .worktrees/mcp-server, feat/mcp-server
- Coder E: MCP implementation → security review → adversarial test → commit
- PR #3
```
### Example: 2 tightly-coupled features
```
User requests: add dark mode toggle + persist theme preference
Analysis:
- Toggle and persistence are tightly coupled (same state, same UX flow)
- Both touch settings page + theme context
Decomposition:
Single workstream: dark-mode (medium risk, Tier 2)
- One worktree, one branch, one PR
- Coder A: toggle + persistence together (they share state)
- Or split if toggle is pure UI and persistence is API: two coder calls
```

View File

@@ -1,95 +1,35 @@
---
name: writing-plans
description: Planner workflow for producing execution-ready approved plans with explicit scope, lanes, and verification oracle
permalink: opencode-config/skills/writing-plans/skill
---
# Writing Plans
## When to Use
Use this skill when converting intent into an execution-ready `plans/<slug>` note.
## Required Plan Shape
Every approved plan must include:
- Objective
- Scope and out-of-scope boundaries
- Constraints and assumptions
- Concrete task list
- Parallelization lanes and dependency notes
- Verification oracle
- Risks and open findings
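A `plans/<slug>` note covering these sections might be sketched as follows; the feature, file paths, and test command are illustrative, not a required layout:
```markdown
# plans/dark-mode
Status: draft
## Objective
Add a dark mode toggle that persists the user's theme preference.
## Scope
In: settings toggle, theme context, preference persistence.
Out of scope: per-component theme overrides.
## Constraints and Assumptions
Assumes the existing `ThemeContext` is the single source of truth for theme state.
## Tasks
- [ ] Add toggle to settings page
- [ ] Persist preference via the settings API
## Lanes and Dependencies
Lane A: toggle UI. Lane B: persistence. B depends on A's context change.
## Verification Oracle
`pytest tests/settings/test_theme.py -v` expected FAIL before implementation, PASS after.
## Risks and Findings
Possible flash of wrong theme on first paint; tracked as an open finding.
```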
## Workflow
1. Gather enough evidence to remove guesswork.
2. Decompose work into bounded tasks with clear owners.
3. Define verification per task and for final integration.
4. Check contract alignment with planner → builder handoff rules.
5. Mark `Status: approved` only when execution can proceed without improvisation.
## Quality Gates
- Use only basic-memory `plans/` notes for plan storage.
- No ambiguous acceptance criteria.
- No hidden scope expansion.
- Verification is specific and runnable.