Stagent
jie-xu/software-debug · public

No description.

by jie xu · updated Apr 16, 2026 · 4 stages · 0 runs

Run in Claude Code

/stagent:start --flow=cloud://jie-xu/software-debug <task_description>

Paste in Claude Code and replace <task_description>
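A filled-in invocation might look like the following; the task description here is invented purely for illustration:

```
/stagent:start --flow=cloud://jie-xu/software-debug Fix the 500 error when submitting an empty form
```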

Template blueprint

State machine


Stage: planning

planning.md

inline · interruptible · transitions: approved → debugging

Stage: planning

Runtime config (canonical): workflow.json → stages.planning

Purpose: Reproduce the bug, understand its root cause, and produce an agreed fix plan, including acceptance criteria and test strategy.

Output artifact: `<project>/.dev-workflow/<session_id>/planning-report.md`

Valid results this stage writes: `pending` (plan drafted, awaiting user approval), `approved` (user has explicitly confirmed)

<HARD-GATE> Do NOT transition out of this stage until the user explicitly confirms the plan. Write `result: approved` only after they have said so. </HARD-GATE>

This is an interruptible stage — the stop hook allows natural pauses for Q&A.

Note: the workflow setup script (setup-workflow.sh) has already run. By the time you read this file, state.md already exists with `status: planning`, and the epoch is set.
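The exact state.md schema is defined by setup-workflow.sh; based only on the two fields named above, a plausible shape (the front-matter layout and epoch value are assumptions) is:

```markdown
---
status: planning
epoch: 42
---
```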

Explore context

Understand the project: language, tech stack, test framework, existing test coverage, and the specific area of code affected by the bug.

Gather bug information

Ask the user clarifying questions (one at a time, prefer A/B/C options, cap at 5 questions):

  • What is the bug? (symptom, expected vs actual behavior)
  • How to reproduce it? (steps, environment, inputs)
  • Is there an error message, stack trace, or log output?
  • Has this ever worked? Was it introduced by a recent change?
  • What is the priority / severity?

Stop asking once you can draft a fix plan.
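One way the A/B/C format above might look in practice; the symptoms listed are purely illustrative:

```markdown
How does the bug manifest?
  A. An error message or stack trace appears
  B. The operation completes but the output is wrong
  C. The process hangs or crashes
```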

Reproduce the bug

Attempt to reproduce the bug in the codebase:

  • Search for relevant code, error messages, or stack trace symbols
  • Identify the likely faulty component or file(s)
  • Note whether there are existing tests that should have caught this
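The search step above can be sketched with plain grep; the directory tree and the function name below are stand-ins created just for the demo, not this project's code:

```shell
# Illustrative sketch: hunt the tree for a symbol lifted from the stack trace.
# demo/src and parse_order are placeholders written here for the example.
mkdir -p demo/src
printf 'def parse_order(o):\n    return o["id"]\n' > demo/src/orders.py
grep -rn 'parse_order' demo/src    # locate the suspect definition with file:line
```

In a real session you would grep the project's source and test directories for the literal error message or stack-trace symbols from the bug report.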

Propose fix approaches

2–3 options with trade-offs; lead with your recommendation. Examples:

  • Targeted patch (surgical fix, minimal risk)
  • Refactor of the affected component (higher confidence, more risk)
  • Workaround / defensive guard (fast, but deferred root cause)

Present the plan

Write the output artifact once you and the user agree on the approach:

```markdown
---
epoch: <epoch>
result: pending
---
# Planning Report: <Bug Title>

## Bug Summary
<Symptom, expected behavior, actual behavior, reproduction steps>

## Root Cause Hypothesis
<Most likely cause based on code exploration>

## Fix Strategy
<Chosen approach and rationale>

## Files to Change
- `<file>` — <what will change and why>

## Acceptance Criteria
- [ ] Bug is no longer reproducible
- [ ] Existing tests pass
- [ ] <any additional criteria>

## Testing Strategy

### Quick Tests
- Framework: <e.g. jest / pytest / go test — or "none">
- Key cases:
  - [ ] Regression test covering the exact bug scenario
  - [ ] Edge cases around the fix

### E2E / Journey Tests
- Framework: <e.g. playwright / cypress / none>
- Key user paths:
  - [ ] <path that exercises the buggy behavior end-to-end>
```

`result: pending` signals "plan written but not approved yet."

Get user approval

Tell the user: "Plan saved. Please review and confirm to start debugging, or request changes."

If the user requests changes, iterate on the plan and keep `result: pending`.

Finalize

Once the user explicitly approves, edit the output artifact: change `result: pending` to `result: approved`.
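The finalize edit amounts to a one-line front-matter change. A minimal sketch, assuming a session path and epoch invented for the demo (the artifact is written here as a stand-in):

```shell
# Stand-in artifact; a real session uses <project>/.dev-workflow/<session_id>/.
report=".dev-workflow/demo-session/planning-report.md"
mkdir -p "$(dirname "$report")"
printf -- '---\nepoch: 42\nresult: pending\n---\n' > "$report"

# Flip the front-matter result once the user has approved the plan.
sed -i 's/^result: pending$/result: approved/' "$report"
grep '^result:' "$report"    # → result: approved
```

In practice you would make this edit with your file-editing tool rather than sed; the point is that only the `result:` line changes.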

That is the only action needed here. Step (e) of the SKILL.md main loop reads the artifact's `result:` and calls update-status.sh to advance the state machine; do NOT call it yourself.

workflow.json (raw config) drives the state machine above.