Pattern 06. Explicit Human Review Gate — Nodes That Require Mandatory Human Verification

Learning Objectives

After completing this tutorial, you will be able to:

Prerequisites

euleragent agent list
euleragent pattern list

1. Why Is a Human Review Gate Necessary?

A Judge node (Tutorial 04) has an LLM evaluate another LLM's output. While this is fast and automated, it falls short in the following situations.

Legal/compliance documents: Medical information, legal advice, and financial guidance must be reviewed by a human.

Final review before external publication: An editor reviews blog posts, reports, and emails before they are actually published.

Security audit results: Vulnerability reports and patch plans must be approved by a security officer.

Client deliverables: Documents delivered under a person's name must be reviewed by that person, even if generated by AI.

A human review gate is a pattern that creates mandatory human intervention points within an automated flow.


2. Core Mechanism: force_tool: file.write + mode: plan

Why file.write?

Here is why the combination of force_tool: file.write + mode: plan serves as a human review gate:

  1. The LLM makes a "proposal" to write a file (including its content)
  2. The system enters a paused state
  3. A person reviews the proposed file content
  4. If the content is not satisfactory, the person can directly edit the file
  5. After approval, the file is actually saved
LLM proposes the draft
  → approve list (preview the content)
  → human can edit the content directly
  → accept-all --execute (file is saved)
  → resume (next node)

The Role of max_loops: 1

A gate node should execute exactly once. Without max_loops: 1, the gate could fire multiple times in a loop pattern.

runner:
  mode: plan
  force_tool: file.write
  max_loops: 1    # this node runs exactly once
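The pause-once semantics described above can be sketched in a few lines of Python. This is a toy model of the mechanism, not EulerAgent's implementation; the class and exception names are illustrative:

```python
class HITLPause(Exception):
    """Raised when the gate needs human approval before its file.write runs."""

class GateNode:
    """Toy model of a plan-mode gate node with force_tool and max_loops: 1."""
    def __init__(self):
        self.max_loops = 1
        self.completed_runs = 0
        self.approved = False

    def execute(self):
        if self.completed_runs >= self.max_loops:
            return "skipped"      # later loop iterations: the gate never fires again
        if not self.approved:
            # plan mode: propose the file.write, then pause for human review
            raise HITLPause("pending file.write approval")
        self.completed_runs += 1
        return "file saved"

gate = GateNode()
try:
    gate.execute()                # first pass: pauses for review
except HITLPause:
    gate.approved = True          # human runs `approve accept-all --execute`
print(gate.execute())             # file saved
print(gate.execute())             # skipped
```

The key point is that approval is flipped from outside the node, and the run counter guarantees the gate cannot fire again on later loop iterations.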

3. Pattern Design

A pattern where a draft is written, then a person reviews it, and after review the Judge performs final evaluation.

[draft] → [human_review] ──HITL PAUSE──► human reviews/edits
              │ (after the file is saved)
              │ when: approvals_resolved
              ▼
          [evaluate] → finalize
              │
              └── revise → [evaluate] (loop)

Full flow:

┌─────────────────────────────────────────────────────────────────┐
│ writing.human_review pattern flow                               │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  [draft]                                                        │
│     │ write the draft (llm/execute)                             │
│     │ when: true                                                │
│     ▼                                                           │
│  [human_review]  ◄─── always HITL PAUSE (file.write)            │
│     │ proposes saving draft to a file (llm/plan, max_loops=1)   │
│     │ human can review/edit the file                            │
│     │ when: approvals_resolved                                  │
│     ▼                                                           │
│  [evaluate]  ─────── when: judge.route == finalize ─────────────┐
│     │ evaluates reviewed-draft quality (judge/evaluator_v1)     │
│     │ when: judge.route == revise                               │
│     ▼                                                           │
│  [revise]                                                       │
│     │ revise (llm/execute)                                      │
│     └────────────────────────► [evaluate] (max 2 times)         │
│                                                                 │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │ [FINALIZE]  save reviewed_document.md                    │◄──┘
│  └──────────────────────────────────────────────────────────┘   │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

4. Writing the YAML

Create the writing_with_human_review.yaml file.

id: writing.human_review
version: 1
category: writing
description: "Document-writing pattern in which the draft passes through a human review gate"

defaults:
  max_iterations: 2
  max_total_tool_calls: 10
  pass_threshold: 0.85

nodes:
  # ── node 1: draft ──
  - id: draft
    kind: llm
    runner:
      mode: execute
      exclude_tools: [web.search, web.fetch, shell.exec]
    prompt:
      system_append: |
        You are a technical writer.
        Write a complete draft on the given topic.

        Requirements:
        - Professional, clear style
        - Sensible section structure
        - Practical examples included
        - Markdown formatting

        This draft will be reviewed by a human editor.
        Make it polished, but leave room for the human to edit.
    artifacts:
      primary: draft.md

  # ── node 2: human_review (human review gate) ──
  - id: human_review
    kind: llm
    runner:
      # mode: plan + force_tool: file.write = the core of the human review gate
      mode: plan
      force_tool: file.write

      # max_loops: 1 -- this node executes exactly once
      # prevents the gate from repeating in patterns that contain loops
      max_loops: 1

    prompt:
      system_append: |
        You are a document management assistant.
        Propose saving the completed draft (draft.md) as a file for review.

        When saving the file:
        - Path: review/document_for_review.md
        - Add the following header at the top of the file:
          <!-- Review requested: [date]
               Reviewer: [assignee]
               Review focus: factual accuracy, tone, unnecessary sections
          -->
        - Then include the draft content unchanged

        This file will be edited directly by a human.

    # file.write budget (one call is usually enough)
    guardrails:
      tool_call_budget:
        file.write: 1

    artifacts:
      primary: review/document_for_review.md

  # ── node 3: evaluate (Judge) ──
  - id: evaluate
    kind: judge
    judge:
      schema: evaluator_v1
      route_values: [finalize, revise]
    prompt:
      system_append: |
        Evaluate the human-reviewed/edited document (review/document_for_review.md).

        Note: a human editor has already reviewed this document.
        Focus on publication readiness rather than technical content:

        - Structural completeness (30%): do all sections connect logically?
        - Language quality (30%): is the style consistent and clear?
        - Publication readiness (40%): can it be published as-is, without edits?

        score >= 0.85 → finalize
        score < 0.85 → revise (state concrete improvements)

  # ── node 4: revise ──
  - id: revise
    kind: llm
    runner:
      mode: execute
      exclude_tools: [web.search, web.fetch, shell.exec]
    prompt:
      system_append: |
        Improve the document by incorporating the editor's feedback.
        Work from the human-edited content (review/document_for_review.md).
        Rewrite the full revised document.
    artifacts:
      primary: review/document_for_review.md

edges:
  - from: draft
    to: human_review
    when: "true"

  # the human_review node moves on only after the HITL pause has been approved
  - from: human_review
    to: evaluate
    when: "approvals_resolved"

  - from: evaluate
    to: finalize
    when: "judge.route == finalize"

  - from: evaluate
    to: revise
    when: "judge.route == revise"

  - from: revise
    to: evaluate
    when: "true"

finalize:
  artifact: review/document_for_review.md

5. Validation

euleragent pattern validate writing_with_human_review.yaml

Expected output:

Validating pattern: writing_with_human_review.yaml

  Stage 1 (Schema)      PASS
  Stage 2 (Structural)  PASS
  Stage 3 (IR Analysis) PASS
    HITL gates: human_review (file.write, max_loops=1) ✓
    Note: human_review gate will pause exactly once per run ✓
    Cycle bounded: max_iterations=2 ✓

Validation complete: 0 errors, 0 warnings

6. Step-by-Step Execution

Step 1: Start Execution

cp writing_with_human_review.yaml .euleragent/patterns/

euleragent pattern run writing.human_review my-agent \
  --task "Write an internal AI tool usage guideline document covering permitted use cases, prohibited actions, and data security precautions" \
  --project default

Expected output:

[run:i9e5d3c7] Starting pattern: writing.human_review

  ✓ draft         Completed (14s) — draft.md generated (1,456 words)
  ⏸ human_review  PAUSED — Waiting for HITL approval (file.write × 1)

Approval required:
  euleragent approve list --run-id i9e5d3c7

Step 2: Preview File Content

euleragent approve list --run-id i9e5d3c7

Expected output:

Pending Approvals for run: i9e5d3c7
─────────────────────────────────────────────────
Node: human_review (max_loops=1, human gate)

  #1  file.write  path=review/document_for_review.md
      Content preview (first 20 lines):
      ┌─────────────────────────────────────────────┐
      │ <!-- Review requested: 2026-02-23           │
      │      Reviewer: [assignee]                   │
      │      Review focus: accuracy, tone, sections │
      │ -->                                         │
      │                                             │
      │ # Internal AI Tool Usage Guidelines         │
      │                                             │
      │ ## 1. Purpose                               │
      │ This guideline was written for the          │
      │ responsible use of AI tools...              │
      └─────────────────────────────────────────────┘
      [Full content: 1,456 words, 89 lines]

  ⚠️  HUMAN GATE: This node requires human review before proceeding.
       You may edit the file content before approving.

Step 3: Directly Edit Content (Optional)

If you want to modify the content after seeing the preview, you can directly edit the file.

# Extract the file content first, then edit it
euleragent approve show --run-id i9e5d3c7 --item 1 > /tmp/review_draft.md

# Edit with your preferred editor
vim /tmp/review_draft.md
# or
code /tmp/review_draft.md

# Approve, substituting the edited content
euleragent approve accept --run-id i9e5d3c7 --item 1 \
  --content-file /tmp/review_draft.md \
  --execute

To approve without any modifications:

euleragent approve accept-all --run-id i9e5d3c7 --execute

Expected output:

Accepted item #1 (file.write) for run: i9e5d3c7
Writing file: review/document_for_review.md ... OK (1,456 bytes)
Human gate passed. Resuming pattern...

Step 4: Resume the Pattern

euleragent pattern resume i9e5d3c7 --execute

Expected output:

[run:i9e5d3c7] Resuming from: human_review → evaluate

  ✓ human_review  Completed — review/document_for_review.md saved
  ✓ evaluate      Completed — score: 0.93 → route: finalize
  ✓ finalize      Completed

Artifact: .euleragent/runs/i9e5d3c7/artifacts/review/document_for_review.md

7. Verifying the Saved File

# Final document as reviewed/edited by a human
cat .euleragent/runs/i9e5d3c7/artifacts/review/document_for_review.md

# Original AI draft (for comparison)
cat .euleragent/runs/i9e5d3c7/artifacts/draft.md

# Approval log
cat .euleragent/runs/i9e5d3c7/approvals.jsonl

Example approval record:

{"ts":"2026-02-23T14:40:22Z","run_id":"i9e5d3c7","node":"human_review","action":"accept","item":1,"tool":"file.write","path":"review/document_for_review.md","content_modified":true,"operator":"sean","note":"Added content to the prohibitions section"}
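Because approvals.jsonl is newline-delimited JSON, a short script can list which approvals a human actually edited before accepting. This is a sketch that assumes only the field names shown in the example record above:

```python
import json

def human_edited(jsonl_text):
    """List (node, path, operator) for approvals whose content a human modified."""
    rows = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue                      # tolerate blank lines
        rec = json.loads(line)
        if rec.get("action") == "accept" and rec.get("content_modified"):
            rows.append((rec["node"], rec["path"], rec["operator"]))
    return rows

# A single record, matching the field layout of the example above
record = ('{"ts":"2026-02-23T14:40:22Z","run_id":"i9e5d3c7","node":"human_review",'
          '"action":"accept","item":1,"tool":"file.write",'
          '"path":"review/document_for_review.md","content_modified":true,'
          '"operator":"sean","note":"Added content to the prohibitions section"}')
print(human_edited(record))
# [('human_review', 'review/document_for_review.md', 'sean')]
```

A scan like this makes audits straightforward: every gate decision, including whether the human changed the content, is one JSON object per line.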

8. Differences Between Judge Evaluation and Human Review

Criterion                        Judge (Automated)                Human Gate (Manual)
Speed                            Fast (seconds)                   Slow (minutes to hours)
Consistency                      High (always the same criteria)  Low (varies by person)
Creative judgment                Low                              High
Regulatory compliance assurance  Low                              High
Accountability                   AI                               Human
Editing capability               None                             Yes

Use a Judge when: Quality criteria are well-defined, speed matters, and there are many repeated runs.

Use a Human Gate when: There is legal/ethical responsibility, creative judgment is required, or content will be published externally.


9. Key Concepts Explained

What If human_review Is Inside a Loop?

If there were a revise → human_review → evaluate loop, without max_loops: 1 the human would need to review at every iteration. While this could be intentional, it is inefficient in most cases.

Setting max_loops: 1 ensures this node executes only once for the entire run. On the second loop iteration, this node is skipped.

approvals_resolved vs true

If you use when: "true" on the edge after the human_review node, the edge condition is already satisfied when the node pauses, so the run can advance to evaluate before the pending file.write approval has been accepted, and the gate no longer guarantees that a human actually reviewed the file.

Always use when: "approvals_resolved": this edge fires only after every pending approval on the node has been resolved.
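One way to picture the difference is as a predicate over the run state. The sketch below is a simplified model, not EulerAgent's actual condition language:

```python
def edge_fires(condition, run_state):
    """Simplified model of evaluating an edge condition after a node finishes."""
    if condition == "true":
        return True                                   # fires unconditionally
    if condition == "approvals_resolved":
        return not run_state["pending_approvals"]     # only once nothing is pending
    raise ValueError(f"unknown condition: {condition}")

# While the gate is paused, one file.write approval is still pending
paused = {"pending_approvals": [{"item": 1, "tool": "file.write"}]}
print(edge_fires("true", paused))                     # True: would bypass the gate
print(edge_fires("approvals_resolved", paused))       # False: waits for the human
print(edge_fires("approvals_resolved", {"pending_approvals": []}))  # True
```

With "true", the transition is valid even while the approval queue is non-empty; "approvals_resolved" ties the transition to the queue being drained.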

Order of File Editing and Approval

Approval flow:

  1. euleragent approve list -- review the proposed content
  2. (Optional) edit the file content (--content-file option)
  3. euleragent approve accept-all --execute -- save the actual file and continue

If you approve without editing, the content proposed by the LLM is saved as-is.


10. Exercise: Security Audit Pattern

Create a pattern that writes a security vulnerability report and requires mandatory review by a security officer.

Requirements

Hint: YAML Structure

id: security.audit_review
version: 1
category: ops
description: "Vulnerability report authoring + mandatory security-officer review pattern"

defaults:
  max_iterations: 1
  max_total_tool_calls: 20

nodes:
  - id: scan
    kind: llm
    runner:
      mode: execute
      # allow file reads only; no external access
      exclude_tools: [web.search, web.fetch, shell.exec]
    # ...

  - id: draft_report
    kind: llm
    # ...

  - id: security_review
    kind: llm
    runner:
      mode: plan
      force_tool: file.write
      max_loops: 1        # required!
    # ...

  - id: severity_judge
    kind: judge
    judge:
      schema: evaluator_v1
      route_values: [critical, high_medium, low]
    # ...

  - id: escalate
    kind: llm
    # ...

edges:
  - from: security_review
    to: severity_judge
    when: "approvals_resolved"    # required!

  - from: severity_judge
    to: escalate
    when: "judge.route == critical"

  - from: severity_judge
    to: finalize
    when: "judge.route == high_medium"

  - from: severity_judge
    to: finalize
    when: "judge.route == low"
  # ...

finalize:
  artifact: security_report.md

Validation:

euleragent pattern validate security_audit.yaml

Verify that no JUDGE_ROUTE_COVERAGE_ERROR is raised, and check that all route_values have corresponding edges.


Next Steps

You have implemented the human review gate pattern. Now we move on to more complex multi-route routing with the Judge.
