02. Plan Mode vs Execute Mode — When to Use Which
Learning Objectives
By the end of this tutorial, you will be able to:
- Explain the internal workings of Plan mode and Execute mode
- Run the same task in both modes and compare the results
- Understand what an agentic loop is and control it with `--max-loops`
- Enforce a minimum number of tool proposals with `--min-proposals`
- Customize output filenames with `--artifact-name`
- Specify search source groups with `--source-set`
- Select scoped LLM profiles with `--llm-plan` / `--llm-final`
- Determine which mode to choose in real-world scenarios
Prerequisites
- Workspace initialization complete (`euleragent init`)
- `code-agent` agent created: `euleragent new code-agent --template code-assistant`
What Are Plan Mode and Execute Mode?
euleragent offers two execution modes, distinguished by when human intervention occurs.
Plan Mode (Default)
In Plan mode, the agent proposes what actions to take and stops before actual execution.
[User] ──Task──▶ [LLM] ──Proposal──▶ [Approval Queue]
↓
[Human Reviews]
↓
[Executes After Approval]
This approach:
- Allows reviewing dangerous operations (file writes, web searches, shell execution) before they run
- Lets you understand the agent's intent and modify parameters
- Prevents mistakes before they happen
Execute Mode
In Execute mode, tools not on the require_approval list are executed immediately, while listed tools are queued for approval.
[User] ──Task──▶ [LLM] ──Proposal──▶ [Immediate Execution] (if not require_approval)
└──Proposal──▶ [Approval Queue] (if require_approval)
This approach:
- Enables fast processing of trusted, low-risk tasks
- Is ideal for read-only operations or already-validated pipelines
Step-by-Step Guide
Step 1: Run the Same Task in Both Modes
Run in Plan Mode
euleragent run code-agent \
  --task "Create a fibonacci.py file and write a Python function that returns the Fibonacci sequence" \
--mode plan
Expected output:
Run c3d4e5f6a1b2 started (agent: code-agent, mode: plan)
[loop 1/5] Generating plan...
→ Proposed: file.write (risk: medium)
path: fibonacci.py
content: "def fibonacci(n): ..."
Run c3d4e5f6a1b2 completed (state: PENDING_APPROVAL)
Artifacts:
.euleragent/runs/c3d4e5f6a1b2/artifacts/plan.md
1 approval(s) pending.
Use: euleragent approve list --run-id c3d4e5f6a1b2
The agent proposed `file.write`, but the file has not been created yet; `fibonacci.py` does not exist:
ls fibonacci.py # No such file or directory
Run in Execute Mode
This time, run with `--mode execute`. However, since `file.write` is in the `HIGH_RISK_TOOLS` set in `policy.py`, it still requires approval. Tools not in `HIGH_RISK_TOOLS`, such as `file.read`, are executed immediately.
euleragent run code-agent \
  --task "Create a fibonacci.py file and write a Python function that returns the Fibonacci sequence" \
--mode execute
Expected output:
Run d4e5f6a1b2c3 started (agent: code-agent, mode: execute)
[loop 1/5] Generating plan...
→ Proposed: file.write (risk: medium) — queued for approval
Run d4e5f6a1b2c3 completed (state: PENDING_APPROVAL)
1 approval(s) pending.
Why does Execute mode still require approval? Because `file.write` is in the `HIGH_RISK_TOOLS` set in `policy.py`. `resolve_tool_permission()` returns `"require_approval"` when a tool is in both the allowlist and `HIGH_RISK_TOOLS`. Execute mode only auto-executes low-risk tools that don't require approval; it does not auto-execute all tools.
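A minimal sketch of how this policy check could work, based on the behavior described above (the actual `policy.py` may differ; the allowlist argument and the exact set contents are illustrative assumptions):

```python
# Illustrative sketch of resolve_tool_permission(); not euleragent's actual code.
# The precedence shown here follows the text: deny if not allowlisted, queue
# for approval if allowlisted but high-risk, otherwise allow immediately.
HIGH_RISK_TOOLS = {"file.write", "web.search", "shell.exec"}

def resolve_tool_permission(tool_name: str, allowlist: set) -> str:
    """Return "allow", "require_approval", or "deny" for a proposed tool call."""
    if tool_name not in allowlist:
        return "deny"              # not allowlisted: blocked outright
    if tool_name in HIGH_RISK_TOOLS:
        return "require_approval"  # allowlisted but high-risk: goes to the queue
    return "allow"                 # allowlisted and low-risk: executes now
```

This is why `--mode execute` still queues `file.write`: execute mode changes what happens to `"allow"` results, not to `"require_approval"` ones.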
Key Differences Between the Two Modes
| Aspect | Plan Mode | Execute Mode |
|---|---|---|
| Tool proposals | Proposes then stops | Executes immediately (when possible) |
| `require_approval` tools | Adds to approval queue and stops | Adds to approval queue and continues |
| Artifacts | `plan.md` (proposal content) | `result.md` (execution results) |
| Execution speed | Always waits for approval | Low-risk tools processed immediately |
| Recommended use | Risky operations, initial development | Trusted pipelines, read-only tasks |
Step 2: Agentic Loop and --max-loops
The agentic loop is the process where the LLM iteratively uses tools until the task is complete:
Loop starts
↓
LLM decides next action (tool call or text response)
↓
If tool call → add to approval queue or execute immediately
↓
Add result to context
↓
Task complete? → If not, repeat loop
The default maximum number of loops is 5. You can adjust this for complex or simple tasks:
# Simple task: limit to 1 loop
euleragent run code-agent \
  --task "Write code that prints Hello, World! to hello_world.py" \
--mode plan \
--max-loops 1
# Complex task: allow 10 loops
euleragent run code-agent \
  --task "Analyze the entire project codebase, identify improvements, and write a report" \
--mode plan \
--max-loops 10
When to use --max-loops:
- Decrease: When the agent is attempting too many tool calls or at risk of entering an infinite loop
- Increase: For complex research or analysis tasks that require multiple steps
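The loop diagram above can be sketched as code. This is a simplified model under stated assumptions (the callable interfaces, state strings, and termination rule are illustrative; euleragent's real loop also handles approval queues):

```python
# Minimal sketch of an agentic loop with a --max-loops cap.
from typing import Callable, Optional

def run_agent_loop(decide_next: Callable[[list], Optional[dict]],
                   execute: Callable[[dict], str],
                   max_loops: int = 5):
    """Repeat decide -> execute -> record-result until done or the cap is hit."""
    context = []
    for loop in range(1, max_loops + 1):
        action = decide_next(context)          # LLM picks a tool call, or None when done
        if action is None:
            return "RUN_FINALIZED", context    # task complete within the loop budget
        result = execute(action)               # run (or queue) the proposed tool
        context.append(result)                 # feed the result back into context
    return "MAX_LOOPS_REACHED", context        # warn: task may be incomplete
```

With a stub LLM that proposes two tool calls and then finishes, the loop finalizes after two iterations; with one that never finishes, it stops at `max_loops` in the `MAX_LOOPS_REACHED` state.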
Step 3: Enforce Batch Collection with --min-proposals
`--min-proposals N` prevents the agent from terminating until it has proposed at least N tool calls. This is useful when you need batched information gathering.
euleragent run code-agent \
  --task "Read the project's Python files and summarize each file's main functionality" \
--mode plan \
--min-proposals 3 \
--max-loops 5
Expected output:
Run e5f6a1b2c3d4 started (agent: code-agent, mode: plan)
[loop 1/5] Generating plan...
→ Proposed: file.read (path: main.py)
[loop 2/5] Continuing... (proposals so far: 1, min required: 3)
→ Proposed: file.read (path: utils.py)
[loop 3/5] Continuing... (proposals so far: 2, min required: 3)
→ Proposed: file.read (path: config.py)
[loop 4/5] Continuing... (proposals so far: 3, min required: 3)
→ Min proposals reached. Generating summary...
Run e5f6a1b2c3d4 completed (state: PENDING_APPROVAL)
3 approval(s) pending.
Tip: `--min-proposals` is most effective when used with `--mode plan`. It forces the agent to gather sufficient information, after which the human can review and approve all proposals at once.
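The interaction between the two flags in the transcript above can be expressed as a small continuation check. This is an assumed simplification: the real agent may also keep looping for other reasons (e.g., the task itself is unfinished), but the text's rule that `--max-loops` always wins is preserved:

```python
# Sketch of the --min-proposals continuation rule (assumed behavior).
def should_continue(proposals_so_far: int, loop: int,
                    min_proposals: int, max_loops: int) -> bool:
    """Keep looping until the proposal floor is met, unless the loop cap is hit."""
    if loop >= max_loops:
        return False                          # the hard cap always wins
    return proposals_so_far < min_proposals   # keep gathering until the floor is met
```

This matches the transcript: with `--min-proposals 3`, loops 2 and 3 print "Continuing..." because the floor is unmet, and loop 4 moves on to the summary once three proposals exist.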
Step 4: Specify Output Filename with --artifact-name
By default, the artifact filename is `plan.md` or `result.md` depending on the mode. You can customize this with `--artifact-name`:
euleragent run code-agent \
  --task "Create fibonacci.py and implement a Fibonacci function" \
--mode execute \
--artifact-name fibonacci_report.md
Expected output:
Run f6a1b2c3d4e5 completed (state: RUN_FINALIZED)
Artifacts:
.euleragent/runs/f6a1b2c3d4e5/artifacts/fibonacci_report.md
--artifact-name is useful in the following situations:
- Distinguishing results from multiple runs with meaningful names
- When predictable filenames are needed in automated pipelines
- When collecting artifacts in CI/CD systems
Step 4-1: Specify Search Source Groups with --source-set
`--source-set` specifies the search source group that the `web.search` tool will use. By grouping sources defined in `mcp.sources` in `workspace.yaml`, you can apply different search strategies per task:
euleragent run code-agent \
  --task "Research the latest Python package trends" \
--mode plan \
--source-set research
When `--source-set` is specified, the internal SearchRouter selects the optimal source from the candidate sources in that group (e.g., `tavily`, `brave`, `local_kb`) and routes the `web.search` call accordingly. If no source set is specified, the default source group is used.
Note: `web.search` is internally routed through the SearchRouter. The user-facing interface remains the same, but behind the scenes, candidate sources defined in `mcp.sources` are evaluated and the optimal source is selected. Source activation may require HITL approval; see 03_hitl_approval.md for details.
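A hedged sketch of the routing idea: pick the first available candidate from the requested set, falling back to the default set for unknown names. The data shapes, "first available wins" rule, and set names here are illustrative assumptions, not euleragent's actual SearchRouter:

```python
# Illustrative source-set routing; the real SearchRouter's selection logic
# (scoring, health checks, etc.) is not shown in this tutorial.
def route_search(source_sets: dict, set_name: str, available: set,
                 default_set: str = "default") -> str:
    """Pick a search source from the named set, preferring earlier candidates."""
    candidates = source_sets.get(set_name) or source_sets[default_set]
    for source in candidates:              # candidates are in preference order
        if source in available:
            return source
    raise RuntimeError(f"no available search source in set {set_name!r}")
```

For example, a `research` set listing `tavily`, `brave`, `local_kb` would route to `brave` when `tavily` is down but `brave` is reachable.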
Step 4-2: Specify Scoped LLM Profiles with --llm-plan / --llm-final
You can use different LLMs for the planning stage and the final output generation stage of a task:
# Plan with local LLM, generate final output with external LLM
euleragent run code-agent \
  --task "Design and write a Python utility" \
--mode execute \
--llm-plan ollama_local \
--llm-final openai_main
Profiles marked with `is_external: true` in `workspace.yaml` require HITL approval:
llm_profiles:
  ollama_local:
    provider: ollama
    base_url: http://localhost:11434
    model: qwen3:32b
    is_external: false  # Local → no approval needed
  openai_main:
    provider: openai
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o-mini
    base_url: https://api.openai.com/v1
    is_external: true  # External → approval required
default_llm_profile: ollama_local
Fallback behavior: If the external profile has not yet been approved, the run falls back to the local default provider instead of being interrupted. A `kind: llm_profile_enable` approval record is created, and after approval the external profile takes effect from the next run. For a detailed walkthrough of the complete fallback-to-approval-to-rerun cycle, see 09_scoped_llm_profile.md.
# Check pending approvals
euleragent approve list --tool llm.external_call
# Accept individually
euleragent approve accept <approval-id> --actor "user:you"
# Batch accept
euleragent approve accept-all --actor "user:you" --tool llm.external_call
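The fallback rule above can be sketched as a small selection function. The profile dictionary shape and return convention are assumptions for illustration; only the rule itself (unapproved external profile falls back to the local default and triggers an approval record) comes from the text:

```python
# Sketch of scoped-profile selection with external-approval fallback.
def select_profile(requested: str, profiles: dict, approved: set,
                   default_profile: str):
    """Return (profile_to_use, needs_approval_record)."""
    profile = profiles[requested]
    if profile.get("is_external") and requested not in approved:
        # External and not yet approved: fall back to the local default
        # and signal that a llm_profile_enable approval should be recorded.
        return default_profile, True
    return requested, False
```

So a run requesting `--llm-final openai_main` before approval still completes on `ollama_local`, and the same flag takes effect on the next run once approved.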
Step 5: Full Scenario — Code Assistant in Practice
Experience both modes through a scenario similar to real-world work.
Scenario: Writing a Python Utility Script
Step 5-1: Review Draft in Plan Mode
euleragent run code-agent \
  --task "Create a data_processor.py file. It is a Python script that reads a CSV file, computes the sum of each row, and saves the results to output.csv. Use pandas and include type hints." \
--mode plan \
--max-loops 2
Expected output:
Run a1b2c3d4e5f6 started (agent: code-agent, mode: plan)
[loop 1/2] Generating plan...
→ Proposed: file.write (risk: medium)
path: data_processor.py
content: [python code with pandas...]
Run a1b2c3d4e5f6 completed (state: PENDING_APPROVAL)
1 approval(s) pending.
Review the proposed code first:
euleragent approve show apv_p1q2r3
{
"id": "apv_p1q2r3",
"tool_name": "file.write",
"params": {
"path": "data_processor.py",
"content": "import pandas as pd\nfrom pathlib import Path\nfrom typing import Optional\n\ndef process_csv(input_path: str, output_path: str = 'output.csv') -> None:\n ..."
},
"risk_level": "medium",
"status": "pending"
}
If the code looks good, approve and execute:
euleragent approve accept apv_p1q2r3 --actor "user:you" --execute
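For reference, a script matching this task description might look like the following sketch. This is not the agent's actual proposal (the JSON above truncates the real `content`); it is just one plausible implementation of "row sums with pandas and type hints":

```python
# Hypothetical data_processor.py matching the task description; the agent's
# actual proposed code will differ.
import pandas as pd

def process_csv(input_path: str, output_path: str = "output.csv") -> pd.DataFrame:
    """Read a CSV, add a row_sum column over its numeric columns, and save it."""
    df = pd.read_csv(input_path)
    df["row_sum"] = df.select_dtypes("number").sum(axis=1)  # per-row numeric sum
    df.to_csv(output_path, index=False)
    return df
```

Reviewing the proposal with `euleragent approve show` lets you check for exactly this kind of detail (column handling, output format) before approving the write.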
Step 5-2: Follow-up Work in Execute Mode
After the file is created, quickly handle a simple test file creation in execute mode:
euleragent run code-agent \
  --task "Create a sample test_data.csv file. Include 5 rows with name, score1, and score2 columns." \
--mode execute \
--artifact-name test_setup_report.md
Creating test_data.csv also uses `file.write`, so approval is still required. However, execute mode does not stop the loop at the queued proposal; it continues to the next tool call, so the run completes faster.
When Should You Use Which Mode?
| Situation | Recommended Mode | Reason |
|---|---|---|
| Unfamiliar task, uncertain outcome | Plan | Verify agent intent first |
| Modifying production files | Plan | Difficult to recover from mistakes |
| External service calls (web, email) | Plan | Prevent unintended external transmissions |
| Read-only analysis | Execute | Can be processed immediately |
| Already-validated pipeline | Execute | Speed is the priority |
| Initial development/prototyping | Plan | Learn agent behavior |
| Batch information gathering | Plan + `--min-proposals` | Ensure sufficient data collection |
| CI/CD automation | Execute + managed allowlists | Automation requirements |
Hybrid Approach
In practice, combining both modes is the safest approach:
- Research phase: `--mode plan --min-proposals 5` to generate sufficient information-gathering proposals
- Review phase: `euleragent approve list` → `euleragent approve show` to review each proposal
- Execution phase: `euleragent approve accept-all --actor "user:you" --execute` to batch-execute after approval
- Audit phase: `euleragent logs <run-id>` to audit the execution history
Expected Output Summary
# Plan mode — with file write proposal
Run a1b2c3... completed (state: PENDING_APPROVAL)
1 approval(s) pending.
# Execute mode — with immediately executable tools
Run b2c3d4... completed (state: RUN_FINALIZED)
[executed] file.read — OK
[queued] file.write — pending approval
# When max-loops exceeded
Run c3d4e5... completed (state: RUN_FINALIZED)
[warn] Max loops (3) reached. Task may be incomplete.
# When min-proposals satisfied
Run d4e5f6... completed (state: PENDING_APPROVAL)
[info] min-proposals (3) satisfied after loop 3.
3 approval(s) pending.
FAQ / Common Errors
Q: I'm in execute mode, so why is it still requesting approval?
Tools in the `HIGH_RISK_TOOLS` set in `policy.py`, such as `file.write`, `web.search`, and `shell.exec`, always require approval even in execute mode. Execute mode immediately executes only non-high-risk tools (e.g., `file.read`, `git.diff`). The approval decision is based on the return value of `resolve_tool_permission()`: `"allow"` → immediate execution, `"require_approval"` → approval required, `"deny"` → blocked.
Q: What happens when --max-loops is exceeded?
The agent terminates without completing the task. Artifacts are still generated but may be incomplete. A warning message is displayed. If more loops are needed, increase --max-loops.
# Check for insufficient loop symptoms
euleragent logs <run-id>
# [warn] Max loops (5) reached. Task may be incomplete.
Q: What happens if --min-proposals is set but the agent doesn't propose enough tools?
The agent terminates when --max-loops is reached. In this case, either increase --max-loops or make the task description more specific. For example, explicitly specifying the number of tool calls like "read and analyze each of the 5 files" is effective.
Q: What is the difference between plan.md and result.md?
- `plan.md`: Result of a Plan mode run. Contains the agent's proposed actions and explanations, not actual execution results.
- `result.md`: Result of an Execute mode run. Contains the actual work performed and its results.
- If `--artifact-name` is specified, the output is saved with that name regardless of mode.
Q: What is the default when --mode is not specified?
The default is plan. euleragent follows a safety-first principle and by default requires human review before execution.
Common Mistakes (Out-of-Order Steps)
| Symptom | Cause | Fix |
|---|---|---|
| `Error: No task provided.` | Only `--resume-run` specified, `--task` missing | Previous run's task is auto-loaded |
| `Info: --max-loops auto-raised from 2 to 7` | `--min-proposals` is greater than `--max-loops` | Normal; auto-adjustment notification |
| Plan result is placeholder text | FakeProvider is in use | Change `default_llm_profile` in workspace.yaml to an actual LLM |
Next step: 03_hitl_approval.md — Dive deep into the HITL approval workflow.