Airgentic Help
This module provides a deeper understanding of how agents and prompts work in Airgentic. Read this when you need to create specialist agents, customise agent behaviour, or understand how the AI decides what to say.
When a user asks a question, the Frontline reads it first and decides whether to answer it directly or hand it to a specialist. This routing happens automatically based on the role descriptions you write.
The Frontline is a special built-in agent: it receives every question, answers general queries itself, and delegates to a specialist when that specialist's role description matches. Most questions can be handled by the Frontline alone. Only create specialists when you need genuinely different behaviour.
Specialist agents handle specific question types:
| Example specialist | What it handles |
|---|---|
| Technical Support | Product troubleshooting, error messages, how-to questions |
| Sales | Pricing, quotes, product comparisons |
| Store Finder | Location-based searches, opening hours, directions |
| HR Policy | Staff questions about leave, benefits, procedures |
Each specialist has:
- Role description — Tells the Frontline when to delegate to this agent
- Prompt — Instructions for how this agent behaves
- Functions — Actions this agent can perform
- Search scope — Which content this agent can access
Create a specialist when:

| Scenario | Why a specialist helps |
|---|---|
| Different question types need different tones | Sales agent is promotional; support agent is calm and helpful |
| Different question types need different functions | Only product support can look up warranty; only sales can check inventory |
| You want to restrict content access | HR agent only sees HR policies, not public website content |
| You need distinct behaviour for distinct domains | Technical accuracy matters more for support; friendliness matters more for sales |
Stick with the Frontline when:

| Scenario | Why Frontline is sufficient |
|---|---|
| All questions can be handled similarly | One tone and approach works for everything |
| You just need different answers, not different behaviour | Use curated answers instead |
| The specialisation is minor | Prompt components can handle variations without separate agents |
Prompts are the instructions that tell an agent:
- Who it is (identity)
- How it should communicate (tone)
- What it should and shouldn't do (guardrails)
- How to structure responses (output format)
- How to use information (retrieval behaviour)
The agent follows these instructions when generating responses.
Airgentic uses modular prompts for maintainability:
Agent Prompts — The main prompt file assigned to an agent. Contains agent-specific instructions and references to shared components.
Prompt Components — Reusable building blocks that can be included in multiple agent prompts:
| Component | Typical contents |
|---|---|
| Identity | Organisation name, agent persona, what the service does |
| Tone | Communication style (formal, friendly, concise, etc.) |
| Guardrails | What the agent should never do, topics to avoid |
| Output Format | How to structure responses, citation style, length guidance |
Syntax for inclusion:
```
{{prompt:Component Name}}
```
When the agent runs, the component content is substituted inline.
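As a rough illustration, the substitution can be pictured as a simple template expansion. This is a hypothetical sketch, not Airgentic's implementation; the component names and contents below are invented for the example.

```python
import re

# Invented example components; in Airgentic these are the Prompt
# Components you create in the admin interface.
COMPONENTS = {
    "Tone": "Communicate in a friendly but professional manner.",
    "Guardrails": "Never provide medical advice.",
}

def render(prompt: str) -> str:
    """Replace each {{prompt:Name}} placeholder with the named
    component's content."""
    return re.sub(
        r"\{\{prompt:([^}]+)\}\}",
        lambda m: COMPONENTS[m.group(1)],
        prompt,
    )

agent_prompt = (
    "You are the support assistant.\n"
    "{{prompt:Tone}}\n"
    "{{prompt:Guardrails}}"
)
print(render(agent_prompt))
```

The benefit of this indirection is maintainability: editing one component updates every agent prompt that includes it.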
Too vague:

```
Be helpful and answer questions well.
```

Better:

```
When answering questions about our products, include the product name, key specifications, and a link to the product page. If you don't have enough information to fully answer, say so clearly and suggest the user contact support.
```
Organise prompts into clear sections:
```
## Identity
You are the customer support assistant for Acme Corp...

## Tone
Communicate in a friendly but professional manner...

## Guardrails
Never provide medical advice...
Never share internal processes...

## Response Format
Keep responses under 200 words unless detailed instructions are needed...
```
Tell the agent what not to do:
- Topics to decline or redirect
- Information it shouldn't share
- Actions it shouldn't take
Examples clarify expected behaviour:
```
When users ask about pricing:
- If asking about a specific product, provide the listed price and link
- If asking for a quote, explain they should contact sales
- Example: "Our Widget Pro is $99. For custom pricing on bulk orders, please contact sales@example.com."
```
An agent can be in one of three statuses:

| Status | Behaviour |
|---|---|
| Off | Agent is disabled, never used |
| Testing | Agent works in Admin Chat only; invisible to end users |
| Live | Agent is available to all users |
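A minimal sketch of what the three statuses imply for visibility, assuming the behaviour in the table above (the `is_available` helper is illustrative, not part of any Airgentic API):

```python
# Hypothetical sketch of status gating; the status names come from
# the table above, the helper itself is invented for illustration.

def is_available(status: str, admin_chat: bool) -> bool:
    """Return True if an agent with this status can answer in the
    given context."""
    if status == "Live":
        return True            # available to all users
    if status == "Testing":
        return admin_chat      # visible in Admin Chat only
    return False               # "Off": never used

print(is_available("Testing", admin_chat=True))   # True
print(is_available("Testing", admin_chat=False))  # False
```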
Best practice: Keep new agents in Testing until thoroughly verified, then move to Live.
The role description is critical for routing. Write it so the Frontline can correctly decide when to delegate:
Too vague:

```
Handles customer questions.
```

Better:

```
Handles technical questions about product installation, troubleshooting error codes, and resolving hardware malfunctions. Does not handle pricing, returns, or account questions.
```
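To see why precise role descriptions matter, here is a hypothetical sketch of delegation: a toy router that scores word overlap between the question and each role description. Airgentic's real routing is done by the model reading the descriptions, not by keyword matching, and the agent names and descriptions below are invented for the example.

```python
# Toy routing sketch, NOT Airgentic's implementation: the more
# specific a role description, the easier it is to match correctly.
SPECIALISTS = {
    "Technical Support": "troubleshooting error codes, installation, hardware malfunctions",
    "Sales": "pricing, quotes, product comparisons",
}

def route(question: str) -> str:
    """Pick the specialist whose role description best overlaps the
    question; fall back to the Frontline when nothing matches."""
    q_words = set(question.lower().split())
    best, best_score = "Frontline", 0
    for name, role in SPECIALISTS.items():
        score = len(q_words & set(role.lower().replace(",", "").split()))
        if score > best_score:
            best, best_score = name, score
    return best

print(route("I need a quote for bulk pricing"))  # "Sales"
print(route("What's the weather like today?"))   # "Frontline"
```

A vague description like "Handles customer questions." gives a router (human or model) nothing to match against, which is why the "Better" example above spells out what is in scope and what is not.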
Restrict an agent's search results using URL prefixes:
```
https://www.example.com/products/
https://www.example.com/support/
```
This agent will only see content from those URL paths.
Use search scope when:
- An agent should focus on specific content areas
- You want to prevent an agent from accessing certain information
- Different agents need access to different knowledge bases
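As a rough sketch of the idea, URL-prefix scoping amounts to keeping only search results whose URL starts with an allowed prefix. The filtering below is illustrative; Airgentic applies the scope during retrieval:

```python
# Hypothetical sketch of URL-prefix search scoping; the in_scope
# helper is invented for illustration.
SCOPE = (
    "https://www.example.com/products/",
    "https://www.example.com/support/",
)

def in_scope(results: list[str]) -> list[str]:
    """Keep only results whose URL starts with an allowed prefix."""
    return [url for url in results if url.startswith(SCOPE)]

hits = [
    "https://www.example.com/products/widget-pro",
    "https://www.example.com/blog/launch-news",
]
print(in_scope(hits))  # only the /products/ URL survives
```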
Admin Chat is essential for testing agents: it lets you talk to agents that are still in Testing and inspect the Trace Log for each response. The Trace Log shows exactly what happened:
| Section | What to check |
|---|---|
| Agent Delegation | Did the Frontline delegate to the right agent? |
| Expert Agent | Is the correct prompt being used? |
| Get Relevant Information | Are the right search results being found? |
Common issues to look for:
- Wrong agent handling the question → Adjust role descriptions
- Missing search results → Check search scope or content indexing
- Unexpected responses → Review the prompt instructions
Test with varied questions:
- Clear cases that should obviously go to your specialist
- Edge cases that could go either way
- Cases that should stay with Frontline
- Questions with no good answer in the knowledge base
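One way to keep such checks repeatable is a small test matrix covering all four categories. The harness below is a hypothetical sketch (`check_routing`, the agent names, and the questions are invented), shown here with a stub router:

```python
# Hypothetical routing test matrix; pair each question with the
# agent you expect to handle it, and "?" for genuine edge cases.
CASES = [
    ("My device shows error E42", "Technical Support"),  # clear case
    ("Can I return a faulty item I bought?", "?"),       # edge case
    ("What are your opening hours?", "Frontline"),       # stays with Frontline
    ("Do you sell spaceships?", "Frontline"),            # no good answer
]

def check_routing(route):
    """Run every case through a routing function and collect mismatches.
    Cases marked '?' are printed for human review instead of asserted."""
    failures = []
    for question, expected in CASES:
        actual = route(question)
        if expected == "?":
            print(f"review: {question!r} -> {actual}")
        elif actual != expected:
            failures.append((question, expected, actual))
    return failures

# A stub router that always answers "Frontline" passes the two
# Frontline cases and fails the Technical Support case:
print(check_routing(lambda q: "Frontline"))
```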
- If two agents have similar role descriptions, routing becomes unpredictable. Fix: Make role descriptions mutually exclusive or prioritise one over the other.
- Over-constraining the agent can make it unhelpful. Fix: Balance guardrails with flexibility. The agent should be safe but still useful.
- Under-constrained agents may say things you don't want. Fix: Add explicit boundaries for sensitive topics.
- The happy path works, but unusual questions cause problems. Fix: Test with adversarial and edge-case questions before going live.
Every prompt save creates a version:
- Timestamp and author recorded
- Commit message describes the change
- View and restore any previous version
Access: Edit Prompts → Select prompt → View History
If a change causes problems:
1. Open the prompt editor
2. Click View History
3. Select the previous working version
4. Click Restore This Version
5. Review and save