CAISA · Session 3 & 4 · Build Exercise

TechSupport Agent — Step-by-Step Build Guide

Build a KB-powered support agent on Voiceflow. Experience how document structure affects answer quality.

Build complexity: Guided Assistant · Fixed paths · Agentic framework · Knowledge Base + Global Prompt + Workflows

Complete build sequence

Follow these phases in order.

0 Create the project Action

Go to creator.voiceflow.com. On your dashboard, click + New Project. Configure as follows:

  • Name: TechSupport Agent
  • Type: Webchat
  • Framework: Agentic
  • Objective: Resolution

Click Start from scratch — do not click Generate project.

1 Declare the session variable Config

Go to Variables in the left sidebar. Click New variable and create:

  • Variable name: kb_mode
  • Type: text
  • Default value: none
  • Description field: This variable tells the agent which Knowledge Base path the user chose for this session.
Enter the description text into the variable’s Description field when creating it in Voiceflow. This helps you and your team understand what the variable is for.
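The role kb_mode plays can be sketched in plain code. This is an illustrative model only; Voiceflow manages session state and routing internally, and none of the names below are Voiceflow APIs:

```python
# Illustrative sketch only. Voiceflow manages this state internally;
# route_workflow and the session dict are hypothetical, not Voiceflow APIs.
def route_workflow(session: dict) -> str:
    """Pick the support workflow based on the session's kb_mode variable."""
    kb_mode = session.get("kb_mode", "none")  # default declared in this step
    if kb_mode == "individual":
        return "Support — Individual Docs"
    if kb_mode == "combined":
        return "Support — Combined Doc"
    # "none" means the KB Mode Selection Playbook has not run yet
    return "KB Mode Selection"
```

The default of "none" is what makes the exit condition in step 4 meaningful: the agent can tell "not yet asked" apart from "user has chosen".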
2 Upload Knowledge Base documents Action

Go to Knowledge Base in the left sidebar → + Add data source → File. Upload all six documents:

  • Hardware_Troubleshooting.docx
  • Software_Troubleshooting.docx
  • Networking_Troubleshooting.docx
  • Performance_Optimization_Troubleshooting.docx
  • Data_management_troubleshooting_v2.docx
  • Combined_Troubleshooting_Guide_V2.docx
All six documents sit in the same KB. Which ones the agent searches is controlled by the instructions inside each Workflow — not by separate KB settings.
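The scoping described above can be modelled as a filter over retrieved chunks. This is a hedged sketch: the chunk format and function names are invented for illustration, and Voiceflow's actual retrieval pipeline is not exposed as code. The point is that scoping lives in the workflow instructions, not in per-document KB settings:

```python
# Illustrative sketch, not Voiceflow internals. All six documents live in
# one KB; scoping happens by instructing the agent which sources to use.
INDIVIDUAL_DOCS = {
    "Hardware_Troubleshooting",
    "Software_Troubleshooting",
    "Networking_Troubleshooting",
    "Performance_Optimization_Troubleshooting",
    "Data_management_troubleshooting_v2",
}
COMBINED_DOC = {"Combined_Troubleshooting_Guide_V2"}

def allowed_sources(kb_mode: str) -> set:
    """Which documents the workflow's instructions tell the agent to search."""
    return INDIVIDUAL_DOCS if kb_mode == "individual" else COMBINED_DOC

def scope_chunks(chunks: list, kb_mode: str) -> list:
    """Keep only retrieved chunks whose source document matches the mode."""
    sources = allowed_sources(kb_mode)
    return [c for c in chunks if c["source"] in sources]
```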
3 Write the Global Prompt Paste

In the Agent tab, find the Global Prompt section. Clear all default placeholder text — the entire section is one input area. Copy and paste the complete block below in one go:

# Personality
You are TechSupport Agent — an internal IT helpdesk assistant for end users at a technology organisation.
You help users diagnose and resolve issues with hardware, software, networking, performance, and data management.
You are calm, clear, and methodical. You give step-by-step guidance that a non-technical user can follow.

# Goal
Resolve the user’s technical issue using only the information in your Knowledge Base.
Do not use any knowledge outside the KB documents provided.
If the answer is not in your KB, say so clearly and suggest the user contacts IT support directly.

# Tone
Patient and professional. Use plain English — avoid jargon where possible.
Number your steps so users can follow along.
Keep responses focused — one problem at a time.

# Guardrails
Only answer from the Knowledge Base — never from general knowledge.
Do not make up steps, commands, or settings that are not in the KB.
If a question spans multiple categories, acknowledge the overlap and address each part.
Do not provide advice on topics outside IT troubleshooting.
4 Create the KB Mode Selection Playbook Paste

In the left sidebar, find the Playbooks section → click +Create new playbook.

Always create Playbooks from the Playbooks section in the left sidebar — never from the Agent tab Skills panel. Playbooks created from the Agent tab bypass Workflow exit condition logic.

Enter the Name and LLM Description:

Name
KB Mode Selection
LLM Description
Asks the user which Knowledge Base mode they want to use for this session — individual documents or the combined guide. Sets the kb_mode variable accordingly.

Click the Create Playbook button. This opens the Playbook editor with the Instructions section.

Paste the following into the Instructions field:

# Goal
Ask the user which KB mode they want to use and set {kb_mode} accordingly.

# Instructions
Send this exact message:
“Welcome to TechSupport Agent — powered by Benchmark X 360. Before we start, choose how you want me to look up answers:
Option A — I will search across five separate troubleshooting documents (Hardware, Software, Networking, Performance, Data Management). Each document covers its own area only.
Option B — I will search a single combined guide that includes cross-references between all five areas.
Which would you prefer — A or B?”
If the user chooses A: set {kb_mode} = “individual”
If the user chooses B: set {kb_mode} = “combined”
Confirm their choice with one sentence and wait for exit.

Now click Exit Conditions (bottom of the Playbook editor) → click + to add an exit condition. Fill in the fields as follows:

  • Name: KB mode selected
  • LLM description: This condition is satisfied only when kb_mode has been set to “individual” or “combined” by the user’s explicit choice — not “none” or blank.

Then click + Add required variable and select kb_mode. In the variable’s LLM description field enter:

Must be set to “individual” or “combined” — not “none” or blank. The user must have explicitly chosen their preferred KB mode before this condition triggers.

When done, click the × in the top right to close the Playbook and return to the project.
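The exit condition amounts to a simple predicate over kb_mode. As a sketch (Voiceflow evaluates this from your LLM description, not from code you write):

```python
# Illustrative sketch of the exit condition's logic. In Voiceflow the LLM
# checks this from the description text; there is no code to write.
def kb_mode_selected(kb_mode) -> bool:
    """Satisfied only when the user has explicitly chosen a mode."""
    return kb_mode in ("individual", "combined")
```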

5 Create Workflow A — Support with Individual Docs Paste

In the left sidebar, find the Workflows section → click +Create new workflow.

Enter the Name and LLM Description:

Name
Support — Individual Docs
LLM Description
Handles tech support queries using the five individual troubleshooting documents. Each document covers one area only with no cross-references. Run this when kb_mode = “individual”.

Click Create Workflow. This opens the canvas. Now build the following blocks in sequence:

Block 1 — Start trigger

Already present on the canvas automatically. No configuration needed.

Block 2 — Playbook step

From the left step menu, select and drag a Playbook step onto the canvas. Then connect Block 1 and Block 2 using an arrow connector — click the small circle on the bottom of Block 1 and drag it to Block 2. In the Playbook step, select KB Mode Selection.

Block 3 — Message step

Connect Block 2 to Block 3 using an arrow connector. From the left step menu, select and drag a Message step onto the canvas. In the Message step configuration, select Scripted mode (not Agentic). Paste this text:

I am now searching the five individual documents. What is your technical issue? Please describe it and I will look it up for you.
Block 4 — Playbook step (Resolve Issue — Individual)

Connect Block 3 to Block 4 using an arrow connector. From the left step menu, select and drag a Playbook step onto the canvas. Click Create new Playbook from within the step.

Enter the following Name and LLM Description for this new Playbook:

Name
Resolve Issue — Individual
LLM Description
Answers the user’s technical question by searching only the five individual troubleshooting documents. Does not cross-reference between categories.

Paste the following into the Instructions field:

# Goal
Answer the user’s technical question using only the five individual KB documents.

# Instructions
Listen to the user’s technical issue.
Use the Knowledge Base tool to search for a solution. Search only these documents:
– Hardware_Troubleshooting
– Software_Troubleshooting
– Networking_Troubleshooting
– Performance_Optimization_Troubleshooting
– Data_management_troubleshooting_v2
Provide a clear, numbered step-by-step answer.
If the answer is not found, say: “I could not find a solution in my current documents. Please contact your IT support team directly.”
Do NOT reference or suggest that other documents exist.
Do NOT draw on general knowledge outside the KB.
After answering, ask: “Does this resolve your issue, or do you have another question?”
Continue until the user confirms resolution or ends the session.

This Playbook has no exit conditions — it ends the session naturally when the user is done. Close the Playbook using the × in the top right.

Block 5 — End step

Connect Block 4 to Block 5 using an arrow connector. From the left step menu, select and drag an End step onto the canvas.

6 Create Workflow B — Support with Combined Doc Paste

In the left sidebar, go to Workflows → + Create new workflow.

Enter the Name and LLM Description:

Name
Support — Combined Doc
LLM Description
Handles tech support queries using the combined troubleshooting guide which includes contextual notes cross-referencing all five areas. Run this when kb_mode = “combined”.

Click Create Workflow. Build the following blocks in sequence:

Block 1 — Start trigger

Already present on the canvas automatically.

Block 2 — Playbook step

Connect Block 1 to Block 2 using an arrow connector. Select and drag a Playbook step from the left menu. Select KB Mode Selection.

Block 3 — Message step

Connect Block 2 to Block 3 using an arrow connector. Select and drag a Message step. Select Scripted mode. Paste this text:

I am now searching the combined guide with cross-references across all areas. What is your technical issue? Please describe it and I will look it up for you.
Block 4 — Playbook step (Resolve Issue — Combined)

Connect Block 3 to Block 4 using an arrow connector. Select and drag a Playbook step. Click Create new Playbook.

Enter the following Name and LLM Description:

Name
Resolve Issue — Combined
LLM Description
Answers the user’s technical question using the combined troubleshooting guide. Uses contextual notes to identify when a problem spans multiple categories and addresses all relevant areas.

Paste the following into the Instructions field:

# Goal
Answer the user’s technical question using the combined troubleshooting guide. Use contextual notes to identify when a problem spans multiple categories.

# Instructions
Listen to the user’s technical issue.
Use the Knowledge Base tool to search for a solution. Search only the Combined_Troubleshooting_Guide_V2 document.
Provide a clear, numbered step-by-step answer.
If the contextual notes in the document suggest the issue may relate to another category, mention this explicitly and address the related area as well.
If the answer is not found, say: “I could not find a solution in my current documents. Please contact your IT support team directly.”
After answering, ask: “Does this resolve your issue, or do you have another question?”
Continue until the user confirms resolution or ends the session.

This Playbook has no exit conditions — it ends the session naturally. Close the Playbook using the × in the top right.

Block 5 — End step

Connect Block 4 to Block 5 using an arrow connector. Select and drag an End step.

7 Wire the Agent Instructions and attach Workflows Paste

Go to the Agent tab → Instructions section. Replace all default placeholder text with:

# Starting Message
Do not send any opening message. Do not greet the user. Do not ask any question.
The KB Mode Selection Playbook inside the Workflow handles the first message.
Any message sent by the Agent before the Workflow runs creates a duplicate greeting.

# Skills
– Support — Individual Docs: run this workflow immediately when any conversation starts. This workflow handles everything including the KB mode question and the support response.
– Support — Combined Doc: run this workflow when kb_mode = “combined” has been confirmed by the user.
Never respond directly from the global prompt. Always route through a workflow first.

Then in the Skills panel on the right side of the Agent tab, click + and attach both Workflows:

  • Support — Individual Docs
  • Support — Combined Doc
Do NOT attach Playbooks to the Agent Skills panel. Playbooks should only be called from within Workflows — never directly from the Agent. Attaching a Playbook to the Agent Skills panel causes it to bypass Workflow exit condition logic and run autonomously.
Checklist before moving to testing: Global Prompt filled ✓ · Instructions filled ✓ · Both Workflows attached to Skills panel ✓. All three are required or the agent will not route correctly.
8 Test and publish Action

Use the Run button in the top right of the Voiceflow canvas (shortcut: Shift+R) to open the test panel. This resets the session state cleanly each time you use it.

Run the test case from the Test case section at the end of this guide — once choosing Mode A, once choosing Mode B. Observe the difference in the answers.

When ready to publish:

  1. Click Publish in the top right
  2. Go to Deploy → Web chat widget
  3. Customise the widget name and colours
  4. Copy the shareable link and share with your test group

Why does document structure matter?

This exercise demonstrates a real KB design decision that affects every support agent built on a knowledge base.

⚠ Mode A — Individual documents without contextual notes

Each document covers one domain only with no cross-references between them. When the agent searches the KB, it retrieves relevant chunks from whichever documents match the query — so a question spanning two domains will return results from both documents regardless of mode.

The LLM will reason across whatever it retrieves. If the documents contain no explicit cross-reference text, any connection the agent makes between categories comes from LLM inference — not from your curated KB content. This means the connection is probabilistic, inconsistent, and cannot be audited or controlled.

The limitation: Cross-category insights depend on LLM improvisation rather than documented guidance. The agent may make the right connection sometimes — but you cannot guarantee it, standardise it, or trace it back to a source.

✓ Mode B — Combined document with contextual notes

The combined guide contains explicit cross-reference notes at the start of each section. For example, the Hardware section says: “Refer to Performance Optimization for tips on improving device performance if hardware issues persist, and Software Troubleshooting for driver-related concerns.”

When the agent retrieves these notes as part of its KB search results, it surfaces them explicitly in its response — attributing the cross-category connection to the document rather than generating it from its own reasoning. The connection is now reliable, consistent, and traceable to a source you control.

The advantage: Cross-category reasoning comes from your curated KB content — not LLM improvisation. The agent’s behaviour is predictable, auditable, and will produce the same cross-reference every time the relevant combination of issues is raised.

The key insight for CAISA: You cannot suppress LLM reasoning with prompt instructions alone. The LLM will always reason across what it retrieves. What you can control is what the retrieval finds. Contextual notes in your KB give the LLM the right text to find and surface — making correct cross-category reasoning reliable rather than probabilistic. A well-designed KB does not just store answers. It guides the agent’s reasoning.
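The retrieval argument above can be made concrete with a toy sketch. The document strings and the keyword matcher are simplified stand-ins, not real KB content or Voiceflow internals; the point is that only the combined guide contains a contextual note for retrieval to find and put in front of the LLM:

```python
# Toy sketch: retrieval controls what the LLM sees. The KB strings below
# are simplified stand-ins for real document chunks.
KB = {
    "Hardware_Troubleshooting": "Restart the device. Check cables.",
    "Networking_Troubleshooting": "Restart the router. Re-enter VPN credentials.",
    "Combined_Troubleshooting_Guide_V2": (
        "Restart the device. Contextual note: slow device performance "
        "and network/VPN issues can be interconnected."
    ),
}

def retrieve(query_terms: list, sources: list) -> list:
    """Naive keyword retrieval: return chunks sharing a term with the query."""
    hits = []
    for name in sources:
        text = KB[name]
        if any(term.lower() in text.lower() for term in query_terms):
            hits.append(text)
    return hits

def has_documented_cross_reference(chunks: list) -> bool:
    """Only retrieval from the combined guide can surface an explicit note."""
    return any("Contextual note" in c for c in chunks)
```

With the individual documents as sources, the cross-reference test fails even when both domains are retrieved; with the combined guide it succeeds, because the connecting text exists in the KB rather than in the model's head.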
Where this fits on the Complexity Map: This build sits at Guided Assistant level — fixed paths, KB-powered, no integrations yet. Next week you will add ticket creation which moves the agent toward Smart Support level.

Test case — run this on both modes

Use this exact problem statement to compare Mode A and Mode B responses. The difference will make the KB design lesson visible.

Test problem — paste this into the agent
“My laptop is running very slowly and I also cannot connect to the company VPN. Both problems started at the same time this morning.”
Mode A — Individual Docs

The agent will likely address both issues and may even suggest restart as a common first step — because the LLM reasons across whatever it retrieves from the KB. However any cross-category connection it makes is LLM inference, not documented guidance. It will not surface an explicit note explaining why the two problems may be connected at a system level.

Mode B — Combined Doc

The agent will surface the explicit contextual note from the combined guide: that slow device performance and network/VPN issues can be interconnected. This connection comes from your KB content — not LLM improvisation. The agent quotes the source, making the reasoning traceable and consistent across every session.

What to look for: Both modes may address both problems. The difference is where the cross-category connection comes from. Mode B will include a line like: “The guide’s contextual notes indicate that slow device performance and network/VPN issues can be interconnected.” That sentence is retrieved from your document — not generated by the LLM. Mode A cannot produce that line because that text does not exist in the individual documents.
Discussion questions for your batch after the test: Both modes may reach a similar answer — so why does KB design still matter? What happens when the LLM makes the wrong cross-category connection because no document guided it? In a production support agent handling hundreds of issue types, which approach is more reliable, consistent, and auditable?

Try it yourself — TechSupport Agent (live)

Select Mode A or Mode B at the start and ask the test question. Compare the two responses.
