TechSupport Agent — Step-by-Step Build Guide
Build a KB-powered support agent on Voiceflow. Experience how document structure affects answer quality.
Complete build sequence
Follow these phases in order.
Go to creator.voiceflow.com. On your dashboard, click + New Project. Configure as follows:
| Field | Value |
|---|---|
| Name | TechSupport Agent |
| Type | Webchat |
| Framework | Agentic |
| Objective | Resolution |
Click Start from scratch — do not click Generate project.
Go to Variables in the left sidebar. Click New variable and create:
| Variable name | Type | Default value | Description field |
|---|---|---|---|
| kb_mode | text | none | This variable tells the agent which Knowledge Base path the user chose for this session. |
Go to Knowledge Base in the left sidebar → + Add data source → File. Upload all six documents:
- Hardware_Troubleshooting.docx
- Software_Troubleshooting.docx
- Networking_Troubleshooting.docx
- Performance_Optimization_Troubleshooting.docx
- Data_management_troubleshooting_v2.docx
- Combined_Troubleshooting_Guide_V2.docx
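
If you would rather script the uploads than drag files into the dashboard, Voiceflow also offers a Knowledge Base upload API. The sketch below is a minimal example only: the endpoint path, API key placeholder, and request shape are assumptions based on the v1 docs/upload route, so check Voiceflow's current API reference before relying on it.

```python
# Minimal sketch: upload the six troubleshooting documents to the Voiceflow
# Knowledge Base via its upload API instead of the dashboard UI.
# Assumptions to verify against Voiceflow's API docs: the endpoint below
# and an API key with Knowledge Base access.
import requests

API_KEY = "YOUR_VOICEFLOW_API_KEY"  # placeholder
UPLOAD_URL = "https://api.voiceflow.com/v1/knowledge-base/docs/upload"  # assumed endpoint

DOCS = [
    "Hardware_Troubleshooting.docx",
    "Software_Troubleshooting.docx",
    "Networking_Troubleshooting.docx",
    "Performance_Optimization_Troubleshooting.docx",
    "Data_management_troubleshooting_v2.docx",
    "Combined_Troubleshooting_Guide_V2.docx",
]

for path in DOCS:
    with open(path, "rb") as f:
        resp = requests.post(
            UPLOAD_URL,
            headers={"Authorization": API_KEY},
            files={"file": (path, f)},
        )
    resp.raise_for_status()
    print(f"Uploaded {path}: {resp.json()}")
```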
In the Agent tab, find the Global Prompt section. Clear all default placeholder text — the entire section is one input area. Copy and paste the complete block below in one go:
In the left sidebar, find the Playbooks section → click + → Create new playbook.
Enter the Name and LLM Description:
Click the Create Playbook button. This opens the Playbook editor with the Instructions section.
Paste the following into the Instructions field:
Now click Exit Conditions (bottom of the Playbook editor) → click + to add an exit condition. Fill in the fields as follows:
| Field | Value to enter |
|---|---|
| Name | KB mode selected |
| LLM description | This condition is satisfied only when kb_mode has been set to “individual” or “combined” by the user’s explicit choice — not “none” or blank. |
Then click + Add required variable and select kb_mode. In the variable’s LLM description field enter:
When done, click the × in the top right to close the Playbook and return to the project.
In the left sidebar, find the Workflows section → click + → Create new workflow.
Enter the Name and LLM Description:
Click Create Workflow. This opens the canvas. Now build the following blocks in sequence:
Already present on the canvas automatically. No configuration needed.
From the left step menu, select and drag a Playbook step onto the canvas. Then connect Block 1 and Block 2 using an arrow connector — click the small circle on the bottom of Block 1 and drag it to Block 2. In the Playbook step, select KB Mode Selection.
From the left step menu, select and drag a Message step onto the canvas, then connect Block 2 to Block 3 using an arrow connector. In the Message step configuration, select Scripted mode (not Agentic). Paste this text:
From the left step menu, select and drag a Playbook step onto the canvas, then connect Block 3 to Block 4 using an arrow connector. Click Create new Playbook from within the step.
Enter the following Name and LLM Description for this new Playbook:
Paste the following into the Instructions field:
This Playbook has no exit conditions — it ends the session naturally when the user is done. Close the Playbook using the × in the top right.
From the left step menu, select and drag an End step onto the canvas, then connect Block 4 to Block 5 using an arrow connector.
In the left sidebar → Workflows → + → Create new workflow.
Enter the Name and LLM Description:
Click Create Workflow. Build the following blocks in sequence:
Already present on the canvas automatically.
Select and drag a Playbook step from the left menu, then connect Block 1 to Block 2 using an arrow connector. Select KB Mode Selection.
Select and drag a Message step, then connect Block 2 to Block 3 using an arrow connector. Select Scripted mode. Paste this text:
Select and drag a Playbook step, then connect Block 3 to Block 4 using an arrow connector. Click Create new Playbook.
Enter the following Name and LLM Description:
Paste the following into the Instructions field:
This Playbook has no exit conditions — it ends the session naturally. Close the Playbook using the × in the top right.
Select and drag an End step, then connect Block 4 to Block 5 using an arrow connector.
Go to the Agent tab → Instructions section. Replace all default placeholder text with:
Then in the Skills panel on the right side of the Agent tab, click + and attach both Workflows:
- Support — Individual Docs
- Support — Combined Doc
Use the Run button in the top right of the Voiceflow canvas (shortcut: Shift+R) to open the test panel. This resets the session state cleanly each time you use it.
Run the test case from the Test case section below, once choosing Mode A and once choosing Mode B. Observe the difference in the answers.
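
If you want to replay the comparison outside the canvas, one option is Voiceflow's Dialog Manager API. The sketch below is a rough outline rather than a drop-in script: the Dialog API key, user IDs, mode-selection messages, and test question are placeholders, and agentic projects may behave differently over this runtime, so verify it against Voiceflow's API documentation first.

```python
# Rough sketch: drive one test conversation per mode through Voiceflow's
# Dialog Manager API so the same question can be replayed against Mode A
# and Mode B. Assumptions to verify: the interact endpoint below, a Dialog
# API key, and that this project type accepts launch/text actions.
import requests

DM_API_KEY = "VF.DM.xxxxxxxx"  # placeholder Dialog API key
BASE_URL = "https://general-runtime.voiceflow.com"

def interact(user_id: str, action: dict) -> list:
    """Send one action to the Dialog Manager API and return the response traces."""
    resp = requests.post(
        f"{BASE_URL}/state/user/{user_id}/interact",
        headers={"Authorization": DM_API_KEY, "Content-Type": "application/json"},
        json={"action": action},
    )
    resp.raise_for_status()
    return resp.json()

def messages(traces: list) -> list:
    """Pull plain-text replies out of the returned traces."""
    return [t["payload"]["message"] for t in traces if t.get("type") == "text"]

# Fresh user IDs give each run a clean session, mirroring the Run panel reset.
for mode, user_id in [("Mode A", "test-mode-a-001"), ("Mode B", "test-mode-b-001")]:
    print(messages(interact(user_id, {"type": "launch"})))
    print(messages(interact(user_id, {"type": "text", "payload": mode})))  # placeholder mode choice
    print(messages(interact(user_id, {"type": "text", "payload": "<test case problem statement>"})))
```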
When ready to publish:
- Click Publish in the top right
- Go to Deploy → Web chat widget
- Customise the widget name and colours
- Copy the shareable link and share with your test group
Why does document structure matter?
This exercise demonstrates a real KB design decision that affects every support agent built on a knowledge base.
⚠ Mode A — Individual documents without contextual notes
Each document covers one domain only with no cross-references between them. When the agent searches the KB, it retrieves relevant chunks from whichever documents match the query — so a question spanning two domains will return results from both documents regardless of mode.
The LLM will reason across whatever it retrieves. If the documents contain no explicit cross-reference text, any connection the agent makes between categories comes from LLM inference — not from your curated KB content. This means the connection is probabilistic, inconsistent, and cannot be audited or controlled.
The limitation: Cross-category insights depend on LLM improvisation rather than documented guidance. The agent may make the right connection sometimes — but you cannot guarantee it, standardise it, or trace it back to a source.
✓ Mode B — Combined document with contextual notes
The combined guide contains explicit cross-reference notes at the start of each section. For example, the Hardware section says: “Refer to Performance Optimization for tips on improving device performance if hardware issues persist, and Software Troubleshooting for driver-related concerns.”
When the agent retrieves these notes as part of its KB search results, it surfaces them explicitly in its response — attributing the cross-category connection to the document rather than generating it from its own reasoning. The connection is now reliable, consistent, and traceable to a source you control.
The advantage: Cross-category reasoning comes from your curated KB content — not LLM improvisation. The agent’s behaviour is predictable, auditable, and will produce the same cross-reference every time the relevant combination of issues is raised.
Test case — run this on both modes
Use this exact problem statement to compare Mode A and Mode B responses. The difference will make the KB design lesson visible.
Mode A — Individual Docs
The agent will likely address both issues and may even suggest a restart as a common first step, because the LLM reasons across whatever it retrieves from the KB. However, any cross-category connection it makes is LLM inference, not documented guidance. It will not surface an explicit note explaining why the two problems may be connected at a system level.
Mode B — Combined Doc
The agent will surface the explicit contextual note from the combined guide: that slow device performance and network/VPN issues can be interconnected. This connection comes from your KB content — not LLM improvisation. The agent quotes the source, making the reasoning traceable and consistent across every session.
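
To see why the responses differ, here is a purely illustrative (non-Voiceflow) sketch of the context each mode hands to the LLM. The Mode A chunk texts are hypothetical placeholders; the Mode B note is the cross-reference example quoted earlier from the combined guide.

```python
# Illustrative only: a toy view of the retrieved context in each mode.
# Mode A chunk texts are hypothetical placeholders; the Mode B note is the
# example cross-reference quoted above from the combined guide.

mode_a_chunks = [
    # Retrieved from two separate documents; neither mentions the other domain.
    ("Performance_Optimization_Troubleshooting.docx", "<performance chunk text>"),
    ("Networking_Troubleshooting.docx", "<networking/VPN chunk text>"),
]

mode_b_chunks = [
    # Retrieved from the combined guide; the section-level note comes with it.
    ("Combined_Troubleshooting_Guide_V2.docx",
     "Refer to Performance Optimization for tips on improving device performance "
     "if hardware issues persist, and Software Troubleshooting for driver-related concerns."),
    ("Combined_Troubleshooting_Guide_V2.docx", "<performance + networking chunk text>"),
]

def build_context(chunks):
    """Join retrieved chunks into the context block the LLM actually sees."""
    return "\n\n".join(f"[{source}]\n{text}" for source, text in chunks)

# In Mode A the cross-domain link is absent from the context, so it can only
# come from LLM inference; in Mode B it is present and attributable to a source.
print(build_context(mode_a_chunks))
print(build_context(mode_b_chunks))
```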
Try it yourself — TechSupport Agent (live)
Select Mode A or Mode B at the start and ask the test question. Compare the two responses.
The agent will open below. Select Mode A or Mode B when prompted.
