This guide covers advanced best practices for creating knowledge agents that are powerful, reliable, and delightful to use. These techniques come from real-world usage and help you avoid common pitfalls.
System instructions should be explicit and structured, not conversational.

Don’t:
You're really good at helping people and should try your best to be helpful and nice when they ask you things.
Do:
You are a research assistant. When users ask questions:
1. Search your knowledge base first
2. If information isn't found, use the Web Research tool
3. Cite sources in your responses
4. Ask clarifying questions for ambiguous requests
Why: AI models follow explicit instructions better than vague guidance.
Tell the AI exactly when and how to use each tool.

Vague (Bad):
You have access to several tools to help users.
Specific (Good):
Tools available:
- "Company Research" workflow: Use when users ask about specific companies
  Input: Company name
  Output: Company data, funding, employee count
- "LinkedIn Enrichment" workflow: Use after Company Research to get people
  Input: Company name from previous research
  Output: Key decision-makers and their profiles
- HubSpot integration: Use to save contacts
  Input: Person name, email, company
  Action: Creates contact in CRM

Workflow order: Research → Enrich → Save to CRM
Always get user approval before saving to HubSpot.
Why: Specificity reduces guesswork and increases reliability.
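One way to keep tool descriptions this specific without letting them drift out of sync is to store them as structured data and render the prompt section from it. The sketch below assumes a simple list-of-dicts registry and a hypothetical `render_tool_section` helper; it is not a platform API, just an illustration of the pattern.

```python
# Sketch: keep tool specs as structured data and render the system-prompt
# section from them, so the instructions stay consistent as tools change.
# The registry shape and helper are assumptions, not a real platform API.
TOOLS = [
    {"name": "Company Research", "when": "users ask about specific companies",
     "input": "Company name",
     "output": "Company data, funding, employee count"},
    {"name": "LinkedIn Enrichment", "when": "after Company Research, to get people",
     "input": "Company name from previous research",
     "output": "Key decision-makers and their profiles"},
]

def render_tool_section(tools):
    lines = ["Tools available:"]
    for t in tools:
        lines.append(f'- "{t["name"]}": Use {t["when"]}')
        lines.append(f'  Input: {t["input"]}')
        lines.append(f'  Output: {t["output"]}')
    return "\n".join(lines)

print(render_tool_section(TOOLS))
```

Editing one registry entry then updates the rendered instructions everywhere they are used.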
Telling the agent what NOT to do is as important as telling it what to do.

Good boundaries:
Do NOT:
- Make up information if you don't know something
- Claim certainty when data is uncertain
- Send emails without showing user the draft first
- Create CRM records without confirming details
- Provide financial or legal advice

If asked to do something outside your capabilities:
"I'm specialized in [your domain]. For [their request],
I recommend [alternative or human handoff]."
Why: Prevents the agent from hallucinating or overstepping.
Build your system instructions incrementally:

Day 1: Basic role
You are a marketing assistant that helps create campaigns.
Day 2: Add workflow
You are a marketing assistant. When users want campaigns:
1. Ask about goals and audience
2. Use "Competitor Research" workflow
3. Present findings and recommendations
Day 3: Add error handling

[Previous instructions]

If Competitor Research fails:
- Explain the error to the user
- Offer to proceed without competitor data
- Or suggest trying again later
Why: Gradual refinement based on real usage beats trying to write perfect prompts upfront.
For research requests:
1. Use Web Search to find companies
2. Use LinkedIn Enrichment to get key people
3. Use Company Analysis to assess fit
4. Use Google Docs to save findings
Sales Chain:
Identify → Research → Qualify → Enrich → Save
System instructions:
For lead generation:
1. Use Company Search to find prospects
2. Use Web Research to check recent news
3. Use Qualification Checklist to assess fit
4. Use LinkedIn Enrichment to find contacts
5. Use HubSpot integration to create deals
Why: Designed chains create reliable, repeatable workflows.
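A designed chain is essentially a pipeline where each step's output becomes the next step's input. The sketch below illustrates that shape with hypothetical stand-in functions for the workflows named above; the real tools would be remote calls, not local functions.

```python
# Sketch of a designed chain: each step consumes the previous step's output.
# The step functions are hypothetical stand-ins for the workflows above.
def company_search(query):        # stand-in for "Company Search"
    return {"company": query}

def web_research(data):           # stand-in for "Web Research"
    return {**data, "news": ["recent funding round"]}

def qualify(data):                # stand-in for "Qualification Checklist"
    return {**data, "qualified": True}

CHAIN = [company_search, web_research, qualify]

def run_chain(query):
    result = query
    for step in CHAIN:
        result = step(result)     # output of one step feeds the next
    return result

print(run_chain("Acme Corp"))
```

Because the order lives in one list, reordering or extending the chain is a one-line change.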
For sensitive or irreversible actions, add approval steps:

Pattern:
Tool call → Show results → Get approval → Execute action
Implementation:
System instructions:
"After using Company Research and Enrichment workflows, show the user:
'I've found [N] prospects:
[List with key details]
Should I create these contacts in HubSpot?'
Only proceed if user explicitly confirms."
Why: Prevents unintended actions and builds user trust.
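If you are orchestrating tool calls in your own code, the same human-in-the-loop pattern can be enforced programmatically. This is a minimal sketch, assuming a hypothetical `save_to_crm` action and a `confirm` callback that represents the user's answer; it is not a platform feature.

```python
# Minimal sketch of the human-in-the-loop pattern:
# tool call -> show results -> get approval -> execute.
# save_to_crm and the confirm callback are hypothetical stand-ins.
def save_to_crm(contacts):
    return f"Saved {len(contacts)} contacts"

def with_approval(results, action, confirm):
    """Show results, then run `action` only on explicit confirmation."""
    print(f"I've found {len(results)} prospects: {results}")
    if confirm() is True:      # require an explicit yes, never a default
        return action(results)
    return "Cancelled - nothing was saved."

# Usage: simulate a user who declines; no side effect happens.
print(with_approval(["Jane Doe"], save_to_crm, confirm=lambda: False))
```

The key design choice is that the default path does nothing: only an explicit "yes" triggers the irreversible action.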
System instructions:
"If a tool fails:
1. Explain what happened in plain language
2. Offer alternative approaches
3. Ask user what they'd prefer

Example:
'The LinkedIn Enrichment tool isn't responding (likely API rate limit).
I can:
- Continue with the data we have
- Use an alternative enrichment source
- Try again in a few minutes
What works best?'"
Why: Resilient agents maintain momentum even when tools fail.
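In code, the same resilience can be implemented as a fallback chain: try the primary tool, move to alternatives on failure, and collect plain-language error notes instead of surfacing a raw traceback. The tool functions below are hypothetical stand-ins, and the simulated failure mirrors the rate-limit example above.

```python
# Sketch: try the primary tool, fall back to alternatives, and keep
# plain-language error notes. Tool functions are hypothetical stand-ins.
def call_with_fallbacks(tools, query):
    errors = []
    for name, tool in tools:
        try:
            return {"source": name, "data": tool(query)}
        except Exception as exc:   # real code would catch narrower errors
            errors.append(f"{name} failed: {exc}")
    return {"source": None, "data": None, "errors": errors}

def linkedin_enrichment(q):
    raise TimeoutError("API rate limit")   # simulate the outage above

def knowledge_base(q):
    return f"cached profile for {q}"

result = call_with_fallbacks(
    [("LinkedIn Enrichment", linkedin_enrichment),
     ("Knowledge base", knowledge_base)], "TechCorp")
print(result)
```

The caller always gets a usable result object, so the agent can report which source succeeded or why all of them failed.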
Phase 1: Unit testing (Individual tools)

Test each tool separately:
- "Use Company Research to research Microsoft"
- "Use LinkedIn Enrichment for TechCorp"
- "Create a test contact in HubSpot"

Verify:
- Tool is called
- Returns expected data
- Agent presents results clearly
Phase 2: Integration testing (Tool combinations)
Test tool chains:
- "Research Company X and add to HubSpot"
- "Analyze data and save to Google Sheets"

Verify:
- Tools called in logical order
- Data flows between tools
- Final output is complete
Phase 3: User acceptance testing (Real scenarios)
Test like a real user would:
- Ask vague questions
- Change mind mid-conversation
- Request edge cases
- Try to break it

Verify:
- Agent asks clarifying questions
- Handles ambiguity
- Recovers from errors
- Boundaries are respected
Accuracy guidelines:
- Only use information from your knowledge base or tool results
- If you're not certain, say so explicitly
- Never make up data, statistics, or quotes
- Use phrases like "Based on my knowledge..." or "I don't have information about..."
- When making inferences, clearly mark them as such

If you don't know:
"I don't have that information in my knowledge base.
I can try to find it using [tool name] if you'd like."
Why: Explicit anti-hallucination instructions reduce confident but wrong answers.
Don’t overwhelm users with all capabilities at once:

Welcome message progression:
Basic welcome:
"Hi! I can help with [primary use case]. What would you like to do?"

After 1-2 successful interactions:
"By the way, I can also [secondary capability]. Interested?"

After they're comfortable:
Mention advanced features as relevant
Why: Gradual exposure improves onboarding and reduces cognitive load.
Bad pacing:

User: "Research Acme Corp"
Agent: [Immediately dumps 500 words of research]
Good pacing:
User: "Research Acme Corp"
Agent: "I'll research Acme Corp for you. One moment..."
[Calls tool]
Agent: "Found it! Acme Corp is a B2B SaaS company ($50M revenue, 200 employees).
Would you like the full analysis or specific aspects?"
User: "Full analysis"
Agent: [Now provides complete details]
Why: Pacing gives users control and prevents information overload.
1. Acknowledge the error
2. Explain what happened (simple terms)
3. Offer alternatives
4. Let user decide next step

Example:
"I tried to call the Company Research tool but it returned an error
(API rate limit). This means we've made too many requests recently.
I can:
1. Try a different research tool
2. Wait 2 minutes and retry
3. Continue without external research using my knowledge base
What would you prefer?"
Data privacy:
- Never ask users for passwords, credit cards, or SSNs
- If users share sensitive information, remind them:
  "Please don't share sensitive personal information in this chat. Conversations may be logged."
- Don't store sensitive data in variables or tool calls
For agents that call expensive or limited APIs:

System instructions:
Resource limits:
- Maximum 10 company researches per conversation
- After 10 researches:
  "We've hit the research limit for this conversation. Start a new chat to continue, or let me know if you want to analyze what we've found so far."
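Prompt text alone cannot guarantee a hard cap, so if you control the orchestration layer it is safer to also enforce the budget in code. This is a minimal sketch of that idea; the `ResearchBudget` class is a hypothetical helper mirroring the 10-call limit above, not a platform feature.

```python
# Sketch: enforce a per-conversation budget in code, not just prompt text.
# ResearchBudget is a hypothetical helper; the limit mirrors the example above.
class ResearchBudget:
    def __init__(self, limit=10):
        self.limit, self.used = limit, 0

    def try_spend(self):
        """Return True if another research call is still allowed."""
        if self.used >= self.limit:
            return False           # over budget: caller shows the limit message
        self.used += 1
        return True

budget = ResearchBudget(limit=10)
allowed = [budget.try_spend() for _ in range(12)]
print(allowed.count(True))   # only 10 of the 12 attempts go through
```

When `try_spend()` returns False, the agent delivers the "We've hit the research limit" message instead of calling the API.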
Spend 30 minutes weekly reviewing your agent:

What to check:
1. Recent conversations (sample 10-20)
   - Any errors or confusion?
   - New use cases emerging?
   - Tools working properly?
2. Knowledge base
   - Anything outdated?
   - Missing information users asked about?
3. System instructions
   - Any patterns the agent isn't following?
   - Need to add new guidance?
4. Tools
   - All integrations still connected?
   - Workflows completing successfully?
Problem: Agent writes paragraphs when users want quick answers

Solution:
Add to system instructions:
Response length:
- Default to concise responses (2-3 sentences)
- Only provide detailed explanations if:
  a) User asks for more detail
  b) Request requires comprehensive analysis
  c) Complex topic needs context
Pitfall: Agent doesn't use tools
Problem: You enabled tools but the agent just talks

Solution:
Check tools are actually enabled (Action Agents tab)
Add explicit tool instructions to system prompt
Test with direct requests: “Use [tool name] to…”
Verify tool names are clear
Pitfall: Knowledge retrieval isn't working
Problem: Agent doesn’t use uploaded knowledge

Solution:
Verify files finished processing
Ask directly: “What do you know about [topic from knowledge]?”
Check knowledge is well-structured with headings
Remove duplicate/conflicting content
Add to system instructions: “Always search knowledge base first”
Pitfall: Inconsistent behavior
Problem: Agent acts differently each time

Solution:
AI is probabilistic by nature (some variation is normal)
Reduce variation by being MORE specific in system instructions
Use examples to show exact format you want
Test the same query 5 times - if wildly different, prompt needs work
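The "run the same query 5 times" check can be made rough-and-repeatable with a small consistency probe. This sketch assumes a hypothetical `ask_agent` callable and uses a crude word-overlap (Jaccard) score as the similarity metric; real evaluations would use something stronger.

```python
# Sketch: a rough consistency probe - run one query several times and
# measure how much the answers differ. ask_agent is a hypothetical stub;
# similarity here is crude word-overlap (Jaccard), an assumption.
def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def consistency_score(ask_agent, query, runs=5):
    answers = [ask_agent(query) for _ in range(runs)]
    pairs = [(answers[i], answers[j])
             for i in range(runs) for j in range(i + 1, runs)]
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Usage with a deterministic stub: identical answers score 1.0.
score = consistency_score(lambda q: "Acme Corp is a B2B SaaS company",
                          "Research Acme Corp")
print(score)
```

A score near 1.0 means stable wording; if repeated runs score much lower, tighten the system instructions before blaming the model.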
Pitfall: Users confused about capabilities
Problem: Users ask for things the agent can’t do

Solution:
Improve welcome message clarity
Better sample questions showing what agent CAN do
Add to system instructions:
“If asked about [outside scope], say:
‘I specialize in [your domain]. For [their request], try [alternative].’”
System instructions:
"You are a Level 1 assistant. For simple requests, handle directly.
For complex requests involving [specific criteria]:
1. Gather initial information
2. Explain: 'This is a complex scenario. I recommend consulting [Knowledge Agent/Human] who specializes in [area].'
3. Offer to prepare a summary for handoff
4. Provide link to [specialized agent] if available

Complex scenarios include:
- [Criteria 1]
- [Criteria 2]
- [Criteria 3]"
System instructions:
"After each conversation:
1. Note what worked well
2. Note what the user asked for that you couldn't provide
3. Suggest improvements: 'I noticed you asked about [X]. While I can't help with that now, I've flagged it for my creator to add that capability.'
4. Keep a running list of feature requests in the conversation"
System instructions:
"You work iteratively with users to create [output].
Your process:
1. Understand requirements (ask questions)
2. Create initial draft (show user)
3. Get feedback (what to change)
4. Revise (incorporate feedback)
5. Repeat until user is satisfied
6. Finalize (execute output workflow)

Never deliver final output without at least 1 revision cycle.
Always show drafts before finalizing."
Use case: Content creation, design work, strategic planning
Remember: Building great knowledge agents is iterative. Start simple, launch quickly, learn from real usage, and continuously improve. The best agents evolve over time based on user feedback and measured outcomes.