
This is Part 1 of our 10-part "AI Adoption Journey" series, where we share what we have genuinely built, deployed, and learned about AI agents in our own operations.
Every software company knows the pain. A support ticket arrives: "The report isn't showing data for the last three days." Simple enough on the surface. But to investigate it, someone needs to check the application logs, query the database, review recent code changes, verify API endpoints, check scheduled jobs, and cross-reference configuration files. Two hours later, you've found the root cause -- a cron job silently failed after a dependency update.
We lived this reality running our own SaaS products. According to industry data, the average IT incident takes 4-6 hours to resolve through manual investigation, and Gartner estimates enterprise downtime costs US$5,600 per minute. For a small Australian software company, that translates to burning through your most expensive resource -- senior developer time -- on detective work instead of building product.
So we did something about it. Not a chatbot. Not a ticketing system with better routing. An actual AI agent that investigates support tickets the way a senior engineer would -- except it does it in minutes instead of hours.
The Core Problem: According to Master of Code (2026), 51% of organisations now have AI agents running in production, yet most are limited to classification and routing. Genuine investigation -- the kind that requires reading logs, querying databases, and connecting evidence -- remains overwhelmingly manual.
We deployed an internal AI agent we call the "Solve8 BMS Agent" for our own SaaS support operations. It lives inside Microsoft Teams -- the tool our team already uses every day -- and it is powered by Claude AI as the reasoning backbone.
Here is what happens when a support ticket comes in:
The agent does not just classify the ticket or suggest a knowledge base article. It actively investigates. When someone reports "data not loading," the agent:

- pulls the relevant application logs around the reported time window
- runs read-only queries against the database to confirm what data is actually there
- reviews recent code changes and dependency updates
- checks scheduled jobs and configuration files for silent failures
- compiles the evidence into a structured investigation report with a probable root cause
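As a minimal sketch of this investigation loop (all function names and findings here are illustrative, not our production code), each check contributes evidence to a single investigation record:

```python
# Illustrative sketch: each check returns evidence dicts that are
# collected into one investigation record for the ticket.

def check_logs(ticket):
    # e.g. grep error-level entries around the reported time window
    return [{"source": "logs", "finding": "no errors in app log"}]

def check_database(ticket):
    # read-only query for row counts per day
    return [{"source": "db", "finding": "no rows inserted for 3 days"}]

def check_recent_deploys(ticket):
    return [{"source": "vcs", "finding": "dependency bump 4 days ago"}]

def check_scheduled_jobs(ticket):
    return [{"source": "cron", "finding": "import job exited non-zero"}]

def investigate(ticket: dict) -> dict:
    """Run the fixed sequence of checks a senior engineer would run."""
    evidence = []
    for check in (check_logs, check_database,
                  check_recent_deploys, check_scheduled_jobs):
        evidence.extend(check(ticket))
    return {"ticket_id": ticket["id"], "evidence": evidence}

report = investigate({"id": "T-1042"})
```

The point of the fixed sequence is that every ticket gets every check, which is what makes the evidence completeness in the results table below possible.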
The critical design decision: human-in-the-loop. The agent recommends actions, but a human engineer reviews and approves before any fix is applied. This is not optional for us -- it is a foundational architectural principle. According to KPMG's 2026 AI Pulse report, 75% of enterprise leaders cite security, compliance, and auditability as the most critical requirements for AI agent deployment.
One of the biggest lessons we learned early: if the AI does not live where your team already works, nobody uses it.
We deliberately built the agent as a Microsoft Teams bot rather than a standalone dashboard or separate application. The reasoning is straightforward: there is no new tool to learn, no extra login, and no separate tab to remember to check. Engineers ask questions in the channel where the ticket is already being discussed, and the agent's findings land in the same conversation.
This aligns with the broader industry shift. Gartner forecasts that 40% of enterprise applications will embed task-specific AI agents by 2026, up from less than 5% in 2025. The pattern is clear: AI that integrates into existing workflows wins over AI that requires new workflows.
We have been running this agent in production for our own support operations. Here is what the numbers look like.
| Metric | Manual Investigation | With AI Agent | Improvement |
|---|---|---|---|
| Average investigation time | 2-4 hours | 8-15 minutes | 90%+ |
| Evidence completeness | Depends on engineer | Systematic every time | Consistent |
| Root cause identification | Often requires escalation | First-pass accuracy ~85% | Fewer escalations |
| Investigation documentation | Varies (often minimal) | Full structured report | 100% documented |
| Knowledge retention | In engineer's head | Stored and searchable | Institutional memory |
| After-hours capability | Wait until morning | Immediate investigation | 24/7 coverage |
The time reduction is the headline number, but it is not the most valuable outcome. The real gains are:
Consistency. A senior engineer having a good day might investigate thoroughly. A junior engineer on a Friday afternoon might miss things. The AI agent follows the same rigorous process every single time.
Documentation. Every investigation produces a structured report with evidence. No more "I fixed it but I don't remember what I did." This matters enormously for compliance and knowledge transfer.
Learning loop. The agent stores investigation history. When a similar issue appears, it draws on prior investigations to resolve faster. This compounds over time -- the agent gets more valuable the longer it runs.
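The learning loop can be sketched with a naive keyword overlap (a real system would use embeddings or full-text indexing; the stored incident below is an invented example):

```python
# Minimal sketch of stored, searchable investigation history.
history = []

def store(symptoms: str, root_cause: str, resolution: str) -> None:
    history.append({"symptoms": symptoms, "root_cause": root_cause,
                    "resolution": resolution})

def find_similar(symptoms: str) -> list:
    """Rank past investigations by shared keywords with the new symptoms."""
    words = set(symptoms.lower().split())
    scored = [(len(words & set(h["symptoms"].lower().split())), h)
              for h in history]
    return [h for score, h in sorted(scored, key=lambda s: -s[0]) if score > 0]

store("report missing data last three days",
      "cron job failed after dependency update",
      "pin dependency and restart job")

matches = find_similar("report shows no data since Friday")
```

Even this crude matching captures the compounding effect: the next "report shows no data" ticket starts from the previous root cause instead of from zero.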
Let us put real numbers on this. These calculations are based on our own experience combined with industry benchmarks.
These numbers scale with team size and ticket volume. For a mid-market software company processing 30-50 tickets per week, the savings multiply accordingly. Industry data from Master of Code (2026) indicates companies report average returns of 171% on AI agent investments, with 74% achieving ROI within the first year.
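As a back-of-envelope illustration of how the savings compound (every figure below is a hypothetical assumption, not a measured result):

```python
# Hypothetical savings model -- all inputs are illustrative assumptions.
tickets_per_week = 10
manual_hours     = 3.0    # midpoint of the 2-4 hour manual range above
agent_hours      = 0.25   # ~15 minutes including human review
hourly_rate_aud  = 150    # assumed senior-developer rate

weekly_saving = tickets_per_week * (manual_hours - agent_hours) * hourly_rate_aud
annual_saving = weekly_saving * 48   # working weeks per year

print(f"~AUD {annual_saving:,.0f} per year")
```

Swap in your own ticket volume and rates; the structure of the arithmetic is the point, not the specific numbers.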
Building an AI agent that actually works in production taught us things that no vendor whitepaper will tell you. Here are the five most important lessons.
Our first instinct was to build an agent that fixes issues automatically. That was wrong. The investigation step -- understanding what happened and why -- is where 80% of the time goes. Resolution is usually straightforward once you know the root cause.
By focusing on investigation first, we built trust with the engineering team. They could see the agent's reasoning, verify its evidence, and learn from its approach. Jumping straight to automated resolution would have created anxiety and resistance.
We experimented with a form-based interface initially. Fill in the ticket number, select the system, choose a severity. Nobody used it. The moment we switched to a natural language conversational interface in Teams, adoption went from grudging to enthusiastic.
Engineers want to type "Why is the reporting dashboard showing stale data since Tuesday?" not fill in a form. The AI should meet humans where they communicate naturally.
Some vendors position full automation as the goal. We disagree strongly. For support investigations, the human review step catches the cases where the agent's conclusion is plausible but wrong, keeps accountability with a named engineer, and produces the security, compliance, and auditability trail that enterprise leaders consistently rank as critical.
As multi-agent AI systems become more capable, the human-in-the-loop principle becomes more important, not less.
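A minimal sketch of the approval gate (queue names, fields, and the engineer name are illustrative): recommendations queue up, and nothing is applied until a named engineer signs off, which also yields the audit trail.

```python
# Sketch: recommendations are queued, and applying one requires an
# explicit human approval that is recorded in an audit log.
pending = []
audit_log = []

def recommend(action: str, evidence: str) -> int:
    """Queue a recommended fix; returns its index in the pending queue."""
    pending.append({"action": action, "evidence": evidence, "approved": False})
    return len(pending) - 1

def approve_and_apply(idx: int, engineer: str) -> str:
    """A human approves; only then is the action applied, and it is logged."""
    rec = pending[idx]
    rec["approved"] = True
    audit_log.append({"action": rec["action"], "approved_by": engineer})
    return f"applied: {rec['action']}"

rid = recommend("restart nightly import job",
                "cron exited non-zero after dependency bump")
result = approve_and_apply(rid, "alice")
```

The design choice is that approval is a hard gate in the code path, not a notification that can be ignored.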
Every investigation the agent performs gets stored with full context: the symptoms, the evidence gathered, the root cause found, and the resolution applied. After a few months of operation, this becomes an extraordinarily valuable knowledge base.
New team members can search past investigations. The agent itself draws on this history to resolve recurring issues faster. And when you need to demonstrate compliance or audit your support processes, every step is documented.
We did not try to build an agent that handles everything. We started with one specific class of support tickets -- data integrity issues in our reporting system. Once that worked reliably, we expanded to API errors, then authentication issues, then performance problems.
This incremental approach is consistent with what we have seen across enterprise projects. The build vs buy decision should always start with a narrow, high-value use case before expanding scope.
Not every organisation needs a custom AI investigation agent. As a rough decision framework: build one if you have a complex technical product, a meaningful weekly ticket volume, and compliance or audit requirements that demand documented investigations; hold off if your tickets are low-volume, simple, or resolvable from a knowledge base.
The sweet spot is organisations with complex technical products, meaningful ticket volume, and the need for consistent, documented investigation processes.
If you are ready to build something similar, here is the implementation roadmap based on our experience.
Choose your AI backbone carefully. We use Claude AI for its strong reasoning capabilities and ability to maintain context across complex, multi-step investigations. The model needs to handle long context windows (investigating an issue might require reading hundreds of log lines) and produce structured, evidence-based outputs.
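Long context still has limits, so logs usually need trimming before they reach the model. One simple tactic (the 4-characters-per-token rule of thumb is a rough approximation, and the helper below is our illustration) is to keep the most recent lines, since investigations usually care about what just changed:

```python
# Keep the newest log lines that fit inside a rough token budget.
def tail_within_budget(log_text: str, max_tokens: int) -> str:
    budget_chars = max_tokens * 4          # ~4 chars/token, rough heuristic
    lines = log_text.splitlines()
    kept, used = [], 0
    for line in reversed(lines):           # walk newest-first
        if used + len(line) + 1 > budget_chars:
            break
        kept.append(line)
        used += len(line) + 1
    return "\n".join(reversed(kept))       # restore chronological order

snippet = tail_within_budget("old line\n" * 1000 + "ERROR: import failed", 50)
```

More sophisticated options (error-level filtering, windowing around the reported timestamp) follow the same pattern: reduce before you prompt.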
Connect to real data sources. The agent is only as good as the systems it can access. At minimum, it needs read access to your logging infrastructure, database (read-only queries), and version control system. Each additional data source makes the agent more capable.
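For the read-only guarantee, the real protection is a database role with only read grants, but a cheap application-level guard in front of the agent's queries is also worth having. A minimal sketch (the function is our illustration, and it deliberately errs toward rejecting):

```python
# Cheap pre-flight check: refuse anything that is not a single
# SELECT/WITH statement. Pair this with a read-only DB role -- this
# string check alone is not a security boundary.
def is_read_only(sql: str) -> bool:
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:                    # reject multi-statement input
        return False
    head = stripped.lstrip("( \n\t").upper()
    return head.startswith(("SELECT", "WITH"))
```

Note that in some databases a `WITH` clause can wrap data-modifying statements, which is exactly why the read-only role, not this check, carries the guarantee.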
Design the output format. A wall of text is not useful. We structured our investigation reports with clear sections: Summary, Evidence Found, Probable Root Cause, Recommended Resolution, and Confidence Level. This consistency makes it easy for engineers to review quickly.
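The five sections above map naturally onto a structure the agent can be asked to fill. A sketch (field names are our illustration, not a fixed schema):

```python
from dataclasses import dataclass, field

@dataclass
class InvestigationReport:
    """The five report sections: consistent shape makes reviews fast."""
    summary: str
    evidence_found: list = field(default_factory=list)
    probable_root_cause: str = ""
    recommended_resolution: str = ""
    confidence_level: str = "low"          # low / medium / high

    def render(self) -> str:
        lines = [f"Summary: {self.summary}", "Evidence Found:"]
        lines += [f"  - {e}" for e in self.evidence_found]
        lines += [f"Probable Root Cause: {self.probable_root_cause}",
                  f"Recommended Resolution: {self.recommended_resolution}",
                  f"Confidence Level: {self.confidence_level}"]
        return "\n".join(lines)

report = InvestigationReport(
    summary="Report empty for 3 days",
    evidence_found=["no rows inserted since Tuesday", "cron job exit code 1"],
    probable_root_cause="import job failing after dependency update",
    recommended_resolution="pin dependency and re-run job",
    confidence_level="high")
```

Because every report renders the same way, an engineer reviewing in Teams always knows where to look for the root cause and the confidence call.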
Plan for the learning loop. Store every investigation. Tag outcomes (correct, partially correct, needed adjustment). Use this data to improve the agent's investigation patterns over time.
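Outcome tagging is simple bookkeeping, and first-pass accuracy falls out as a ratio. A sketch with invented outcome data:

```python
from collections import Counter

# Hypothetical outcome tags for six closed investigations.
outcomes = ["correct", "correct", "partially_correct",
            "correct", "needed_adjustment", "correct"]

counts = Counter(outcomes)
first_pass_accuracy = counts["correct"] / len(outcomes)
```

Tracking this over time tells you whether expanding the agent to a new ticket class actually held up, or whether its investigation patterns need adjusting first.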
This AI investigation agent was not an isolated project. It was the first step in a broader strategy of deploying AI agents across our business operations. Follow the full 10-part series:
| Part | Topic | Status |
|---|---|---|
| 1 | IT Support Agent: Real Deployment Story (this post) | You are here |
| 2 | The 7 Business Functions AI Agents Are Transforming in 2026 | Published |
| 3 | The AI Bookkeeper: Xero Reconciliation Agent | Published |
| 4 | The AI HR Agent: Policy, Leave, and Onboarding | Published |
| 5 | The AI Email Agent: Brand Voice Replies | Published |
| 6 | Building a Client-Facing Knowledge GPT | Published |
| 7 | AI Phone Receptionist + AI Agent | Published |
| 8 | The BI Agent: Plain English Dashboards | Published |
| 9 | Building Your AI Agent Ecosystem | Published |
| 10 | AI Agent Governance: Data, Privacy, Human Override | Published |
The predictions for AI in Australian business in 2026 are not theoretical anymore. Organisations that deploy AI agents for genuine operational work -- not just chatbots and content generation -- are pulling ahead.
Based on what we learned building our internal investigation agent, we built SupportAgent -- a self-hosted AI investigation tool designed for software teams.
SupportAgent brings the same investigation approach to your environment: read-only connections to your logs, database, and version control; structured, evidence-backed investigation reports; a human approval step in front of every fix; and self-hosted deployment, so your data stays in your infrastructure.
If your team spends hours investigating support tickets and you want to see what an AI agent can do, explore SupportAgent or book a 30-minute walkthrough to see it in action.
Sources: Research synthesised from Master of Code AI Agent Statistics (2026), Gartner enterprise downtime estimates (2025), KPMG AI Pulse Q4 2025 report, Rootly incident response research (2025), and Salesmate AI agent adoption data (2026).