A self-hosted AI tool I built to help L1/L2 support teams investigate incidents faster—without sending data to external APIs.
Transparency note: This is my own product, not a client project. I built RootCauseAI because I saw how much time support teams waste switching between Jira, git logs, databases, and documentation when investigating issues. It's now available as a self-hosted solution for teams who need data sovereignty.
When a production incident hits, L1/L2 support engineers lose time jumping between systems: checking Jira for history, digging through git commits, querying databases, reading documentation, all by hand. By the time they find the root cause, hours have passed.
Cloud-based AI tools could help, but many organisations (especially in regulated industries) can't send their code, logs, and database schemas to external APIs. They need something that runs entirely on-premise.
RootCauseAI is a self-hosted investigation engine that:

- Connects to the systems support teams already use: Jira, git history, databases, and documentation
- Correlates evidence across those sources to surface likely root causes
- Runs entirely on your own infrastructure, so no code, logs, or schemas leave your network
The key design decision was making it work entirely on-premise. Most businesses I've worked with in mining and resources can't send production data to cloud APIs. RootCauseAI runs as a Docker container using Ollama for inference, with connectors to common enterprise tools.
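To make the deployment shape concrete, here is a minimal sketch of running Ollama as the local inference backend. These are illustrative commands, not RootCauseAI's actual setup, and the model tag is an assumption:

```shell
# Start Ollama in a container; all inference happens on this host
docker run -d --name ollama \
  -p 11434:11434 \
  -v ollama:/root/.ollama \
  ollama/ollama

# Pull a model into the local store (model tag is illustrative)
docker exec ollama ollama pull llama3.1:8b

# Sanity check: query Ollama's local REST API; no data leaves the server
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.1:8b", "prompt": "hello", "stream": false}'
```

Port 11434 is Ollama's default API port; because the API is plain HTTP on localhost, application containers can talk to it without any outbound network access.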
Supported languages and frameworks: PHP, Java, C#, ASP.NET, and Angular. That is the "boring" enterprise stack most real businesses actually run.
Building RootCauseAI taught me that on-premise AI is viable for small teams, not just enterprises with GPU clusters. With quantised models running on Ollama, you can get useful AI assistance on a standard server. That's changed how I advise clients about data sovereignty options.
If your organisation can't use cloud AI due to data sovereignty requirements, I can help you implement self-hosted solutions using the same patterns I developed for RootCauseAI.
Discuss on-premise AI options