
    Offline AI for Australian Business: Run Private AI When ChatGPT Is Blocked [2026 Guide]

Feb 03, 2026 · By Solve8 Team · 18 min read

Tags: Offline AI · Corporate Guide · Local LLM

    Your IT Department Blocked ChatGPT. Now What?

    If you're reading this, there's a good chance your company has joined the growing list of organisations that have blocked access to ChatGPT, Claude, and other AI websites. You're not alone - according to a 2025 BlackBerry survey, 70% of companies now restrict access to generative AI tools, primarily to protect confidential information.

    Here's what your IT department probably didn't tell you: you can run AI completely offline on your own laptop. No internet required. No data leaving your computer. No API calls to external servers. Just you, your machine, and a capable AI assistant that lives entirely on your hard drive.

    Many corporate clients set up local AI solutions when cloud-based tools aren't an option - whether for compliance reasons, data sensitivity, or simply because IT said "no." This guide shows you exactly how to do the same thing, even if you've never touched a command line before.

    The Reality Check

    34.8% of employee ChatGPT inputs now contain sensitive data - up from just 11% in 2023. Your IT department isn't being paranoid; they're being responsible. Local AI is the solution that gives you productivity gains without the risk.


    Why Companies Block AI Tools (And Why They're Right To)

    Before we dive into solutions, let's understand the problem. When you type a question into ChatGPT, here's what happens:

    What Happens When You Use Cloud AI

You Type (enter your question) → Data Sent (over the internet) → External Server (processed in USA/EU) → Stored/Logged (potentially retained) → Response (sent back to you)

    The Real Risks Companies Face

    1. Data Leakage Is Not Theoretical

    In 2025, security researchers discovered over 225,000 OpenAI and ChatGPT credentials for sale on dark web markets. These weren't hacked from OpenAI - they were harvested from employee devices using infostealer malware. Once attackers log in, they gain access to the complete chat history, exposing any sensitive business data previously shared.

Samsung learned this the hard way when employees uploaded proprietary semiconductor code to ChatGPT. Deutsche Bank blocked ChatGPT entirely while evaluating "how to best use these types of capabilities while ensuring the security of our and our clients' data."

    2. Compliance Requirements

    Depending on your industry, using cloud AI may violate:

| Regulation | What It Covers | AI Risk |
|---|---|---|
| GDPR | EU personal data | Data leaving jurisdiction |
| HIPAA | Healthcare records | PHI exposure |
| SOC 2 | Security controls | Uncontrolled data flows |
| Privacy Act 1988 (AU) | Personal information | Overseas disclosure |
| Legal Privilege | Attorney-client comms | Waiver risk |

    3. Intellectual Property Protection

    Every prompt you type could potentially be used to train future AI models. Even if the provider says they won't, their terms of service can change. With local AI, this risk is zero - your data never leaves your machine.


    What Is Local AI? (The Simple Explanation)

    Local AI means running artificial intelligence models directly on your computer, with no internet connection required. Think of it like this:

    How Local AI Works

You Type (enter your question) → Your CPU/GPU (processes locally) → Local Model (on your hard drive) → Response (never leaves your device)

    Key differences from cloud AI:

    • Privacy: Your data never leaves your computer. Ever.
    • No subscription: Download once, use forever. No monthly fees.
    • Works offline: Use it on a plane, in a bunker, wherever.
    • No rate limits: Process as much as your hardware allows.
    • Full control: Choose your model, customise behaviour, no terms of service changes.

    The trade-off? Local models are generally smaller and less capable than the massive models running on cloud servers with thousands of GPUs. But for everyday office tasks - email drafting, document summarisation, meeting notes, code assistance - they're more than capable.


    The Local AI Tools You Need to Know

    After testing dozens of options, here are the four tools that actually work for corporate users:

    Local AI Tool Comparison

| Tool | Ease of Use | Power | Best For |
|---|---|---|---|
| LM Studio | Easiest (GUI) | High | Beginners |
| Ollama | Easy (CLI) | Highest | Power users |
| Jan.ai | Very Easy | Medium | Most polished UI |
| GPT4All | Easy | Medium | Windows users |

    1. LM Studio - Best for Beginners

    What it is: A free desktop application that lets you download and run AI models with a ChatGPT-like interface. No coding required.

    Why I recommend it first: In my experience setting up local AI for non-technical users, LM Studio has the lowest friction. You download it, double-click to install, search for a model, click download, and start chatting. That's it.

    Key features:

    • Beautiful ChatGPT-like interface
    • Built-in model discovery (search and download from within the app)
    • Supports file uploads (PDFs, Word docs, text files)
    • Runs on Windows, macOS, and Linux
    • Free for both personal and work use
    • OpenAI-compatible API for advanced integration

    Current version: 0.3.36 (as of January 2026)

    Want a complete walkthrough? See our LM Studio Complete Beginner's Guide for step-by-step installation, interface tour, and troubleshooting tips.
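Because LM Studio exposes an OpenAI-compatible API, any OpenAI-style client can talk to your local model over localhost. Here's a minimal sketch using only the standard library, assuming LM Studio's local server is enabled on its default port 1234 and a model is already loaded (the model name is illustrative; LM Studio serves whichever model you've loaded):

```python
import json
import urllib.request

# Assumption: LM Studio's local server is running on its default port 1234
# with a model loaded. No data leaves localhost.
API_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_payload(prompt: str, model: str = "local-model") -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,  # illustrative name; LM Studio uses the loaded model
        "messages": [
            {"role": "system", "content": "You are a concise office assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.3,  # low temperature suits routine office tasks
    }

def ask(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires the LM Studio server to be running):
# print(ask("Summarise in one sentence: local AI keeps data on-device."))
```

The request and response never cross your network boundary — the "API" here is just a local loopback connection.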

    2. Ollama - Best for Power Users

    What it is: A command-line tool that makes running local AI models as simple as Docker made running containers. One command to download, one command to run.

    Why it matters: If you're comfortable with a terminal, Ollama is the most flexible and powerful option. It's also the foundation that many other tools build on.

    Key features:

    • Incredibly simple commands (ollama run llama3.2)
    • Huge model library (100+ models available)
    • Runs as a local API server for integration
    • Extremely efficient model management
    • Works on macOS, Windows, and Linux
    • Active development and community
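That local API server is what makes Ollama scriptable. A minimal sketch, assuming Ollama is running on its default port 11434 and the model has already been pulled:

```python
import json
import urllib.request

# Assumption: Ollama is running locally on its default port 11434
# (start it with `ollama serve` if it isn't already).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(prompt: str, model: str = "llama3.2") -> dict:
    """Request body for Ollama's /api/generate endpoint.

    stream=False asks for a single JSON response instead of chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama3.2") -> str:
    """Send the prompt to the local Ollama server and return the reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_generate_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example (requires Ollama running and the model pulled):
# print(generate("Draft a two-line out-of-office reply."))
```

This is the same interface that tools built on top of Ollama use under the hood.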

    3. Jan.ai - Most Polished Experience

    What it is: An open-source ChatGPT alternative that runs entirely offline. Jan focuses on privacy-first design with a beautiful, modern interface.

    Why consider it: Jan was the easiest to install in my testing and has the most polished user interface. It's 100% free and open source under AGPL license.

    Key features:

    • Plug-and-play installation
    • Pre-installed starter models
    • Can also connect to cloud APIs (OpenAI, Anthropic) if needed
    • Isolated, secure environment
    • Cross-platform (Mac, Windows, Linux)

    4. GPT4All - Best for Windows Users

    What it is: A desktop application from Nomic AI designed to run large language models on consumer hardware. It's specifically optimised for accessibility.

    Why it's notable: GPT4All shines for non-technical users who want local AI. The UI is basic but functional, and models are typically 3-8GB, making them easy to download and run.

    Key features:

    • Access to 1,000+ open-source models
    • Works on Mac M-series, AMD, and NVIDIA
    • Enterprise version available ($25/device/month)
    • No internet required after download
    • MIT license (very permissive)

    Best AI Models for Everyday Office Work

    Not all models are created equal. Here's what I recommend based on hundreds of client deployments:

    Which Model Should You Download?

What's your primary use case?

• General tasks (email, summaries, Q&A) → Llama 3.1 8B or Mistral 7B
• Coding and technical work → DeepSeek-R1-Distill-Qwen-7B or CodeLlama
• Limited hardware (<8GB RAM) → Phi-4-mini (3.8B) or Qwen 2.5 3B
• Complex reasoning/analysis → Llama 3.1 70B (if hardware allows)

    Tier 1: Best All-Round Models

Llama 3.1 8B (Meta)

    • Size: ~5GB download
    • RAM needed: 8GB minimum, 16GB recommended
    • Best for: General office tasks, email drafting, summarisation
    • Why: The sweet spot between capability and hardware requirements. Runs smoothly on most modern laptops.

    Mistral 7B (Mistral AI)

    • Size: ~4GB download
    • RAM needed: 8GB minimum
    • Best for: Fast responses, data summarisation, email drafting
    • Why: The "workhorse" of local AI. Fast, accurate, and doesn't demand much hardware. Benchmarks show 10-15 tokens/second with 8-12GB VRAM.

    Tier 2: Small but Mighty

    Phi-4-mini (Microsoft)

    • Size: ~2.5GB download
    • Parameters: 3.8 billion
    • RAM needed: 4-8GB
    • Best for: Lightweight tasks on older hardware
    • Why: Microsoft's small language model punches above its weight. Comparable to 7-9B models in many tasks, but runs on almost anything.

    Qwen 2.5 3B (Alibaba)

    • Size: ~2GB download
    • RAM needed: 4-8GB
    • Best for: Multilingual tasks, quick responses
    • Why: Excellent for international teams. Supports 20+ languages with 128K token context.

    Tier 3: Maximum Capability

    DeepSeek-R1-Distill-Qwen-32B

    • Size: ~20GB download
    • RAM needed: 32GB+
    • Best for: Complex reasoning, analysis, coding
    • Why: Outperforms OpenAI's o1-mini on many benchmarks. State-of-the-art for open models.

    Llama 3.1 70B (Meta)

    • Size: ~40GB download
    • RAM needed: 64GB+ or dedicated GPU
    • Best for: When you need cloud-quality AI locally
    • Why: Approaches GPT-4 level performance. Requires serious hardware.

    Model Size Quick Reference

| Model | Size | RAM Needed | Best For |
|---|---|---|---|
| Phi-4-mini (3.8B) | ~2.5GB | 4-8GB | Lightweight tasks |
| Mistral 7B | ~4GB | 8GB | General use, fast |
| Llama 3.1 8B | ~5GB | 8-16GB | All-round best |
| DeepSeek-R1 7B | ~5GB | 8-16GB | Reasoning, coding |
| Llama 3.1 70B | ~40GB | 64GB+ | Maximum capability |

    Hardware Requirements: Can Your Laptop Run This?

    Here's the honest truth: local AI requires decent hardware. But "decent" doesn't mean "gaming PC with three GPUs."

    Minimum Requirements

    To run smaller models (7B parameters or less):

    • RAM: 8GB minimum, 16GB recommended
    • Storage: 10GB free space for small models, 50GB+ for flexibility
    • CPU: Any modern processor from the last 5 years
    • GPU: Not required, but speeds things up significantly
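A rough rule of thumb behind these numbers: a 4-bit quantised model needs around 0.6 bytes per parameter for its weights, plus 1-2GB of overhead for the context cache and runtime. This is a ballpark sketch (the 0.6 bytes/parameter and 1.5GB overhead figures are approximations, not vendor specs):

```python
def estimated_ram_gb(params_billion: float,
                     bytes_per_param: float = 0.6,
                     overhead_gb: float = 1.5) -> float:
    """Rough RAM estimate for a 4-bit quantised model.

    ~0.6 bytes/parameter approximates Q4 weights plus per-layer metadata;
    overhead_gb covers the KV cache and runtime. Treat the result as a
    ballpark, not a guarantee.
    """
    return round(params_billion * bytes_per_param + overhead_gb, 1)

# Ballpark figures for the model sizes discussed in this guide:
for name, size in [("Phi-4-mini", 3.8), ("Mistral 7B", 7.0),
                   ("Llama 8B", 8.0), ("Llama 70B", 70.0)]:
    print(f"{name}: ~{estimated_ram_gb(size)}GB RAM")
```

The output lines up with the tables above: a 7B model comes in under 8GB, while 70B models need workstation-class memory.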

    What Actually Works

    Hardware Requirements by Model Size

| Model Class | Example Models | Hardware Needed | Runs On |
|---|---|---|---|
| 3-7B parameters | Phi-4, Mistral 7B, Llama 8B | 8-16GB RAM, no GPU | Most laptops |
| 13-32B parameters | Llama 13B, DeepSeek 32B | 32GB RAM, RTX 3070+ | High-end laptops |
| 70B parameters | Llama 70B, DeepSeek 70B | 64GB+ RAM, RTX 4090 | Workstations |

    Windows Laptops

    Good news: Most business laptops from the past 3-4 years can run 7B models.

    Requirements:

    • Windows 10/11
    • 16GB RAM (8GB will work but will be slow)
    • 50GB free storage
    • NVIDIA GPU optional but helpful

My recommendation: A Dell XPS, Lenovo ThinkPad, or HP EliteBook from 2022+ with 16GB RAM will handle Mistral 7B and Llama 3.1 8B comfortably.

    MacBooks

    Good news: Apple Silicon Macs are actually excellent for local AI due to unified memory architecture.

    Requirements:

    • M1, M2, M3, or M4 chip (any variant)
    • 16GB unified memory minimum
    • 8GB works for small models

    Performance note: An M4 Pro with 64GB RAM can run Qwen 2.5 32B at 11-12 tokens/second - that's production-ready speed.

    Linux Workstations

    If you have a Linux machine, you're probably technical enough to figure this out. But briefly:

    • Same RAM requirements as Windows
    • NVIDIA GPUs with CUDA work best
    • AMD GPUs work but with less optimisation

    The Honest Reality

If your laptop is more than 4-5 years old with 4-8GB RAM, you'll struggle. A 2019 machine with 8GB RAM and no dedicated GPU can run almost no useful models in practice.

However, a modern 8-billion-parameter model runs comfortably on most reasonably recent notebooks.


    Step-by-Step: Installing Ollama (10 Minutes)

    Let's get you running. I'll walk you through Ollama because it's the most versatile option, and once you understand it, other tools are easier.

    Ollama Installation Timeline

1. Download & Install (~2 mins) - get Ollama on your system
2. Verify Installation (~1 min) - check it's working
3. Download a Model (~5 mins) - get your first AI model
4. Start Using (~2 mins) - ask your first question

    macOS Installation

    1. Open Terminal (press Cmd + Space, type "Terminal", press Enter)

    2. Install with Homebrew (if you have it):

    brew install ollama
    

    Or download directly from ollama.com and drag to Applications.

3. Verify installation:
    ollama --version
    

    You should see something like ollama version 0.5.x

    Windows Installation

    1. Download the installer from ollama.com

    2. Run the installer - it's a standard "Next, Next, Finish" process

    3. Open Command Prompt (press Windows key, type "cmd", press Enter)

    4. Verify installation:

    ollama --version
    

    Linux Installation

    One command does everything:

    curl -fsSL https://ollama.ai/install.sh | sh
    

    Then verify:

    ollama --version
    

    Download Your First Model

    Now the fun part. Let's download Llama 3.2 (a solid all-purpose model):

    ollama pull llama3.2
    

This downloads the default 3B parameter version (~2GB); it may take a few minutes depending on your internet speed. If your machine has 16GB RAM, ollama pull llama3.1 fetches the stronger Llama 3.1 8B model (~5GB) instead.

    Start Chatting

    ollama run llama3.2
    

    You'll see a prompt like >>>. Type your question:

    >>> Summarise this email in 3 bullet points: [paste your email text here]
    

    To exit, type /bye or press Ctrl+D.

    Essential Ollama Commands

| Command | What It Does |
|---|---|
| ollama list | Show downloaded models |
| ollama run llama3.2 | Start chatting with a model |
| ollama pull mistral | Download a new model |
| ollama rm llama3.2 | Delete a model |
| ollama serve | Start the API server |
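These commands compose into scripts. Passing a prompt as an argument to ollama run answers once and exits, which makes batch jobs easy. A sketch that summarises every .txt file in a folder (the folder name is illustrative; assumes Ollama is installed and the model pulled):

```python
import pathlib
import subprocess

def build_command(prompt: str, model: str = "llama3.2") -> list:
    """`ollama run MODEL "PROMPT"` does one-shot generation and exits."""
    return ["ollama", "run", model, prompt]

def summarise_folder(folder: str, model: str = "llama3.2") -> None:
    """Summarise every .txt file in `folder`, one model call per file."""
    for path in sorted(pathlib.Path(folder).glob("*.txt")):
        prompt = f"Summarise in 3 bullet points:\n\n{path.read_text()}"
        result = subprocess.run(build_command(prompt, model),
                                capture_output=True, text=True, check=True)
        print(f"--- {path.name} ---\n{result.stdout}\n")

# Example (requires Ollama installed and the model pulled):
# summarise_folder("meeting_notes")
```

Because everything runs locally, you can point this at a folder of confidential documents without any of them leaving your machine.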

    Step-by-Step: Installing LM Studio (5 Minutes)

    If command lines aren't your thing, LM Studio is even easier.

    Installation (All Platforms)

    1. Go to lmstudio.ai

    2. Download for your operating system (Windows, macOS, or Linux)

    3. Install:

      • Mac: Drag to Applications folder
      • Windows: Run the installer, click Next until done
    4. Launch LM Studio

    Download a Model

    1. Click the magnifying glass (Discover tab) in the left sidebar

    2. Search for "llama 3.2" or "mistral"

    3. Click Download on the model you want

    4. Wait for it to download (progress shows at the bottom)

    Start Chatting

    1. Click the chat bubble icon in the left sidebar

    2. Select your model from the dropdown at the top

    3. Type your question in the chat box

    4. Press Enter - you'll see the AI response stream in

    That's it. You now have a private AI assistant on your laptop.


    Real Use Cases for Office Workers

    Here's what local AI is actually good for, based on my experience deploying these tools across accounting firms, legal practices, and corporate offices:

    1. Email Drafting

    The prompt:

    Write a professional email declining a meeting request. I'm too busy this week
    but open to next week. Keep it brief and polite.
    

    Works well because: Email is formulaic, and even 7B models handle it excellently.

    2. Document Summarisation

    The prompt:

    Summarise this document in 5 key points:
    
    [Paste your document text here]
    

    Pro tip: For long documents, break them into chunks. Most local models have 8-32K token context limits (roughly 6,000-24,000 words).
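The chunking tip above is easy to automate. A minimal sketch that splits on paragraph boundaries, assuming roughly 4 characters per token (a common rough ratio for English prose):

```python
def chunk_text(text: str, max_tokens: int = 6000, chars_per_token: int = 4) -> list:
    """Split text into chunks that fit a model's context window.

    Splits on paragraph boundaries; ~4 characters/token is a rough
    ratio for English prose, so leave headroom below the model's limit.
    """
    limit = max_tokens * chars_per_token
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > limit:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

# Workflow: summarise each chunk separately, then ask the model
# to summarise the combined chunk summaries.
```

A single paragraph longer than the limit still lands in its own chunk, so very dense documents may need sentence-level splitting instead.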

    3. Meeting Notes Cleanup

    The prompt:

    Convert these rough meeting notes into a structured format with:
    - Attendees
    - Key decisions
    - Action items with owners
    - Next steps
    
    Notes: [Paste your rough notes]
    

    Why it works: Formatting and restructuring is a strength of local models.

    4. Code Assistance

    The prompt:

    Explain what this Excel formula does and suggest improvements:
    
    =IF(AND(A1>100,B1<50),VLOOKUP(C1,Data!A:B,2,FALSE),"N/A")
    

    Best models for code: DeepSeek-R1-Distill or CodeLlama variants.

    5. Data Analysis Help

    The prompt:

    I have a CSV with columns: Date, Product, Sales, Region.
    Write a Python script to:
    1. Calculate monthly sales by region
    2. Find the top 3 products
    3. Create a summary table
    
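For reference, here's roughly what a correct answer to that prompt looks like using only the standard library. The column names come from the prompt; the inline data is illustrative:

```python
import csv
import io
from collections import defaultdict

# Illustrative data matching the prompt's columns: Date, Product, Sales, Region
CSV_DATA = """Date,Product,Sales,Region
2026-01-05,Widget,1200,QLD
2026-01-12,Gadget,800,NSW
2026-02-03,Widget,1500,QLD
2026-02-20,Gizmo,400,VIC
"""

def monthly_sales_by_region(rows):
    """Sum Sales per (YYYY-MM, Region) pair."""
    totals = defaultdict(float)
    for row in rows:
        month = row["Date"][:7]  # "YYYY-MM-DD" -> "YYYY-MM"
        totals[(month, row["Region"])] += float(row["Sales"])
    return dict(totals)

def top_products(rows, n=3):
    """Top n products by total sales, highest first."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["Product"]] += float(row["Sales"])
    return sorted(totals, key=totals.get, reverse=True)[:n]

rows = list(csv.DictReader(io.StringIO(CSV_DATA)))
print(monthly_sales_by_region(rows))
print(top_products(rows))   # Widget leads with 2700 in total sales
```

Having a reference answer like this makes it easy to sanity-check what the local model produces.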

    6. Translation

    The prompt:

    Translate this email to German, maintaining professional tone:
    
    [Your email text]
    

    Best models: Qwen 2.5 (supports 20+ languages) or Mistral.

    7. Report Writing

    The prompt:

    Draft an executive summary for a quarterly report based on these points:
    - Revenue up 12% YoY
    - New client acquisitions: 47
    - Churn rate decreased from 5% to 3.2%
    - Major project delivered under budget
    

    Honest Comparison: Local AI vs Cloud AI

    I'm not going to pretend local AI is as good as GPT-4 or Claude. It isn't. Here's the honest comparison:

    Local AI vs Cloud AI: The Real Comparison

| Metric | Local AI | Cloud AI (GPT-4/Claude) | Winner |
|---|---|---|---|
| Privacy | 100% private | Data sent to servers | Local |
| Cost | Free after hardware | $20-100+/month | Local |
| Speed (7B model) | 5-15 tokens/sec | 50-100+ tokens/sec | Cloud |
| Capability | Good for routine tasks | Better reasoning/creativity | Cloud |
| Availability | Always available | Subject to outages | Local |
| Context length | 8-32K tokens typical | 128-200K tokens | Cloud |

    What Local AI Does Well

    • Routine tasks: Email drafting, formatting, simple summaries
    • Privacy-sensitive work: Anything you can't risk exposing
    • High-volume processing: No per-request costs
    • Offline scenarios: Planes, secure facilities, poor internet

    What Cloud AI Does Better

    • Complex reasoning: Multi-step analysis, nuanced judgment
    • Creative writing: More natural, less repetitive
    • Very long documents: 100+ page context
    • Latest knowledge: Training data more current
    • Speed: Much faster responses

    My Honest Recommendation

    Use local AI for:

    • First drafts of emails and documents
    • Data cleanup and formatting
    • Meeting notes and summaries
    • Code assistance and debugging
    • Any task involving sensitive data

    Use cloud AI (when you can) for:

    • Complex analysis requiring deep reasoning
    • Creative content that needs to be exceptional
    • Very long document processing
    • Tasks requiring the latest information

    IT Department Considerations

    If you want to use local AI at work, here's how to approach it responsibly:

    The Right Way to Talk to IT

    Don't say: "ChatGPT is blocked and I need it unblocked."

    Do say: "I'd like to explore local AI tools that run entirely offline with no data leaving my device. Can we discuss whether tools like Ollama or LM Studio would meet our security requirements?"

    Key Points for IT

    1. No data exfiltration: Local AI runs entirely on-device with no network calls
    2. No API keys: Nothing to secure or rotate
    3. No third-party dependencies: Works in air-gapped environments
    4. Open source: Code can be audited (Ollama, Jan, GPT4All are all open source)
    5. No additional cost: Just uses existing hardware

    Policy Considerations

    Your organisation may still need:

    • Approval process for installing software (most enterprises have this)
    • Acceptable use policy for AI tools (even local ones)
    • Guidelines on what data can be processed (some data may be restricted regardless)
    • Model vetting (which specific models are approved)

    What We Recommend to Clients

    When we help organisations implement local AI, we typically suggest:

    1. Start with a pilot: 5-10 users, specific use cases, 30-day trial
    2. Document guidelines: What's allowed, what's not, how to use responsibly
    3. Choose approved models: Stick to well-known, audited models
    4. Monitor and adjust: Gather feedback, expand if successful

    The Bottom Line: Your Productivity Shouldn't Wait for IT

    Local AI Value Proposition

| Metric | Local AI |
|---|---|
| Time saved (routine tasks) | 5-10 hours/week |
| Data leaving your device | Zero |
| Monthly subscription cost | $0 |
| Setup time | 10-30 minutes |
| Privacy risk | None |

    Your company blocked ChatGPT for good reasons - data security matters. But that doesn't mean you should be left behind while AI transforms how work gets done.

    Local AI gives you:

    • AI assistance without privacy concerns
    • Free, unlimited usage
    • Works anywhere, even offline
    • Complete control over your data

    Yes, it's not as powerful as GPT-4. Yes, it requires a decent laptop. Yes, it takes 10-30 minutes to set up.

    But once it's running, you have a private AI assistant that never shares your data, never costs extra, and never goes down because of server issues.

My recommendation: Start with LM Studio if you want the easiest experience, or Ollama if you're comfortable with command lines. Download Llama 3.1 8B or Mistral 7B. Try it for a week on non-sensitive tasks first. You'll be surprised how capable these local models have become.


    Getting Started This Week

    Day 1: Install LM Studio or Ollama (10 minutes)

Day 2: Download Llama 3.1 8B and test with basic prompts (15 minutes)

    Day 3: Try summarising a real document or drafting an email

    Day 4: Experiment with different models for different tasks

    Day 5: If it's working, talk to IT about formalising your use


    Need Help Deploying Local AI Across Your Organisation?

    Setting up local AI on one laptop is straightforward. Rolling it out across a team of 20, 50, or 200+ employees with proper governance, IT alignment, and compliance documentation is a different challenge entirely.

    Solve8 helps Australian businesses implement private AI infrastructure that meets enterprise security requirements while keeping data within Australian borders.

    What we offer:

    • Free AI Assessment — Understand your privacy requirements and best-fit solutions
    • Local AI Strategy — Model selection, hardware specs, and deployment planning
    • Implementation Support — We configure, deploy, and train your team
    • Compliance Documentation — Privacy Act alignment and IT policy templates

    DIY vs Solve8 Implementation

| Metric | DIY Approach | With Solve8 | Improvement |
|---|---|---|---|
| Time to org-wide deployment | 2-4 months | 3-4 weeks | 4x faster |
| IT policy alignment | Research yourself | Templates provided | Hours saved |
| Model selection & testing | Trial and error | Expert guidance | Right fit first time |
| Staff training | Self-service | Included | Faster adoption |

    Book a free 30-minute consultation →

    No sales pitch. Just honest advice on whether local AI makes sense for your organisation.




    Solve8 is an Australian AI consultancy helping businesses navigate the complex landscape of AI implementation. Based in Brisbane, serving clients across Australia. ABN: 84 615 983 732