PrivateBox
security.

A more controlled way to deploy AI for confidential work.

Most AI vendors secure their platforms. PrivateBox is built for organisations that want to secure the deployment boundary itself — with ring-fenced infrastructure, governed access, private inference, and tighter control over how sensitive AI workloads operate.

Ring-fenced deployment · Controlled access · Encryption in transit · Private inference · Governance by design

Public AI: General productivity
Hosted enterprise AI: Vendor-hosted security
PrivateBox: Controlled boundary

Security starts.
with the boundary.

AI security is not only about encryption, policies, or vendor promises. It is also about where the system runs, who controls the environment, how access is governed, and whether confidential work remains inside a boundary your organisation understands and controls.

PrivateBox is built on that principle — security as architecture, not an add-on.

Security is not something we bolt on. It is the reason PrivateBox exists. Every architectural choice — where inference runs, how access is governed, how knowledge is indexed, how deployment is shaped — starts from the same question: does the boundary stay under the organisation's control?

Most enterprise AI tools offer serious cloud security controls. PrivateBox goes further by changing the deployment boundary itself.

Boundary
Control
Governance
Isolation
Flexibility
Accountability

Why AI security.
is different.

Traditional software security focuses on user access, system hardening, and application controls. AI adds new trust questions. Prompts may contain sensitive operational context. Uploaded files may contain confidential records. Retrieval systems can surface internal knowledge at scale.

Why AI adoption needs more than "enterprise-grade SaaS" — it needs a clear model of privacy, access, deployment, and data boundaries.

Prompts carry context

Every prompt can contain operational detail — customer names, financials, internal strategy. Sending that through a public AI surface changes who can theoretically see it.

Files can be confidential records

Uploaded documents often contain privileged, regulated, or competitively sensitive material. The deployment path for those files matters as much as the model that reads them.

Retrieval surfaces knowledge at scale

Enterprise search and RAG systems can expose internal knowledge far faster than a human ever could. That makes per-role visibility and source control critical.
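As a hedged sketch of how per-role visibility can be enforced in a retrieval layer (every name below is hypothetical, not PrivateBox's actual API), candidate documents can be filtered against the caller's role scopes before anything reaches the model:

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    scopes: set   # scopes allowed to retrieve this document
    text: str

@dataclass
class User:
    name: str
    scopes: set   # scopes granted by the user's role

def retrieve(query: str, user: User, index: list) -> list:
    """Return only documents the caller's role is scoped to see."""
    # Scope check runs first, so unauthorised documents never
    # enter the retrieval pipeline at all.
    visible = [d for d in index if d.scopes & user.scopes]
    # Toy ranking: a naive keyword match stands in for vector search.
    return [d for d in visible if query.lower() in d.text.lower()]

index = [
    Document("hr-001", {"hr"}, "Salary band review for 2024"),
    Document("eng-001", {"engineering"}, "Incident review for the 2024 outage"),
]

analyst = User("analyst", {"engineering"})
print([d.doc_id for d in retrieve("review", analyst, index)])  # ['eng-001']
```

The design point is that the scope filter sits before ranking: restricting visibility per role is cheap at query time, and unauthorised material is excluded structurally rather than by post-hoc redaction.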

The model sits inside someone's boundary

Public and hosted AI services live inside the vendor's infrastructure. That is not inherently unsafe — but it is a different trust boundary from a ring-fenced, client-aligned environment.

Automation amplifies everything

Agents, connectors, and structured actions scale both good controls and weak ones. Governance has to live inside the workflow, not as a policy document on the side.

Architecture.
shown clearly.

A layered security model built around infrastructure control, governed access, private inference, and a more explicit data boundary. Each layer plays a distinct role in protecting confidential work.

Five boundaries. One controlled environment.

Layer 01 Infrastructure Boundary
  • Client-controlled hardware
  • Approved dedicated hardware
  • Internal networking
  • Deployment isolation
  • Controlled ingress path

Layer 02 Access Boundary
  • Role-based access
  • Governed permissions
  • Team separation
  • Administrative control
  • Scoped visibility

Layer 03 Data Boundary
  • Confidential prompts
  • Internal files
  • Private knowledge retrieval
  • Controlled connectors
  • Client-owned context

Layer 04 AI Operating Boundary
  • Private inference
  • Agent controls
  • Governed actions
  • Internal workflows
  • Controlled orchestration

Layer 05 Oversight Boundary
  • Audit-oriented design
  • Usage visibility direction
  • Policy-based growth
  • Future-proofing
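The five layers can also be pictured as a declarative descriptor that a security review walks outside-in. The sketch below is purely illustrative: the layer names come from this page, but every field and value is a hypothetical assumption, not PrivateBox configuration:

```python
# Hypothetical sketch of a layered boundary descriptor.
# Each layer declares the controls it is responsible for,
# so a review can walk the boundaries outside-in.
BOUNDARIES = {
    "infrastructure": ["dedicated-hardware", "internal-networking", "controlled-ingress"],
    "access":         ["role-based-access", "team-separation", "scoped-visibility"],
    "data":           ["private-retrieval", "controlled-connectors", "client-owned-context"],
    "ai-operating":   ["private-inference", "agent-controls", "governed-actions"],
    "oversight":      ["audit-design", "usage-visibility", "policy-based-growth"],
}

def review_order(boundaries: dict) -> list:
    """Walk the layers outside-in, in declaration order."""
    return list(boundaries)

print(review_order(BOUNDARIES))
```

Declaring controls per layer makes the question from earlier in this page ("does the boundary stay under the organisation's control?") something that can be checked layer by layer rather than answered once for the whole stack.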

Core controls.
built in.

A set of capabilities designed around private deployment, governed access, and confidential work.

Ten controls. One controlled environment.

Different models.
different boundaries.

Hosted enterprise AI can offer strong cloud security controls. PrivateBox is built for organisations that want the environment itself to sit inside a more controlled deployment boundary.

Two legitimate models — different places to draw the line.

Typical hosted AI model

Vendor hosted service boundary

  • Vendor-hosted service with strong cloud controls
  • Encryption and enterprise privacy commitments
  • Data usually remains inside the provider's service boundary
  • Security depends partly on vendor architecture and contracts
  • Organisation controls usage, but not the full deployment boundary

PrivateBox deployment model

Ring-fenced operating environment

  • Ring-fenced environment with tighter infrastructure control
  • Client-aligned or client-controlled deployment boundary
  • Governed internal knowledge and workflows
  • More explicit control over how AI is introduced into confidential work
  • Architecture designed around boundary control — not only cloud controls

How PrivateBox.
compares.

A practical comparison across deployment boundary, data control, knowledge handling, access governance, and vendor dependency.

Not "good vs bad" — a view of best fit across different models.

Deployment boundary
  • Public AI tools: Vendor / public environment
  • Hosted enterprise AI: Vendor cloud boundary
  • PrivateBox: Controlled or client-aligned boundary

Data control
  • Public AI tools: Lowest by default; depends on plan and policy
  • Hosted enterprise AI: Stronger contractual and enterprise controls
  • PrivateBox: Strongest fit when the environment itself is ring-fenced

Internal knowledge handling
  • Public AI tools: Often limited or risky for confidential material
  • Hosted enterprise AI: Improved controls, still provider-hosted
  • PrivateBox: Designed around internal context staying in a controlled deployment

Access governance
  • Public AI tools: Basic to moderate depending on plan
  • Hosted enterprise AI: Enterprise-grade controls often available
  • PrivateBox: Governance designed into the operating layer

Vendor dependency
  • Public AI tools: High
  • Hosted enterprise AI: Moderate to high
  • PrivateBox: Lower — deployment flexibility and open-model direction reduce lock-in

Compliance coverage
  • Public AI tools: Rarely addressed; compliance sits entirely with the user
  • Hosted enterprise AI: Strong certifications (SOC 2, ISO) — but bound to the vendor's service boundary, not your full stack
  • PrivateBox: No provider covers every regime on your behalf; PrivateBox aims to give you the architecture to get closer to full compliance by owning the ecosystem end-to-end

Best fit
  • Public AI tools: General productivity
  • Hosted enterprise AI: Enterprise productivity with provider-managed security
  • PrivateBox: Confidential AI workloads with stronger boundary-control needs

Security levels.
in the market.

The AI market spans several security models. Some focus on usability and productivity. Some add enterprise-grade privacy and compliance. Very few are built primarily around a tighter deployment boundary.

A calm, matter-of-fact view of where different models sit.

Level 01 Consumer and public AI tools

Powerful, widely available, and generally the weakest fit for confidential work unless usage is tightly restricted and plan-specific controls are clearly understood.

Level 02 Enterprise hosted AI platforms

Major vendors publicly describe enterprise privacy commitments, encryption, identity controls, and compliance programs for their business offerings.

These platforms offer serious security — but they remain provider-hosted services running inside the vendor's cloud boundary.

Level 03 Private boundary controlled AI

This is where PrivateBox sits. The point is not that hosted enterprise AI lacks security. The point is that PrivateBox is built for organisations that want the AI environment itself to sit inside a more controlled deployment boundary.

OpenAI Business data not used for training by default; SOC 2 Type II and ISO certifications.
Anthropic AES-256 GCM at rest, TLS 1.2+ in transit; SOC 2 and ISO 27001 listed.
Microsoft Copilot Enterprise data protection, tenant isolation within the Microsoft 365 service boundary.
Harvey Default no-training, encryption at rest and in transit; annual SOC 2 Type II and ISO 27001.

Vendor descriptions above reflect their public positioning. Each is a strong example of hosted enterprise AI security — and each operates primarily within the provider's service boundary. PrivateBox is for organisations that want a tighter, more controlled environment for confidential work.

South Africa: POPIA and third party processing

POPIA and the cost of too many third parties

For South African organisations, AI risk is often less about the model itself and more about the processing chain around it.

POPIA keeps accountability with the responsible party — your organisation — even when personal information passes through operators, cloud services, external vendors, or cross-border systems. The more third parties involved in the AI stack, the more contracts, oversight, governance, and legal questions your business inherits.

That is where deployment matters.

PrivateBox is designed to reduce that complexity. By keeping inference, retrieval, knowledge, and workflow execution inside a more controlled environment, it gives South African businesses a cleaner path to privacy-oriented AI adoption with fewer external dependencies and a clearer accountability boundary.

Built for a cleaner POPIA posture

  • Keep accountability closer to home

    Your organisation remains responsible. PrivateBox helps align the AI layer more closely to your own environment instead of defaulting to an external operator model.

  • Reduce third party exposure

    Fewer external services means fewer processing relationships to assess, govern, and explain internally.

  • Avoid unnecessary cross border complexity

    Where policy requires it, PrivateBox supports keeping sensitive AI activity closer to the jurisdiction and infrastructure boundary your organisation is comfortable with.

  • Control the deployment boundary

    PrivateBox is built around a ring-fenced model that gives your business more direct control over where confidential AI work happens.

  • Support compliance by design

    PrivateBox does not replace legal compliance work. It supports it by giving your organisation a more controlled technical foundation for handling sensitive information.

Governance.
in the product.

PrivateBox is not designed as an unrestricted AI playground. It is a governed operating layer where access, workflows, connectors, and internal knowledge can be introduced in a structured, accountable way.

Product-led governance — not a policy document bolted on.

Role-based access

Define who can reach which tools, knowledge, and actions — scoped by role, not by trust.

Controlled visibility

Per-role visibility across content, data, and retrieval — same platform, different views.

Scoped knowledge access

Internal knowledge is indexed and retrieved through governed scopes — not an open pool.

Structured workflows

Typed actions and review paths encode how teams work — not just freeform prompting.

Connector governance

Internal system connections are scoped, intentional, and reviewable — never open-ended.

Audit-oriented direction

Usage visibility and operational accountability built into the platform's direction, not bolted on.

Security maturity.
stated clearly.

PrivateBox is security-led by architecture — but we do not describe long-term controls as fully complete while they are still being matured. What we describe as available is available. What is still being expanded is called out honestly.

Current controls and roadmap items — separated, not merged.

Available now

What's already in place

  • Ring-fenced deployment approach
  • Controlled infrastructure boundary
  • Encryption in transit
  • Role-based access
  • Private inference model
  • Governance-oriented design
  • Controlled connectors
  • Support for confidential knowledge workflows

Coming next

What's being matured

  • Deeper encryption maturity across the stack
  • Broader encryption-at-rest coverage where still being expanded
  • End-to-end encryption where still on the roadmap
  • Deeper audit and control surfaces
  • Further compliance maturity and formalisation

Built for compliance.
by design.

PrivateBox is designed for organisations that need a more controlled AI environment in contexts where confidentiality, privacy, governance, and internal boundaries matter.

Compliance-ready — with formal attestations described honestly as they are earned.

Privacy-oriented, governance-aware, and designed to support regulated environments. Deeper certifications and formal attestations are scoped to the compliance roadmap rather than overclaimed today.

View Product Overview

See PrivateBox
in action.

Explore how PrivateBox handles confidential AI workflows inside a more controlled deployment model built around privacy, governance, and internal boundary control.