
From Distributed Compute to Zero-Trust Enterprise Cloud: Unpacking AEP-83


On centralized clouds like AWS, you rely on blind trust - hoping the provider doesn't look at your data. On a decentralized cloud like Akash, you deploy to hardware owned by independent datacenter providers. While this is a major win for cost and censorship resistance, it creates a massive roadblock for enterprises:

How do you securely deploy a proprietary AI model, healthcare data, or sensitive credentials onto a machine owned by a stranger?

AEP-83 solves this by bringing Confidential Computing to the Akash Network. By combining Trusted Execution Environments (TEEs) with Kata Containers, Akash completely eliminates the need to trust the provider.

Here is a technical breakdown of what it is, how it works, and why it changes the trajectory of the network.

1. The Core Tech: Kata Containers

To understand the solution, we must look at the fundamental flaw in standard container architecture when deployed in trustless environments.

The Problem with Traditional Containers (Docker):

Standard containers share the underlying host operating system’s kernel. They use Linux namespaces and cgroups to keep things separated. However, if a malicious provider has "root" access to the host machine, or if the shared kernel is compromised, your container's data is fully exposed.

The Solution with Kata Containers:

Kata combines the speed and OCI-compatibility of standard containers with the hardware-enforced isolation of Virtual Machines (VMs). Instead of sharing the host's kernel, Kata spins up a lightweight micro-VM using hardware virtualization. Your workload gets its own dedicated, isolated guest kernel.

To the developer, it deploys and feels exactly like a lightning-fast Docker container, requiring zero workflow changes.
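For readers familiar with Kubernetes, the switch to Kata is literally one line in the pod spec via a RuntimeClass. The names below (`kata` as both the RuntimeClass name and handler) are the common convention, but the exact handler string depends on how the provider installed Kata - treat this as an illustrative sketch, not an Akash-specific configuration:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata        # illustrative name; must match the node's configured handler
handler: kata
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  runtimeClassName: kata   # the ONLY change vs. a normal pod
  containers:
    - name: app
      image: nginx
```

Everything else - image, ports, volumes - is untouched, which is what "zero workflow changes" means in practice.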

2. Hardware-Level Cryptography (TEEs)

Kata Containers provide the isolation, but the real cryptographic magic relies on the hardware they run on: Trusted Execution Environments (TEEs) like Intel TDX or AMD SEV-SNP.

When your Kata Container spins up on a TEE-enabled provider, your data is encrypted while it is in use (in memory). The protection applies at the VM boundary. This means even if the provider physically unplugs the RAM sticks to read the data, or uses malicious host-level admin tools to scrape your running code, all they will see is cryptographic gibberish.
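A workload can do a quick sanity check from inside the guest for the standard Linux TEE guest-driver device nodes. This is only an illustrative probe (the authoritative proof is the attestation flow described below, not the presence of a device file):

```python
import os

# Linux guest-driver device nodes exposed inside a confidential VM:
# /dev/tdx_guest for Intel TDX, /dev/sev-guest for AMD SEV-SNP.
TEE_GUEST_DEVICES = ("/dev/tdx_guest", "/dev/sev-guest")

def tee_guest_device():
    """Return the TEE guest device path if one exists, else None."""
    for dev in TEE_GUEST_DEVICES:
        if os.path.exists(dev):
            return dev
    return None

if __name__ == "__main__":
    dev = tee_guest_device()
    print(f"TEE guest device: {dev}" if dev else "No TEE guest device found")
```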

3. Securing the GPU for AI Workloads

AEP-83 ensures GPU Confidential Computing is a first-class citizen, natively integrating NVIDIA's "CC-On" (Confidential Compute) mode. The AEP dictates that the CPU TEE acts as the "trust anchor" for the GPU, securing the entire pipeline:

Encrypted Data Streams:

All data crossing the PCIe bus between the CPU and the GPU is encrypted using AES-GCM-256. This includes CUDA kernels, command buffers, and all DMA transfers.

Hardware Firewalls & Side-Channel Defense:

Inside the GPU, internal firewalls are activated, and performance counters (which hackers exploit for side-channel attacks) are completely disabled.

This means you can confidently deploy highly sensitive, proprietary models without fearing intellectual property theft.

4. Composite Attestation (Don't Trust, Verify)

AEP-83 applies the "don't trust, verify" principle directly to the physical hardware through Composite Attestation.

How do you know the provider actually put your workload in a secure Kata Container and didn't just route it to a standard, unencrypted machine? You verify the hardware cryptographically:

  1. From within the Kata VM, your workload collects evidence from both the CPU TEE and the NVIDIA in-guest driver.

  2. This evidence is sent to a third-party verifier (like Intel Trust Authority and NVIDIA Remote Attestation Service).

  3. The verifier checks the measurements against golden reference files and returns a cryptographically signed JWT (JSON Web Token).

  4. You mathematically verify this token to ensure the hardware is genuine, TEE is active, and your workload is locked inside before your application starts handling sensitive data.
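The client-side check in step 4 can be sketched as below. This is a hedged illustration: the claim names (`tee_active`, `measurement_match`) are hypothetical placeholders for whatever schema the verifier actually returns, and a real client must first verify the JWT's signature against the verifier's published public key (with a JOSE library) before trusting any claim:

```python
import base64
import json

def decode_jwt_claims(token):
    """Decode a JWT's payload.

    NOTE: this does NOT verify the signature - a real client must check
    the signature against the verifier's public key before trusting claims.
    """
    _header, payload, _signature = token.split(".")
    padded = payload + "=" * (-len(payload) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(padded))

def attestation_ok(claims):
    # Hypothetical claim names; the real verifier defines its own schema.
    return bool(claims.get("tee_active")) and bool(claims.get("measurement_match"))
```

Only after `attestation_ok` (and the signature check) passes should the application begin handling sensitive data.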

5. Who Is Using Kata Containers Today - Enterprise Validation

Akash isn't betting on a fringe technology - Kata Containers (an OpenInfra Foundation project) is heavily backed and actively used in production by the biggest players in tech to solve multi-tenant security:

  • NVIDIA: NVIDIA actively leverages Kata Containers alongside their NIM (NVIDIA Inference Microservices) operator to deliver "trusted AI anywhere" at GPU scale, ensuring model IP is protected even on untrusted infrastructure.

  • Ant Group: The Chinese fintech giant (an Alibaba affiliate) runs tens of thousands of tasks on Kata Containers in production. They use the micro-VM isolation to secure their Cloud Workload Protection Platform (CWPP) and ensure payment processing is strictly isolated from other workloads.

  • Red Hat & IBM: Red Hat integrates Kata into OpenShift as OpenShift Sandboxed Containers, providing extra isolation for cloud-native workloads, while IBM views Kata as the gold standard for securely isolating customer CI/CD pipelines.

  • Intel, AMD, and Apple: All three tech behemoths are major upstream contributors to the Kata codebase, with Intel and AMD specifically optimizing it to work flawlessly with their respective TEE hardware (TDX and SEV-SNP).

6. Zero-Friction Developer Experience

Perhaps the most brilliant aspect of AEP-83 is its implementation via "minimal SDL surface." You don't need to rewrite your apps, build custom infrastructure, or learn new deployment schemas.

To request confidential compute, an Akash dev (tenant) simply adds a single, opt-in attribute to their Stack Definition Language (SDL) placement profile:

attributes:
  confidential-compute: true 

That’s it. The Akash marketplace automatically filters out non-TEE providers, matches your order with bids from verified TEE-capable providers, selects the correct Kata runtime class (whether CPU-only or CPU+GPU), and seamlessly packages your Docker image into the secure micro-VM.
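In the context of a full SDL, the placement profile might look like the sketch below. The profile name and pricing values are illustrative; the `confidential-compute` attribute is the only AEP-83-specific line:

```yaml
profiles:
  placement:
    secure-dc:                       # illustrative profile name
      attributes:
        confidential-compute: true   # the single AEP-83 opt-in flag
      pricing:
        app:
          denom: uakt
          amount: 1000               # illustrative bid price
```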

Why This Matters for Akash's Future

Until now, massive industries - healthcare data processing, financial services, and enterprise AI research - couldn't leverage decentralized networks due to strict compliance requirements (HIPAA, SOC 2) and security risks.

By integrating Kata Containers and AEP-83, Akash fundamentally solves the decentralized trust problem. It unlocks a massive new tier of institutional and enterprise demand, proving that decentralized, permissionless infrastructure can actually be more secure and private than relying on centralized tech giants.

References

https://github.com/akash-network/AEP/tree/main/spec/aep-83

https://katacontainers.io/docs/

https://katacontainers.io/use-cases/