# Sandbox Isolation
Every agent runs in a gVisor sandbox with kernel-level isolation, network segmentation, resource limits, and zero inter-agent communication.
## gVisor Application Kernel
Each agent pod runs with `runtimeClassName: gvisor`, which selects the `runsc` container runtime. gVisor interposes a user-space kernel (the "Sentry") between the container and the host kernel, providing:
**Syscall Filtering**
The Sentry implements Linux syscalls in user space. Only a minimal set of host syscalls is needed, dramatically reducing the attack surface.

**Kernel Isolation**
Container processes never interact with the host kernel directly, so kernel exploits launched from the container workload cannot escape to the host.

**Memory Isolation**
Each sandbox has its own memory management. One container cannot read another container's memory, even with a kernel exploit.

**Filesystem Isolation**
The Gofer process mediates all filesystem access. Containers cannot reach files outside their mount namespace.
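A pod opts into the sandbox by naming the RuntimeClass in its spec. A minimal sketch (the pod name and image are illustrative, not part of the deployment):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-agent              # illustrative name
  namespace: lobstack-agents
spec:
  runtimeClassName: gvisor         # schedules onto runsc-enabled nodes
  containers:
  - name: agent
    image: example.com/agent:latest   # illustrative image
```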
```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
overhead:
  podFixed:
    cpu: 100m       # Small overhead for the Sentry process
    memory: 64Mi
scheduling:
  nodeSelector:
    sandbox.gvisor.dev/enabled: "true"
  tolerations:
  - key: sandbox.gvisor.dev/runtime
    operator: Equal
    value: runsc
    effect: NoSchedule
```

## Network Isolation
Agent pods cannot communicate with each other. Kubernetes NetworkPolicies enforce strict ingress/egress rules at the network level, and Istio AuthorizationPolicies enforce them at the application level (belt and suspenders).
### Ingress Rules
| Source | Ports | Purpose |
|---|---|---|
| lobstack-control-plane/lobstack-api | 8080, 8765 | Health checks + chat relay |
| istio-system/* | 15006, 15001 | Istio sidecar proxy traffic |
| All other pods | DENIED | No inter-agent communication |
### Egress Rules
| Destination | Ports | Purpose |
|---|---|---|
| kube-dns | 53 (UDP/TCP) | DNS resolution |
| lobstack-api | 80 | Status callbacks, heartbeat |
| Vault | 8200 | Secret retrieval |
| Public Internet | 443 only | AI APIs (Anthropic, OpenAI, Google, xAI) |
| Private Networks (10.0.0.0/8, etc.) | BLOCKED | Cannot reach internal infrastructure |
| All other destinations | DENIED | Default deny |
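The private-range carve-outs in the egress table can be sanity-checked with a short script. This is purely illustrative: it mirrors the blocked CIDR list rather than querying the cluster.

```python
from ipaddress import ip_address, ip_network

# Mirror of the CIDRs excluded from the 0.0.0.0/0 egress block.
BLOCKED_RANGES = [
    ip_network("10.0.0.0/8"),      # private
    ip_network("172.16.0.0/12"),   # private
    ip_network("192.168.0.0/16"),  # private
    ip_network("169.254.0.0/16"),  # link-local
]

def egress_allowed(ip: str) -> bool:
    """True if a destination IP clears the ipBlock except-list."""
    addr = ip_address(ip)
    return not any(addr in net for net in BLOCKED_RANGES)

print(egress_allowed("142.250.80.14"))  # public address: allowed (still 443-only)
print(egress_allowed("10.12.0.5"))      # private address: blocked
```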
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: agent-isolation
  namespace: lobstack-agents
spec:
  podSelector:
    matchLabels:
      app: lobstack-agent
  policyTypes: [Ingress, Egress]
  egress:
  # External AI APIs only (HTTPS)
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8       # Block all private ranges
        - 172.16.0.0/12
        - 192.168.0.0/16
        - 169.254.0.0/16   # Block link-local
    ports:
    - port: 443
      protocol: TCP
```

## Istio Authorization Layer
On top of Kubernetes NetworkPolicies, Istio AuthorizationPolicies enforce application-layer access control using service identity (SPIFFE IDs from mTLS certificates).
```yaml
# Only the control plane can reach agents
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: agent-access-control
  namespace: lobstack-agents
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["lobstack-control-plane"]
        principals:
        - "cluster.local/ns/lobstack-control-plane/sa/lobstack-api"
    to:
    - operation:
        ports: ["8080", "8765"]
---
# Default deny — anything not explicitly allowed is blocked
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: deny-all-default
  namespace: lobstack-agents
spec: {}
```

## Resource Isolation
Each agent is constrained to its tier's resource limits. The namespace also has ResourceQuotas and LimitRanges to prevent any single agent or group of agents from consuming all cluster resources.
| Control | Value | Enforcement |
|---|---|---|
| Per-container CPU limit | 1-8 vCPU (by tier) | Kubernetes cgroups |
| Per-container memory limit | 2-16 Gi (by tier) | Kubernetes cgroups + OOM kill |
| Workspace storage | 5-50 Gi (by tier) | emptyDir sizeLimit |
| Namespace total pods | 100 (200 in production) | ResourceQuota |
| Namespace total CPU | 64 cores (200 in production) | ResourceQuota |
| Namespace total memory | 128 Gi (400 Gi in production) | ResourceQuota |
| Min container CPU | 100m | LimitRange |
| Max container memory | 8 Gi | LimitRange |
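As a sketch, the namespace-level caps in the table map onto a ResourceQuota and LimitRange like the following (object names are illustrative, and the lower value of each pair is shown):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: agent-quota              # illustrative name
  namespace: lobstack-agents
spec:
  hard:
    pods: "100"                  # 200 in production
    limits.cpu: "64"             # 200 in production
    limits.memory: 128Gi         # 400Gi in production
---
apiVersion: v1
kind: LimitRange
metadata:
  name: agent-limits             # illustrative name
  namespace: lobstack-agents
spec:
  limits:
  - type: Container
    min:
      cpu: 100m                  # floor from the table
    max:
      memory: 8Gi                # ceiling from the table
```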
## Pod Security Context

Every agent pod runs with a hardened security context that follows the restricted Pod Security Standard, the strictest level available in Kubernetes.
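If the restricted standard is enforced via Pod Security Admission (an assumption; this page does not state the enforcement mechanism), the namespace would carry labels like:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: lobstack-agents
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
```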
```yaml
spec:
  securityContext:
    runAsNonRoot: true                 # Never run as root
    runAsUser: 65534                   # nobody user
    runAsGroup: 65534
    fsGroup: 65534
    seccompProfile:
      type: RuntimeDefault             # OS-level syscall filtering
  automountServiceAccountToken: false  # No K8s API access
  containers:
  - securityContext:
      allowPrivilegeEscalation: false  # No setuid/setgid
      readOnlyRootFilesystem: false    # Agents need write access
      capabilities:
        drop: ["ALL"]                  # Drop ALL Linux capabilities
```

**Non-Root Execution**
Agent processes run as UID 65534 (nobody). No root access is possible inside the container.
**No Privilege Escalation**
`allowPrivilegeEscalation: false` prevents setuid binaries and capability grants.

**Capabilities Dropped**
All Linux capabilities are dropped: no raw sockets, no ptrace, no `sys_admin`.

**No K8s API Access**
The ServiceAccount token is not mounted, so agents cannot interact with the Kubernetes API.

**Seccomp Profile**
The RuntimeDefault seccomp profile filters dangerous syscalls at the OS level.
## Secrets Isolation
Agent secrets are stored in Vault with templated policies. Each agent can only read secrets at its own path — there is no way for one agent to access another agent's API keys, tokens, or configuration.
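A templated Vault policy of roughly this shape achieves that scoping. This is a sketch: the entity-metadata key `agent_id` and the exact mount layout are assumptions, not confirmed by this page.

```hcl
# Each agent's Vault entity carries its agent id in metadata; the template
# resolves to that agent's own KV v2 path only (assumed key: agent_id).
path "secret/data/lobstack/agent/{{identity.entity.metadata.agent_id}}" {
  capabilities = ["read"]
}

# No list grant on secret/metadata/lobstack/agent/, so agents cannot
# enumerate sibling paths.
```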
```
Agent A (id: abc123)
  ✓ Can read:    secret/data/lobstack/agent/abc123
  ✗ Cannot read: secret/data/lobstack/agent/def456
  ✗ Cannot read: secret/data/lobstack/api-keys
  ✗ Cannot list: secret/data/lobstack/agent/

Agent B (id: def456)
  ✓ Can read:    secret/data/lobstack/agent/def456
  ✗ Cannot read: secret/data/lobstack/agent/abc123
  ✗ Cannot read: secret/data/lobstack/api-keys
```

## Defense in Depth