Architecture & Concepts
Understand how Lobstack provisions infrastructure, runs agents, and connects to AI models.
How Lobstack Works#
Lobstack deploys AI agents on dedicated cloud VMs. When you subscribe and configure an agent, Lobstack provisions a fresh Linux server, installs the agent bridge, configures your AI model, and starts the agent service. Each VM is yours alone — no shared containers or multi-tenant environments.
```
User Dashboard → Lobstack API → Cloud Provider (Hetzner)
                                        ↓
                                  Dedicated VM
                                  ├── Agent Bridge (Python, port 80)
                                  │   ├── REST API (/chat, /config, /skills, /diag)
                                  │   ├── WebSocket (port 8765)
                                  │   └── Tool execution engine
                                  ├── OpenClaw Gateway (port 18789, optional)
                                  └── Agent config (/root/lobstack-config/)
```
Dedicated Virtual Machines#
Every agent runs on its own VM provisioned on Hetzner Cloud. The VM is created with a cloud-init script that installs all dependencies, configures the agent, and starts services automatically.
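As a rough illustration of what that cloud-init provisioning step might look like, here is a minimal sketch. The package list, config path contents, and service name `lobstack-bridge` are assumptions for illustration; Lobstack's actual cloud-init script is not shown in this document.

```yaml
#cloud-config
# Illustrative sketch only; not Lobstack's real provisioning script.
packages:
  - python3
  - python3-pip
write_files:
  - path: /root/lobstack-config/agent.json   # config dir from the diagram above
    content: |
      {"model": "example-model", "plan": "starter"}
runcmd:
  - pip3 install aiohttp                      # the bridge is a Python aiohttp service
  - systemctl enable --now lobstack-bridge    # hypothetical service name
```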
| Plan | vCPU | RAM | Disk | Server Types |
|---|---|---|---|---|
| Starter | 2 | 2 GB | 50 GB | cpx11 / cx22 |
| Pro | 2 | 4 GB | 80 GB | cpx21 / cx23 |
| Performance | 4 | 8 GB | 160 GB | cpx31 / cx33 |
| Enterprise | 8 | 32 GB | 320 GB | cpx51 / cx53 |
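The plan table above can be read as a simple mapping from plan to candidate Hetzner server types, with the first type that a region offers being selected. This is a hypothetical reconstruction; the real provisioning logic is not public.

```python
# Hypothetical plan-to-server-type mapping, reconstructed from the
# table above. Names and selection logic are illustrative assumptions.
PLANS = {
    "starter":     {"vcpu": 2, "ram_gb": 2,  "disk_gb": 50,  "server_types": ["cpx11", "cx22"]},
    "pro":         {"vcpu": 2, "ram_gb": 4,  "disk_gb": 80,  "server_types": ["cpx21", "cx23"]},
    "performance": {"vcpu": 4, "ram_gb": 8,  "disk_gb": 160, "server_types": ["cpx31", "cx33"]},
    "enterprise":  {"vcpu": 8, "ram_gb": 32, "disk_gb": 320, "server_types": ["cpx51", "cx53"]},
}

def pick_server_type(plan: str, available: set[str]) -> str:
    """Return the first server type for `plan` that the region offers."""
    for st in PLANS[plan]["server_types"]:
        if st in available:
            return st
    raise LookupError(f"no server type for plan {plan!r} is available")
```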
The Agent Bridge#
The bridge is a Python (aiohttp) service that runs on each VM. It serves as the communication layer between the Lobstack platform and the AI model. The bridge handles:
- **Chat processing:** Receives messages from the dashboard, sends them to the AI model, executes tool calls, and returns responses.
- **Tool execution:** Runs terminal commands, reads/writes files, and calls external APIs (GitHub, Discord, Gmail, etc.) based on the AI model's tool-use decisions.
- **Config management:** Receives live config pushes (SOUL.md, memory, skills) from the platform without requiring a restart.
- **Diagnostics:** Exposes health, system metrics, service status, and logs via the /diag endpoint.
- **WebSocket:** Streams AI responses back to the dashboard in real time on port 8765.
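To make the diagnostics responsibility concrete, here is a standard-library-only sketch of the kind of payload a /diag endpoint might assemble. The field names and schema are assumptions; the real bridge's response format is not documented here.

```python
import os
import shutil
import time

START_TIME = time.time()

def build_diag() -> dict:
    """Sketch of a /diag-style health payload (hypothetical schema)."""
    du = shutil.disk_usage("/")
    return {
        "status": "ok",
        "uptime_s": round(time.time() - START_TIME, 1),
        "load_avg": os.getloadavg(),  # 1/5/15-minute load averages (Unix only)
        "disk": {"total_gb": du.total // 2**30, "free_gb": du.free // 2**30},
    }
```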
Direct Mode vs Gateway Mode#
The bridge supports two modes for calling AI models:
- Direct Mode (primary) — The bridge calls AI provider APIs directly (Anthropic, OpenAI, etc.) and handles tool execution in a loop.
- Gateway Mode (optional) — The bridge routes through an OpenClaw gateway running locally on the VM for advanced routing and model management.
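The Direct Mode loop can be sketched as follows: call the model, execute any tool call it returns, append the result, and repeat until the model produces a plain answer. The `call_model` callable and message shapes below are stand-ins, not a real provider API.

```python
# Sketch of a direct-mode tool loop, assuming a simplified message format.
def run_direct_mode(call_model, execute_tool, user_msg: str, max_turns: int = 10):
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_turns):
        reply = call_model(messages)        # provider API call goes here
        if reply.get("tool_call") is None:  # plain answer: loop is done
            return reply["content"]
        name, args = reply["tool_call"]
        result = execute_tool(name, args)   # run the tool on the VM
        messages.append({"role": "tool", "name": name, "content": result})
    raise RuntimeError("tool loop did not terminate")
```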
Tool System#
Lobstack agents have real tool execution capabilities. When the AI model decides to use a tool, the bridge executes it on the VM and returns the result. This is not simulated — your agent can actually run commands, edit files, and call APIs.
| Tool | Description | Always Available |
|---|---|---|
| run_terminal_command | Execute shell commands on the VM | Yes |
| read_file / write_file | Read and write files on the filesystem | Yes |
| list_directory | List files in a directory | Yes |
| browse_webpage | Fetch and extract text from web pages | Yes |
| http_request | Make arbitrary HTTP requests | Yes |
| discord_*, slack_*, etc. | Integration-specific tools | When skill enabled |
| twitter_*, github_*, gmail_* | API-backed skill tools | When skill enabled |
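A dispatch table for the always-available tools above might look like the following sketch. The registry shape is an assumption; the real bridge's dispatch code is not public, and skill tools would be registered dynamically when enabled.

```python
import subprocess
from pathlib import Path

# Hypothetical registry mirroring the always-available tools above.
TOOLS = {
    "run_terminal_command": lambda cmd: subprocess.run(
        cmd, shell=True, capture_output=True, text=True).stdout,
    "read_file":      lambda path: Path(path).read_text(),
    "write_file":     lambda path, content: Path(path).write_text(content),
    "list_directory": lambda path: sorted(p.name for p in Path(path).iterdir()),
}

def execute_tool(name: str, *args):
    """Look up a tool by name and run it with the model-supplied arguments."""
    if name not in TOOLS:  # skill tools would be registered here when enabled
        raise KeyError(f"unknown or disabled tool: {name}")
    return TOOLS[name](*args)
```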
Session Management#
Each chat session is isolated. When you create a new chat in the dashboard, the agent starts with a fresh conversation history. The bridge maintains per-session histories using an LRU cache (max 50 sessions). Older sessions are evicted automatically to manage memory.
Sessions are stored in the Lobstack database and can be renamed, deleted, or switched from the dashboard sidebar.
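The per-session LRU behavior described above can be sketched with an `OrderedDict`: touching a session moves it to the back, and when the cap is exceeded the least-recently-used session is evicted. This is an illustrative sketch, not the bridge's actual implementation.

```python
from collections import OrderedDict

class SessionCache:
    """LRU cache of per-session chat histories (max 50, oldest evicted)."""

    def __init__(self, max_sessions: int = 50):
        self.max = max_sessions
        self._histories = OrderedDict()

    def history(self, session_id: str) -> list:
        if session_id in self._histories:
            self._histories.move_to_end(session_id)  # mark as recently used
        else:
            self._histories[session_id] = []         # fresh conversation
            if len(self._histories) > self.max:
                self._histories.popitem(last=False)  # evict oldest session
        return self._histories[session_id]
```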
Memory System#
The memory system gives your agent persistent knowledge across sessions. Memories are key-value pairs organized into categories (general, user preferences, facts, context, personality, skills). They are:
- Injected into the system prompt on every message
- Managed from the Dashboard Memory panel (with D3.js force-graph visualization)
- Pushed to the bridge in real-time when created or updated
- Refreshed on a 5-minute cycle as a safety net
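Injecting categorized key-value memories into the system prompt, as described above, might look like the following sketch. The prompt layout is an assumption; the bridge's actual format is not shown in this document.

```python
# Sketch of memory injection: render categorized key-value memories
# beneath the base system prompt. Section format is hypothetical.
def build_system_prompt(base: str, memories: dict) -> str:
    lines = [base, "", "## Memories"]
    for category, entries in memories.items():   # e.g. "facts", "preferences"
        lines.append(f"### {category}")
        for key, value in entries.items():
            lines.append(f"- {key}: {value}")
    return "\n".join(lines)
```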