OpenClaw Free Self-Hosting Guide — 2026 Edition

Self-Host OpenClaw on a Free VPS — Step by Step

You can self-host OpenClaw with Ollama for $0/month using Oracle Cloud's Always Free tier (4 ARM CPUs, 24 GB RAM, 200 GB storage). This guide walks you through the complete setup: from provisioning your server to running local AI models behind a secure HTTPS reverse proxy.

$0/month hosting • ~45 min setup • 8 steps • Beginner-friendly

Why Oracle Cloud Free Tier?

Oracle Cloud's Always Free tier offers the most generous free compute resources of any major cloud provider. With 24 GB of RAM, you can comfortably run 7B parameter models and even quantized 13B models, something impossible on the AWS or Google Cloud free tiers, which offer only 1 GB of RAM.

  • 4 ARM OCPUs
  • 24 GB RAM
  • 200 GB SSD storage
  • 10 TB monthly transfer
  • 2 VCNs
  • $0 forever
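As a quick sanity check on what fits in that RAM, you can estimate a quantized model's footprint. The figures below are rough assumptions (about 0.6 bytes per parameter at Q4-level quantization, plus roughly 1 GB of context/KV-cache overhead), not exact numbers:

```shell
# Rough RAM estimate for a quantized model (assumed: ~0.6 bytes/parameter
# at Q4-level quantization, plus ~1 GB for context/KV-cache overhead):
params_billions=7
awk -v p="$params_billions" \
  'BEGIN { printf "~%.1f GB RAM for a %sB model at Q4\n", p * 0.6 + 1, p }'
```

By this estimate a 7B model lands around 5 GB, leaving plenty of the 24 GB allocation for the OS, OpenClaw itself, and a second loaded model.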

Before You Start

Make sure you have the following ready:

  • Credit/Debit Card: for Oracle Cloud identity verification (you won't be charged)
  • Domain Name: for HTTPS access (a free DuckDNS subdomain works)
  • SSH Client: Terminal (Mac/Linux) or PuTTY (Windows)
  • Basic Terminal Knowledge: comfortable running commands and editing files
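If you don't have an SSH key pair yet, generate one now so it's ready when you provision the instance. The file name below is just an example; omit `-N ""` if you prefer to be prompted for a passphrase:

```shell
# Generate an Ed25519 key pair for the server (example file name;
# -N "" creates it without a passphrase for non-interactive use):
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/oracle_openclaw -C "openclaw-vps" -N ""

# Print the public key; paste this into the Oracle Cloud console
# when you create the instance in Step 2:
cat ~/.ssh/oracle_openclaw.pub
```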

Prefer a Professional Setup?

If you'd rather skip the DIY process and get a security-hardened, professionally configured OpenClaw deployment with curated agent skills, Telegram integration, and Google Services — check out our managed OpenClaw setup service. We handle everything for $499.

Step 1 of 8

Create Your Oracle Cloud Free Account

Oracle Cloud Infrastructure (OCI) offers an Always Free tier that includes ARM-based compute instances with up to 4 OCPUs and 24 GB of RAM — more than enough to run OpenClaw with local AI models. Unlike AWS or Google Cloud free tiers, Oracle's Always Free resources never expire.

What to do:

  • Go to cloud.oracle.com and click "Start for Free"
  • You'll need a valid credit card for identity verification (you won't be charged)
  • Select your home region carefully — free tier resources must be created here
  • Complete email verification and set up your tenancy
  • The free tier includes: 4 ARM OCPUs, 24 GB RAM, 200 GB block storage, 10 TB/month outbound data
Terminal
# Oracle Cloud Always Free Tier includes:
# ✓ 4 ARM Ampere A1 OCPUs (flexible allocation)
# ✓ 24 GB total RAM
# ✓ 200 GB block volume storage
# ✓ 10 TB/month outbound data transfer
# ✓ 2 Virtual Cloud Networks (VCNs)
# ✓ 1 Flexible Load Balancer (10 Mbps)
#
# Sign up at: https://cloud.oracle.com
#
# Important: Choose your home region wisely!
# Free tier resources can ONLY be created in your home region.
# Recommended: Pick a region close to you geographically.
Step 2 of 8

Provision Your ARM Compute Instance

Create an ARM-based Ampere A1 instance with the maximum free tier allocation. ARM processors offer excellent performance per watt, and the 24 GB of RAM allows you to run 7B and even quantized 13B parameter AI models locally.

ARM instances on Oracle's free tier are in high demand. If provisioning fails, try again at off-peak hours (early morning UTC) or use a retry script. The instances are well worth the wait.
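The retry approach can be sketched as a small shell helper. This is a generic sketch: the actual launch command (for example via the OCI CLI) depends on your tenancy, shape, and subnet, and is left as a placeholder:

```shell
# Generic retry helper: runs a command until it succeeds, waiting 60s
# between attempts. Substitute your real OCI CLI launch command.
retry_until_success() {
  local attempt=1
  until "$@"; do
    echo "Attempt $attempt failed; retrying in 60s..." >&2
    attempt=$((attempt + 1))
    sleep 60
  done
  echo "Succeeded after $attempt attempt(s)."
}

# Placeholder usage (fill in your own shape/image/subnet parameters):
# retry_until_success oci compute instance launch ...
```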

What to do:

  • Navigate to Compute → Instances → Create Instance
  • Choose "Ampere" (ARM) as the shape — select VM.Standard.A1.Flex
  • Allocate 4 OCPUs and 24 GB RAM (maximum free tier)
  • Select Ubuntu 22.04 or 24.04 as the OS image
  • Set boot volume to 100 GB (up to 200 GB free) and add your SSH public key
  • Configure VCN security list to allow ports 22 (SSH), 80 (HTTP), and 443 (HTTPS)
Terminal
# After creating your instance, note the public IP address.
# SSH into your new server:
ssh -i ~/.ssh/your-private-key ubuntu@YOUR_PUBLIC_IP

# First, update the system packages:
sudo apt update && sudo apt upgrade -y

# Set the hostname (optional, but helpful):
sudo hostnamectl set-hostname openclaw-server

# Create a swap file (important for handling memory spikes):
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make swap permanent:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Set swappiness to 10 (use RAM first, swap as fallback):
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# Verify swap is active:
free -h
Step 3 of 8

Secure Your Server

Before installing anything, lock down your server with essential security measures. This includes configuring the firewall, hardening SSH access, and installing intrusion prevention tools.

Ensure you have SSH key access working before disabling password authentication! If you lock yourself out, you'll need to use the Oracle Cloud console to access your instance.

What to do:

  • Configure UFW firewall to only allow SSH, HTTP, and HTTPS traffic
  • Disable password-based SSH authentication (use keys only)
  • Install Fail2Ban to block brute-force SSH attempts
  • Enable automatic security updates to stay patched
  • Disable root SSH login for additional security
Terminal
# Install security tools:
sudo apt install -y ufw fail2ban unattended-upgrades

# Configure UFW firewall:
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 80/tcp    # HTTP (for Let's Encrypt)
sudo ufw allow 443/tcp   # HTTPS
sudo ufw --force enable

# Verify firewall status:
sudo ufw status verbose

# Configure Fail2Ban for SSH protection:
sudo tee /etc/fail2ban/jail.local > /dev/null << 'EOF'
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
bantime = 3600
findtime = 600
EOF

sudo systemctl enable fail2ban
sudo systemctl start fail2ban

# Harden SSH configuration (the patterns below also match commented-out
# defaults like "#PermitRootLogin prohibit-password"; note that Ubuntu
# may ship overrides in /etc/ssh/sshd_config.d/ which take precedence):
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config

# Validate the config before restarting (a typo here can lock you out):
sudo sshd -t
sudo systemctl restart ssh

# Enable automatic security updates:
sudo dpkg-reconfigure -plow unattended-upgrades
Step 4 of 8

Install Docker & Docker Compose

Docker is the recommended way to deploy OpenClaw and Ollama. It provides isolation, easy updates, and reproducible deployments. Docker Compose v2 manages the multi-container setup with a single configuration file.

What to do:

  • Install Docker Engine using the official repository (not the Ubuntu snap package)
  • Docker Compose v2 is included with modern Docker Engine installations
  • Add your user to the docker group so you don't need sudo for every command
  • Verify the installation with a test container
Terminal
# Install Docker using the official convenience script:
curl -fsSL https://get.docker.com | sudo sh

# Add your user to the docker group:
sudo usermod -aG docker $USER

# Apply group changes (or log out and back in):
newgrp docker

# Verify Docker is installed and running:
docker --version
docker compose version

# Test with a hello-world container:
docker run --rm hello-world

# Enable Docker to start on boot:
sudo systemctl enable docker
Step 5 of 8

Deploy OpenClaw with Docker Compose

This is the core step. You'll create a Docker Compose configuration that runs Ollama (the local AI model server) alongside the OpenClaw web interface. The setup uses named Docker volumes for persistent data storage.

What to do:

  • Create a project directory and Docker Compose configuration file
  • The compose file defines two services: Ollama (model server) and OpenClaw (web UI)
  • Ollama runs locally and handles AI model inference on your ARM CPU
  • OpenClaw connects to Ollama and provides a beautiful web interface
  • Data is stored in named Docker volumes that persist across container restarts
  • Generate a strong secret key for session encryption
Terminal
# Create project directory:
mkdir -p ~/openclaw && cd ~/openclaw

# Generate a secure secret key:
WEBUI_SECRET=$(openssl rand -base64 32)
echo "Your secret key: $WEBUI_SECRET"
echo "Save this somewhere safe!"

# Create the Docker Compose file:
cat > docker-compose.yml << 'COMPOSE'
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ollama_data:/root/.ollama
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          memory: 4G

  openclaw:
    image: ghcr.io/open-webui/open-webui:main
    container_name: openclaw
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - WEBUI_SECRET_KEY=${WEBUI_SECRET_KEY}
      - ENABLE_SIGNUP=true
      - DEFAULT_USER_ROLE=pending
    volumes:
      - openclaw_data:/app/backend/data
    depends_on:
      - ollama
    ports:
      - "127.0.0.1:3000:8080"
    restart: unless-stopped

volumes:
  ollama_data:
  openclaw_data:
COMPOSE

# Create .env file with your secret key:
echo "WEBUI_SECRET_KEY=$WEBUI_SECRET" > .env

# Start the services:
docker compose up -d

# Check that both containers are running:
docker compose ps

# View logs to confirm startup:
docker compose logs -f --tail 50
Step 6 of 8

Set Up Nginx Reverse Proxy & Free SSL

Nginx acts as a reverse proxy in front of OpenClaw, handling SSL termination, WebSocket connections (essential for streaming AI responses), and static asset caching. Let's Encrypt provides free, auto-renewing SSL certificates.

You MUST have a domain name pointed at your server's IP address before running certbot. DNS propagation typically takes 5-30 minutes. A free subdomain from a dynamic DNS service such as DuckDNS works fine.

What to do:

  • Install Nginx and Certbot (Let's Encrypt client)
  • Point your domain's DNS A record to your server's public IP
  • Configure Nginx with WebSocket support (critical for streaming responses)
  • Disable proxy buffering to prevent garbled AI responses
  • Obtain a free SSL certificate from Let's Encrypt
  • Set up automatic certificate renewal (certificates expire every 90 days)
Terminal
# Install Nginx and Certbot:
sudo apt install -y nginx certbot python3-certbot-nginx

# Create the Nginx configuration (replace YOUR_DOMAIN below with your actual domain):
sudo tee /etc/nginx/sites-available/openclaw > /dev/null << 'NGINX'
server {
    listen 80;
    server_name YOUR_DOMAIN;

    # Let's Encrypt verification
    location /.well-known/acme-challenge/ {
        root /var/www/html;
    }

    # Redirect all HTTP to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    # Note: "http2 on;" requires nginx >= 1.25.1; the listen-line form below
    # also works on the older nginx packaged with Ubuntu 22.04/24.04.
    listen 443 ssl http2;
    server_name YOUR_DOMAIN;

    # SSL certificates (will be auto-configured by certbot)
    ssl_certificate /etc/letsencrypt/live/YOUR_DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/YOUR_DOMAIN/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    # WebSocket support & streaming (CRITICAL for OpenClaw)
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # IMPORTANT: Disable buffering for streaming responses
        proxy_buffering off;
        proxy_cache off;

        # Allow long-running model responses
        proxy_read_timeout 600s;
        proxy_send_timeout 600s;

        # Allow large file uploads (for RAG documents)
        client_max_body_size 50M;
    }

    # Cache static assets
    location ~* \.(css|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|eot|js)$ {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        expires 7d;
        add_header Cache-Control "public, immutable";
    }
}
NGINX

# Obtain the SSL certificate BEFORE enabling the new site: the HTTPS
# server block references certificate files that don't exist yet, and
# nginx will refuse to start without them. Ubuntu's default nginx site
# serves /var/www/html, which certbot can use for the HTTP-01 challenge.
# Replace YOUR_DOMAIN and YOUR_EMAIL:
sudo certbot certonly --webroot -w /var/www/html -d YOUR_DOMAIN --non-interactive --agree-tos -m YOUR_EMAIL

# Now enable the site:
sudo ln -sf /etc/nginx/sites-available/openclaw /etc/nginx/sites-enabled/
sudo rm -f /etc/nginx/sites-enabled/default

# Test the Nginx configuration and apply it:
sudo nginx -t
sudo systemctl restart nginx

# Verify auto-renewal works (renewals reuse the same webroot):
sudo certbot renew --dry-run

# Test your setup — visit https://YOUR_DOMAIN in your browser!
Step 7 of 8

Download AI Models

With Ollama running, pull your first AI models. Start with a lightweight model for testing, then add larger models based on your available RAM. With 24 GB RAM on Oracle's free tier, you can comfortably run 7B models and quantized 13B models.

What to do:

  • Use 'docker exec' to run Ollama commands inside the container
  • Start with Llama 3.2 3B (lightweight, fast, great for testing)
  • Add Llama 3.1 8B for general-purpose use (best quality-to-size ratio)
  • Try Qwen 2.5 or Gemma 2 for variety and multilingual support
  • Use Q4_K_M quantization for the best speed/quality tradeoff on ARM
  • Each 7B model uses approximately 4-5 GB of disk space
Terminal
# Pull your first model (lightweight, great for testing):
docker exec -it ollama ollama pull llama3.2:3b

# Pull the recommended general-purpose model:
docker exec -it ollama ollama pull llama3.1:8b

# Optional: Pull additional models based on your needs:
docker exec -it ollama ollama pull qwen2.5:7b      # Great multilingual model
docker exec -it ollama ollama pull gemma2:9b         # Google's efficient model
docker exec -it ollama ollama pull phi3:mini         # Microsoft's compact model
docker exec -it ollama ollama pull deepseek-r1:8b    # Reasoning model

# List all downloaded models:
docker exec -it ollama ollama list

# Test a model with a quick prompt:
docker exec -it ollama ollama run llama3.1:8b "What is 2+2? Answer briefly."

# Check which models are loaded in memory:
docker exec -it ollama ollama ps

# Remove a model you no longer need:
# docker exec -it ollama ollama rm model-name
Step 8 of 8

Configure & Optimize OpenClaw

Access your OpenClaw instance through the browser, create your admin account, configure performance settings, and optimize the deployment for the free tier hardware. The first account you create automatically becomes the administrator.

What to do:

  • Visit https://YOUR_DOMAIN in your browser
  • Create your admin account (first signup gets admin privileges)
  • Disable public signups after creating your account
  • Configure performance settings for ARM + limited resources
  • Enable caching to speed up model and page loading
  • Set up task models to use lightweight models for background tasks
Terminal
# After creating your admin account in the browser,
# optimize OpenClaw for free tier hardware.

# Stop the services temporarily:
cd ~/openclaw
docker compose down

# Update docker-compose.yml with production optimizations:
cat > docker-compose.yml << 'COMPOSE'
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ollama_data:/root/.ollama
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
      - OLLAMA_FLASH_ATTENTION=1
      - OLLAMA_NUM_PARALLEL=2
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          memory: 4G
        limits:
          memory: 20G

  openclaw:
    image: ghcr.io/open-webui/open-webui:main
    container_name: openclaw
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - WEBUI_SECRET_KEY=${WEBUI_SECRET_KEY}
      - ENABLE_SIGNUP=false
      - DEFAULT_USER_ROLE=pending
      - WEBUI_AUTH=true
      - WEBUI_SESSION_COOKIE_SECURE=true
      - WEBUI_SESSION_COOKIE_SAME_SITE=lax
      # Performance optimizations
      - ENABLE_REALTIME_CHAT_SAVE=false
      - ENABLE_BASE_MODELS_CACHE=true
      - AUDIO_STT_ENGINE=webapi
      - ENABLE_AUTOCOMPLETE_GENERATION=false
    volumes:
      - openclaw_data:/app/backend/data
    depends_on:
      - ollama
    ports:
      - "127.0.0.1:3000:8080"
    restart: unless-stopped

volumes:
  ollama_data:
  openclaw_data:
COMPOSE

# Restart with optimized configuration:
docker compose up -d

# Verify both services are healthy:
docker compose ps
docker compose logs --tail 20
Model Guide

Recommended AI Models for Free Tier

With 24 GB RAM, here are the models that work best on CPU-only ARM hardware.

Model                      Size     RAM Needed   Speed
Llama 3.2                  1B, 3B   2–4 GB       Fast
Llama 3.1 (Recommended)    8B       8 GB         Medium
Qwen 2.5                   7B       8 GB         Medium
Gemma 2                    9B       10 GB        Medium
DeepSeek R1                8B       8 GB         Medium
Phi-3 Mini                 3.8B     4 GB         Fast

All models are free and open source. Browse the full catalog at ollama.com/library

Optimization

Performance Tips for Free Tier

Squeeze maximum performance from your free Oracle Cloud instance.

Use Browser Speech-to-Text

Set AUDIO_STT_ENGINE=webapi to offload speech recognition to the browser, saving server RAM.

Enable Model Caching

Set ENABLE_BASE_MODELS_CACHE=true for near-instant page loads when switching between models.

Disable Real-time Chat Save

Set ENABLE_REALTIME_CHAT_SAVE=false to batch database writes and prevent I/O bottlenecks.

Use Flash Attention

Set OLLAMA_FLASH_ATTENTION=1 for improved inference speed and memory efficiency on ARM.

Q4_K_M Quantization

Use Q4_K_M or Q5_K_M quantized models for the best speed/quality tradeoff on CPU-only systems.

Lightweight Task Models

Use small models (Llama 3.2 1B or 3B) for background tasks like title generation and tagging.

Critical

Security Checklist

Do not skip these. A misconfigured OpenClaw instance can expose your data and server. For an in-depth security guide, read our OpenClaw Security Hardening Guide.

  • Disable public signups after creating your admin account (ENABLE_SIGNUP=false)
  • Set a strong WEBUI_SECRET_KEY using openssl rand -base64 32
  • Enable secure session cookies (WEBUI_SESSION_COOKIE_SECURE=true)
  • Configure UFW firewall — only allow ports 22, 80, 443
  • Block direct access to Ollama port 11434 (Docker internal only)
  • Block direct access to OpenClaw port 3000 (Nginx handles external traffic)
  • Use SSH key authentication only — disable password login
  • Install and configure Fail2Ban for brute-force protection
  • Enable automatic security updates with unattended-upgrades
  • Keep Docker images updated: docker compose pull && docker compose up -d
  • Set DEFAULT_USER_ROLE=pending so new users require admin approval
  • Back up your Docker volumes regularly
  • Never expose Ollama to the public internet (it has no built-in authentication)
  • Consider Tailscale or WireGuard for additional access control
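For the volume backups on the checklist, a small script run from cron is enough. This is a sketch: the volume names assume the compose project lives in ~/openclaw (Compose prefixes volume names with the project directory), so verify yours with `docker volume ls` first:

```shell
# Write a backup script (assumed volume names; verify with `docker volume ls`):
mkdir -p ~/openclaw
cat > ~/openclaw/backup.sh << 'EOF'
#!/usr/bin/env bash
set -euo pipefail
BACKUP_DIR="$HOME/openclaw/backups"
STAMP=$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"
# Archive each named volume via a throwaway container:
for vol in openclaw_openclaw_data openclaw_ollama_data; do
  docker run --rm -v "$vol":/data:ro -v "$BACKUP_DIR":/backup alpine \
    tar czf "/backup/${vol}-${STAMP}.tar.gz" -C /data .
done
echo "Backup complete: $BACKUP_DIR"
EOF
chmod +x ~/openclaw/backup.sh

# Optional: run it nightly at 03:00 via cron:
# (crontab -l 2>/dev/null; echo "0 3 * * * $HOME/openclaw/backup.sh") | crontab -
```

Skipping ollama_data is a reasonable tradeoff if disk is tight, since models can always be re-pulled; the chats and settings in openclaw_data are what you can't recreate.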

Keeping OpenClaw Updated

OpenClaw and Ollama are actively developed with frequent updates, security patches, and new features. Updating is simple with Docker:

# Pull latest images and restart:
cd ~/openclaw
docker compose pull
docker compose up -d

# Check running versions:
docker compose ps

# View update logs:
docker compose logs --tail 20

Your conversations, settings, and downloaded models are stored in Docker volumes and persist across updates.


Want the Full OpenClaw Experience?

Our professional setup includes everything in this guide, plus security hardening with our RAK framework, curated agent skills, Telegram integration, Google Services, and ongoing support.

Learn About Managed Setup — $499

Written by Cognio Labs

Experts in AI agent deployment and security

This guide is updated regularly to reflect the latest OpenClaw versions, security patches, and best practices. Last updated February 6, 2026.