You can self-host OpenClaw with Ollama for $0/month using Oracle Cloud's Always Free tier (4 ARM CPUs, 24 GB RAM, 200 GB storage). This guide walks you through the complete setup: from provisioning your server to running local AI models behind a secure HTTPS reverse proxy.
Oracle Cloud's Always Free tier offers the most generous free compute resources of any major cloud provider. With 24 GB of RAM, you can run 7B parameter models comfortably and even quantized 13B models — something impossible on the AWS or Google Cloud free tiers, which offer only 1 GB of RAM.
Make sure you have the following ready:
If you'd rather skip the DIY process and get a security-hardened, professionally configured OpenClaw deployment with curated agent skills, Telegram integration, and Google Services — check out our managed OpenClaw setup service. We handle everything for $499.
Oracle Cloud Infrastructure (OCI) offers an Always Free tier that includes ARM-based compute instances with up to 4 OCPUs and 24 GB of RAM — more than enough to run OpenClaw with local AI models. Unlike AWS or Google Cloud free tiers, Oracle's Always Free resources never expire.
# Oracle Cloud Always Free Tier includes:
# ✓ 4 ARM Ampere A1 OCPUs (flexible allocation)
# ✓ 24 GB total RAM
# ✓ 200 GB block volume storage
# ✓ 10 TB/month outbound data transfer
# ✓ 2 Virtual Cloud Networks (VCNs)
# ✓ 1 Flexible Load Balancer (10 Mbps)
#
# Sign up at: https://cloud.oracle.com
#
# Important: Choose your home region wisely!
# Free tier resources can ONLY be created in your home region.
# Recommended: Pick a region close to you geographically.

Create an ARM-based Ampere A1 instance with the maximum free tier allocation. ARM processors offer excellent performance per watt, and the 24 GB of RAM allows you to run 7B and even quantized 13B parameter AI models locally.
ARM instances on Oracle's free tier are in high demand. If provisioning fails, try again at off-peak hours (early morning UTC) or use a retry script. The instances are well worth the wait.
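One way to script those retries is a small wrapper that re-runs the launch command until Oracle has capacity. This is a sketch, assuming the OCI CLI is installed and configured; the actual `oci compute instance launch` arguments (OCIDs, subnet, image) are elided and must be filled in:

```shell
#!/usr/bin/env bash
# Generic retry helper: re-run a command until it succeeds,
# sleeping a fixed delay between attempts.
retry() {
  local delay="$1"
  shift
  until "$@"; do
    echo "Attempt failed; retrying in ${delay}s..." >&2
    sleep "$delay"
  done
}

# Hypothetical usage with the OCI CLI (fill in your own OCIDs and flags):
# retry 300 oci compute instance launch --shape VM.Standard.A1.Flex ...
```

Capacity errors make the CLI exit non-zero, so the loop simply keeps trying until the launch goes through.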
# After creating your instance, note the public IP address.
# SSH into your new server:
ssh -i ~/.ssh/your-private-key ubuntu@YOUR_PUBLIC_IP
# First, update the system packages:
sudo apt update && sudo apt upgrade -y
# Set the hostname (optional, but helpful):
sudo hostnamectl set-hostname openclaw-server
# Create a swap file (important for handling memory spikes):
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make swap permanent:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Set swappiness to 10 (use RAM first, swap as fallback):
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# Verify swap is active:
free -h

Before installing anything, lock down your server with essential security measures. This includes configuring the firewall, hardening SSH access, and installing intrusion prevention tools.
Ensure you have SSH key access working before disabling password authentication! If you lock yourself out, you'll need to use the Oracle Cloud console to access your instance.
# Install security tools:
sudo apt install -y ufw fail2ban unattended-upgrades
# Configure UFW firewall:
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp # SSH
sudo ufw allow 80/tcp # HTTP (for Let's Encrypt)
sudo ufw allow 443/tcp # HTTPS
sudo ufw --force enable
# Verify firewall status:
sudo ufw status verbose
# Configure Fail2Ban for SSH protection:
sudo tee /etc/fail2ban/jail.local > /dev/null << 'EOF'
[sshd]
enabled = true
port = ssh
filter = sshd
# Ubuntu 22.04+ logs SSH to the systemd journal; without this backend
# setting, the jail can fail to start if /var/log/auth.log is absent.
backend = systemd
maxretry = 3
bantime = 3600
findtime = 600
EOF
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
# Harden SSH configuration (the patterns match both commented-out and
# active defaults, including "#PermitRootLogin prohibit-password"):
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart ssh
# Enable automatic security updates:
sudo dpkg-reconfigure -plow unattended-upgrades

Docker is the recommended way to deploy OpenClaw and Ollama. It provides isolation, easy updates, and reproducible deployments. Docker Compose v2 manages the multi-container setup with a single configuration file.
# Install Docker using the official convenience script:
curl -fsSL https://get.docker.com | sudo sh
# Add your user to the docker group:
sudo usermod -aG docker $USER
# Apply group changes (or log out and back in):
newgrp docker
# Verify Docker is installed and running:
docker --version
docker compose version
# Test with a hello-world container:
docker run --rm hello-world
# Enable Docker to start on boot:
sudo systemctl enable docker

This is the core step. You'll create a Docker Compose configuration that runs Ollama (the local AI model server) alongside the OpenClaw web interface. The setup uses named Docker volumes for persistent data storage.
# Create project directory:
mkdir -p ~/openclaw && cd ~/openclaw
# Generate a secure secret key:
WEBUI_SECRET=$(openssl rand -base64 32)
echo "Your secret key: $WEBUI_SECRET"
echo "Save this somewhere safe!"
# Create the Docker Compose file:
cat > docker-compose.yml << 'COMPOSE'
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ollama_data:/root/.ollama
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          memory: 4G

  openclaw:
    image: ghcr.io/open-webui/open-webui:main
    container_name: openclaw
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - WEBUI_SECRET_KEY=${WEBUI_SECRET_KEY}
      - ENABLE_SIGNUP=true
      - DEFAULT_USER_ROLE=pending
    volumes:
      - openclaw_data:/app/backend/data
    depends_on:
      - ollama
    ports:
      - "127.0.0.1:3000:8080"
    restart: unless-stopped

volumes:
  ollama_data:
  openclaw_data:
COMPOSE
# Create .env file with your secret key:
echo "WEBUI_SECRET_KEY=$WEBUI_SECRET" > .env
# Start the services:
docker compose up -d
# Check that both containers are running:
docker compose ps
# View logs to confirm startup:
docker compose logs -f --tail 50

Nginx acts as a reverse proxy in front of OpenClaw, handling SSL termination, WebSocket connections (essential for streaming AI responses), and static asset caching. Let's Encrypt provides free, auto-renewing SSL certificates.
You MUST have a domain name pointed at your server's IP address before running certbot. DNS propagation typically takes 5-30 minutes. If you don't own a domain, you can use a free subdomain from a service like DuckDNS.
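You can poll for propagation from the server itself. Here's a small sketch using `getent`; `YOUR_DOMAIN` and `YOUR_PUBLIC_IP` are placeholders for your actual values:

```shell
# Poll DNS until the domain resolves to the expected IP address.
wait_for_dns() {
  local domain="$1" expected_ip="$2"
  until getent hosts "$domain" | awk '{print $1}' | grep -qx "$expected_ip"; do
    echo "Waiting for DNS propagation..."
    sleep 30
  done
  echo "DNS is live."
}

# Usage: wait_for_dns YOUR_DOMAIN YOUR_PUBLIC_IP
```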
# Install Nginx and Certbot:
sudo apt install -y nginx certbot python3-certbot-nginx
# Obtain the SSL certificate FIRST, using Nginx's default site as the
# webroot. (If the config below referenced certificate paths before the
# certificate existed, nginx -t would fail.)
# Replace YOUR_DOMAIN and YOUR_EMAIL:
sudo certbot certonly --webroot -w /var/www/html -d YOUR_DOMAIN --non-interactive --agree-tos -m YOUR_EMAIL
# Create Nginx configuration:
sudo tee /etc/nginx/sites-available/openclaw > /dev/null << 'NGINX'
server {
    listen 80;
    server_name YOUR_DOMAIN;

    # Let's Encrypt verification (also used for renewals)
    location /.well-known/acme-challenge/ {
        root /var/www/html;
    }

    # Redirect all HTTP to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    # Note: the standalone "http2 on;" directive requires Nginx 1.25.1+;
    # the listen parameter below works on the versions Ubuntu ships.
    listen 443 ssl http2;
    server_name YOUR_DOMAIN;

    # SSL certificates obtained by certbot above
    ssl_certificate /etc/letsencrypt/live/YOUR_DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/YOUR_DOMAIN/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    # WebSocket support & streaming (CRITICAL for OpenClaw)
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # IMPORTANT: Disable buffering for streaming responses
        proxy_buffering off;
        proxy_cache off;

        # Allow long-running model responses
        proxy_read_timeout 600s;
        proxy_send_timeout 600s;

        # Allow large file uploads (for RAG documents)
        client_max_body_size 50M;
    }

    # Cache static assets
    location ~* \.(css|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|eot|js)$ {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        expires 7d;
        add_header Cache-Control "public, immutable";
    }
}
NGINX
# Enable the site:
sudo ln -sf /etc/nginx/sites-available/openclaw /etc/nginx/sites-enabled/
sudo rm -f /etc/nginx/sites-enabled/default
# Test Nginx configuration:
sudo nginx -t
# Restart Nginx with the new configuration:
sudo systemctl restart nginx
# Verify auto-renewal is enabled:
sudo certbot renew --dry-run
# Test your setup — visit https://YOUR_DOMAIN in your browser!

With Ollama running, pull your first AI models. Start with a lightweight model for testing, then add larger models based on your available RAM. With 24 GB RAM on Oracle's free tier, you can comfortably run 7B models and quantized 13B models.
# Pull your first model (lightweight, great for testing):
docker exec -it ollama ollama pull llama3.2:3b
# Pull the recommended general-purpose model:
docker exec -it ollama ollama pull llama3.1:8b
# Optional: Pull additional models based on your needs:
docker exec -it ollama ollama pull qwen2.5:7b # Great multilingual model
docker exec -it ollama ollama pull gemma2:9b # Google's efficient model
docker exec -it ollama ollama pull phi3:mini # Microsoft's compact model
docker exec -it ollama ollama pull deepseek-r1:8b # Reasoning model
# List all downloaded models:
docker exec -it ollama ollama list
# Test a model with a quick prompt:
docker exec -it ollama ollama run llama3.1:8b "What is 2+2? Answer briefly."
# Check which models are loaded in memory:
docker exec -it ollama ollama ps
# Remove a model you no longer need:
# docker exec -it ollama ollama rm model-name

Access your OpenClaw instance through the browser, create your admin account, configure performance settings, and optimize the deployment for the free tier hardware. The first account you create automatically becomes the administrator.
# After creating your admin account in the browser,
# optimize OpenClaw for free tier hardware.
# Stop the services temporarily:
cd ~/openclaw
docker compose down
# Update docker-compose.yml with production optimizations:
cat > docker-compose.yml << 'COMPOSE'
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ollama_data:/root/.ollama
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
      - OLLAMA_FLASH_ATTENTION=1
      - OLLAMA_NUM_PARALLEL=2
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          memory: 4G
        limits:
          memory: 20G

  openclaw:
    image: ghcr.io/open-webui/open-webui:main
    container_name: openclaw
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - WEBUI_SECRET_KEY=${WEBUI_SECRET_KEY}
      - ENABLE_SIGNUP=false
      - DEFAULT_USER_ROLE=pending
      - WEBUI_AUTH=true
      - WEBUI_SESSION_COOKIE_SECURE=true
      - WEBUI_SESSION_COOKIE_SAME_SITE=lax
      # Performance optimizations
      - ENABLE_REALTIME_CHAT_SAVE=false
      - ENABLE_BASE_MODELS_CACHE=true
      - AUDIO_STT_ENGINE=webapi
      - ENABLE_AUTOCOMPLETE_GENERATION=false
    volumes:
      - openclaw_data:/app/backend/data
    depends_on:
      - ollama
    ports:
      - "127.0.0.1:3000:8080"
    restart: unless-stopped

volumes:
  ollama_data:
  openclaw_data:
COMPOSE
# Restart with optimized configuration:
docker compose up -d
# Verify both services are healthy:
docker compose ps
docker compose logs --tail 20

With 24 GB RAM, here are the models that work best on CPU-only ARM hardware.
| Model | Size | RAM Needed | Speed |
|---|---|---|---|
| Llama 3.2 | 1B, 3B | 2–4 GB | Fast |
| Llama 3.1 (recommended) | 8B | 8 GB | Medium |
| Qwen 2.5 | 7B | 8 GB | Medium |
| Gemma 2 | 9B | 10 GB | Medium |
| DeepSeek R1 | 8B | 8 GB | Medium |
| Phi-3 Mini | 3.8B | 4 GB | Fast |
All models are free and open source. Browse the full catalog at ollama.com/library
Squeeze maximum performance from your free Oracle Cloud instance.
- Set AUDIO_STT_ENGINE=webapi to offload speech recognition to the browser, saving server RAM.
- Set ENABLE_BASE_MODELS_CACHE=true for near-instant page loads when switching between models.
- Set ENABLE_REALTIME_CHAT_SAVE=false to batch database writes and prevent I/O bottlenecks.
- Set OLLAMA_FLASH_ATTENTION=1 for improved inference speed and memory efficiency on ARM.
- Use Q4_K_M or Q5_K_M quantized models for the best speed/quality tradeoff on CPU-only systems.
- Use small models (Llama 3.2 1B or 3B) for background tasks like title generation and tagging.
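To sanity-check whether a quantized model will fit, a useful rule of thumb is parameters × effective bits per weight / 8, plus roughly 20% overhead for the KV cache and runtime buffers. A small sketch; the bits-per-weight figures are approximations, not exact Ollama numbers:

```shell
# Rough RAM estimate in GB: params (billions) x effective bits/weight / 8,
# plus ~20% overhead for KV cache and runtime buffers.
estimate_ram_gb() {
  awk -v p="$1" -v bits="$2" 'BEGIN { printf "%.1f", p * bits / 8 * 1.2 }'
}

estimate_ram_gb 8 4.5    # 8B model at ~Q4_K_M: prints 5.4
echo
estimate_ram_gb 13 5.5   # 13B model at ~Q5_K_M
```

Both estimates land well under 24 GB, which is why 8B models run comfortably and quantized 13B models remain feasible on this instance.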
Do not skip these. A misconfigured OpenClaw instance can expose your data and server. For an in-depth security guide, read our OpenClaw Security Hardening Guide.
Common issues and their solutions.
Everything you need to know about self-hosting OpenClaw for free.
Explore more OpenClaw resources and guides.
Enterprise-grade security for your OpenClaw deployment. RAK framework, compliance, and audit scripts.
Browse 24+ pre-built skills — email, calendar, research, coding, and more. All security-vetted.
Professional setup with security hardening, skill curation, Telegram integration, and ongoing support.
Calculate the return on investment from deploying OpenClaw for your team or business.
Browse hundreds of open-source AI models available for local deployment with Ollama.
Official documentation covering features, configuration, APIs, pipelines, and troubleshooting.
OpenClaw and Ollama are actively developed with frequent updates, security patches, and new features. Updating is simple with Docker:
# Pull latest images and restart:
cd ~/openclaw
docker compose pull
docker compose up -d
# Check running versions:
docker compose ps
# View update logs:
docker compose logs --tail 20

Your conversations, settings, and downloaded models are stored in Docker volumes and persist across updates.
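Before pulling updates, it is also worth snapshotting those volumes. A minimal backup sketch; the volume name `openclaw_openclaw_data` is an assumption based on the project directory name above, so check `docker volume ls` for the name Compose actually created:

```shell
# Archive a named Docker volume into a dated tarball in a destination dir.
backup_volume() {
  local volume="$1" dest_dir="$2"
  docker run --rm \
    -v "${volume}:/data:ro" \
    -v "${dest_dir}:/backup" \
    alpine tar czf "/backup/${volume}-$(date +%F).tar.gz" -C /data .
}

# Usage (hypothetical volume name):
# backup_volume openclaw_openclaw_data "$HOME/backups"
```

Mounting the volume read-only (`:ro`) ensures the backup cannot corrupt live data while the containers are running.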
Our professional setup includes everything in this guide, plus security hardening with our RAK framework, curated agent skills, Telegram integration, Google Services, and ongoing support.
Experts in AI agent deployment and security
This guide is updated regularly to reflect the latest OpenClaw versions, security patches, and best practices. Last updated February 6, 2026.