SuperBuilder Team

How to Back Up Your OpenClaw Agent and Data

Tags: openclaw, backup, data, disaster recovery, self hosted

Your OpenClaw agent accumulates valuable data over time: conversation history, memory, skills, and configuration files. Losing this data means starting from scratch — rebuilding your agent's personality, knowledge, and automation. This guide covers everything you need to back up and restore your OpenClaw installation.

OpenClaw backup strategy overview

What You Need to Back Up

1. Configuration Files

Location: ~/.openclaw/config.yaml

This file contains all your settings: model preferences, channel configurations, cron jobs, safety rules, and more. Losing this means reconfiguring everything from scratch.
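The exact schema depends on your OpenClaw version, so treat the following purely as an illustration of the kinds of settings that live in this file (all key names here are hypothetical):

```yaml
# Hypothetical sketch only — check your real config.yaml for actual key names
model:
  provider: anthropic
  name: claude-sonnet
channels:
  telegram:
    enabled: true
cron:
  timezone: UTC
safety:
  confirm_destructive_actions: true
```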

2. SOUL.md

Location: ~/.openclaw/SOUL.md

Your agent's personality, instructions, and context. This is often the most time-consuming thing to recreate because it is refined over weeks of use.

3. Memory Database

Location: ~/.openclaw/data/memory.db

The SQLite database containing your agent's long-term memory, conversation history, and learned preferences. This is what makes your agent "know" you.

4. Skills

Location: ~/.openclaw/skills/

Installed skills and their configurations. While skills can be reinstalled, custom skill configurations and any locally modified skills should be preserved.

5. Cron Jobs

Location: Stored in memory.db but worth exporting separately.

openclaw cron export > ~/.openclaw/backups/cron-jobs.yaml

6. Environment Variables

Location: ~/.openclaw/.env or system environment.

API keys and secrets. Handle these with extra care — encrypt them separately.

7. Drive/Assets

Location: ~/.openclaw/drive/

Any files your agent has created or stored.

Quick Backup (Manual)

The simplest backup is a compressed archive of the entire OpenClaw directory:

# Stop the agent first for consistency
openclaw stop

# Create a timestamped backup
tar czf ~/openclaw-backup-$(date +%Y%m%d-%H%M%S).tar.gz \
  --exclude='*.log' \
  ~/.openclaw/

# Restart
openclaw start

This works, but it is manual and easy to forget.

Manual backup process

Automated Backup Script

Here is a production-ready backup script:

#!/bin/bash
# openclaw-backup.sh — Automated OpenClaw backup

# Configuration
BACKUP_DIR="$HOME/openclaw-backups"
OPENCLAW_DIR="$HOME/.openclaw"
RETENTION_DAYS=30
MAX_BACKUPS=30
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
BACKUP_NAME="openclaw-$TIMESTAMP"
BACKUP_PATH="$BACKUP_DIR/$BACKUP_NAME"

# Create backup directory
mkdir -p "$BACKUP_PATH"

# Export cron jobs as YAML
openclaw cron export > "$BACKUP_PATH/cron-jobs.yaml" 2>/dev/null

# Export memory as JSON (safer than copying SQLite while running)
openclaw memory export > "$BACKUP_PATH/memory.json" 2>/dev/null

# Copy configuration files
cp "$OPENCLAW_DIR/config.yaml" "$BACKUP_PATH/"
cp "$OPENCLAW_DIR/SOUL.md" "$BACKUP_PATH/" 2>/dev/null

# Copy the database (use sqlite3 backup for consistency)
if command -v sqlite3 &>/dev/null; then
    sqlite3 "$OPENCLAW_DIR/data/memory.db" ".backup '$BACKUP_PATH/memory.db'"
else
    cp "$OPENCLAW_DIR/data/memory.db" "$BACKUP_PATH/"
fi

# Copy skills (excluding node_modules)
tar czf "$BACKUP_PATH/skills.tar.gz" \
    --exclude='node_modules' \
    -C "$OPENCLAW_DIR" skills/ 2>/dev/null

# Copy drive files
if [ -d "$OPENCLAW_DIR/drive" ]; then
    tar czf "$BACKUP_PATH/drive.tar.gz" \
        -C "$OPENCLAW_DIR" drive/
fi

# Copy environment file (encrypted)
if [ -f "$OPENCLAW_DIR/.env" ]; then
    # Encrypt with gpg if available
    if command -v gpg &>/dev/null; then
        gpg --symmetric --batch --passphrase-file ~/.backup-passphrase \
            -o "$BACKUP_PATH/env.gpg" "$OPENCLAW_DIR/.env"
    else
        cp "$OPENCLAW_DIR/.env" "$BACKUP_PATH/.env"
        echo "WARNING: .env copied unencrypted. Consider installing gpg."
    fi
fi

# Compress the entire backup
cd "$BACKUP_DIR" || exit 1  # never tar/rm in the wrong directory if cd fails
tar czf "$BACKUP_NAME.tar.gz" "$BACKUP_NAME/"
rm -rf "$BACKUP_NAME/"

# Calculate size
BACKUP_SIZE=$(du -sh "$BACKUP_DIR/$BACKUP_NAME.tar.gz" | cut -f1)

# Clean up old backups
find "$BACKUP_DIR" -name "openclaw-*.tar.gz" -mtime +$RETENTION_DAYS -delete

# Keep only MAX_BACKUPS most recent
ls -t "$BACKUP_DIR"/openclaw-*.tar.gz 2>/dev/null | tail -n +$((MAX_BACKUPS + 1)) | xargs -r rm

echo "Backup complete: $BACKUP_DIR/$BACKUP_NAME.tar.gz ($BACKUP_SIZE)"

Make it executable:

chmod +x ~/scripts/openclaw-backup.sh
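The retention pipeline at the end of the script (`ls -t … | tail … | xargs -r rm`) is easy to get wrong by one, so it is worth trying against throwaway files first. A quick sandbox run (assumes GNU coreutils):

```shell
# Demonstrate the pruning step with dummy archives in a temp directory
DEMO_DIR=$(mktemp -d)
MAX_BACKUPS=3
for i in 1 2 3 4 5; do
    touch "$DEMO_DIR/openclaw-2026010${i}-020000.tar.gz"
    sleep 0.1   # distinct mtimes so ls -t sorts reliably
done
# Same pipeline as the backup script: keep only the newest MAX_BACKUPS
ls -t "$DEMO_DIR"/openclaw-*.tar.gz | tail -n +$((MAX_BACKUPS + 1)) | xargs -r rm
REMAINING=$(ls "$DEMO_DIR" | wc -l)
echo "$REMAINING archives kept"
rm -rf "$DEMO_DIR"
```

With five dummy archives and `MAX_BACKUPS=3`, the two oldest are deleted and three remain.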

Schedule with System Cron

# Run backup daily at 2 AM
crontab -e
# Add this line:
0 2 * * * /home/user/scripts/openclaw-backup.sh >> /home/user/openclaw-backups/backup.log 2>&1

Schedule with OpenClaw Cron

You can also use OpenClaw itself to trigger backups:

openclaw cron add --name "daily-backup" "0 2 * * *" \
  "Run /home/user/scripts/openclaw-backup.sh and report the result."

Automated backup workflow

Cloud Backup Options

Amazon S3

# Install AWS CLI
pip install awscli
aws configure

# Add to backup script:
aws s3 cp "$BACKUP_DIR/$BACKUP_NAME.tar.gz" \
  s3://your-bucket/openclaw-backups/

# Set lifecycle policy for auto-cleanup (30 day retention)

Cost: ~$0.02/GB/month for S3 Standard.
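The lifecycle policy mentioned in the comment can be set from the CLI with `aws s3api put-bucket-lifecycle-configuration`. A minimal sketch (the bucket name and prefix are placeholders; match them to your own upload path):

```shell
# Expire objects under openclaw-backups/ after 30 days
aws s3api put-bucket-lifecycle-configuration \
  --bucket your-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-openclaw-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "openclaw-backups/" },
      "Expiration": { "Days": 30 }
    }]
  }'
```

With this in place you can skip local-style cleanup on the S3 side entirely; S3 deletes expired objects for you.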

Backblaze B2

Cheaper than S3 at $0.005/GB/month:

# Install B2 CLI
pip install b2
b2 authorize-account your-key-id your-app-key

# Upload
b2 upload-file your-bucket "$BACKUP_DIR/$BACKUP_NAME.tar.gz" \
  "openclaw-backups/$BACKUP_NAME.tar.gz"

rsync to Another Server

# Sync backups to a remote server
rsync -avz --delete "$BACKUP_DIR/" user@backup-server:/backups/openclaw/

Google Drive (via rclone)

# Configure rclone for Google Drive
rclone config

# Upload backup
rclone copy "$BACKUP_DIR/$BACKUP_NAME.tar.gz" gdrive:openclaw-backups/

Restore Process

Full Restore

# Stop OpenClaw
openclaw stop

# Extract backup
tar xzf openclaw-20260405-020000.tar.gz
cd openclaw-20260405-020000

# Restore configuration
cp config.yaml ~/.openclaw/
cp SOUL.md ~/.openclaw/

# Restore database
cp memory.db ~/.openclaw/data/

# Restore skills
tar xzf skills.tar.gz -C ~/.openclaw/

# Restore drive
tar xzf drive.tar.gz -C ~/.openclaw/

# Restore environment
gpg --decrypt env.gpg > ~/.openclaw/.env  # If encrypted
# OR
cp .env ~/.openclaw/  # If unencrypted

# Import cron jobs
openclaw cron import cron-jobs.yaml

# Start OpenClaw
openclaw start

# Verify
openclaw status
openclaw memory stats
openclaw cron list

Partial Restore

Sometimes you only need to restore specific components:

# Restore only SOUL.md
cp backup/SOUL.md ~/.openclaw/

# Restore only memory
openclaw stop
cp backup/memory.db ~/.openclaw/data/
openclaw start

# Restore only cron jobs
openclaw cron import backup/cron-jobs.yaml

Restore to a New Server

When migrating to a new server:

  1. Install OpenClaw on the new server
  2. Copy the backup archive to the new server
  3. Follow the full restore process above
  4. Update channel webhooks (they point to the old server's URL)
  5. Update DNS if using a custom domain
  6. Test all channels

Restore process flowchart

Docker Backup

If you run OpenClaw in Docker, back up the mounted volumes:

# Stop the container
docker compose down

# Back up volumes
tar czf openclaw-docker-backup-$(date +%Y%m%d).tar.gz \
  config/ data/ skills/

# Restart
docker compose up -d

For zero-downtime backups, use Docker volume snapshots:

# Create a snapshot while running (brief pause)
docker run --rm \
  -v openclaw_data:/source:ro \
  -v $(pwd):/backup \
  alpine tar czf /backup/data-snapshot.tar.gz -C /source .

Backup Verification

A backup is worthless if you cannot restore it. Verify your backups regularly:

#!/bin/bash
# verify-backup.sh — Test backup integrity

LATEST_BACKUP=$(ls -t ~/openclaw-backups/openclaw-*.tar.gz 2>/dev/null | head -1)
if [ -z "$LATEST_BACKUP" ]; then
    echo "FAILED: no backups found"
    exit 1
fi

# Test archive integrity
tar tzf "$LATEST_BACKUP" > /dev/null 2>&1
if [ $? -ne 0 ]; then
    echo "FAILED: Backup archive is corrupted"
    # Send alert
    exit 1
fi

# Check expected files exist — fail loudly if any are missing
for pattern in 'config.yaml' 'memory.db\|memory.json' 'SOUL.md'; do
    if ! tar tzf "$LATEST_BACKUP" | grep -q "$pattern"; then
        echo "FAILED: $pattern missing from backup"
        exit 1
    fi
done

echo "PASSED: Backup verified successfully"

Schedule verification and get notified of failures via Inbounter:

openclaw cron add --name "verify-backup" "0 3 * * *" \
  "Run /home/user/scripts/verify-backup.sh. 
   If verification fails, send an urgent email via Inbounter to admin@company.com 
   with subject 'OpenClaw Backup Verification Failed'."

Backup verification checklist

Backup Best Practices

1. Follow the 3-2-1 Rule

Keep three copies of your data, on two different types of storage, with one copy off-site. For OpenClaw that maps to: the live system, a local backup archive, and a cloud copy.

2. Encrypt Sensitive Data

API keys, credentials, and personal conversation history should always be encrypted:

# Create an encryption passphrase file
openssl rand -base64 32 > ~/.backup-passphrase
chmod 600 ~/.backup-passphrase

3. Test Restores Quarterly

Set a calendar reminder to test a full restore every three months. Use a separate server or Docker container for testing.

4. Monitor Backup Size

Track backup size over time. Unexpected growth might indicate a memory leak or runaway logging:

du -sh ~/openclaw-backups/openclaw-*.tar.gz | tail -5

5. Exclude Unnecessary Files

Logs and temporary files waste backup space:

# In your backup script, exclude:
--exclude='*.log'
--exclude='tmp/'
--exclude='cache/'
--exclude='node_modules/'

6. Document Your Backup Process

Store your backup scripts and restore instructions alongside your backups. Future you (or a colleague) will thank you.

Disaster Recovery Plan

Have a written plan for when things go wrong:

  1. Detection: How will you know something went wrong? Set up health checks that alert you via Inbounter email or SMS.
  2. Assessment: Check what was lost (config only? memory? everything?).
  3. Recovery: Follow the restore process for the appropriate scope.
  4. Verification: Test all channels, cron jobs, and skills after restore.
  5. Post-mortem: Document what caused the failure and how to prevent it.

Recovery Time Objective (RTO): With proper backups, you should be able to restore a fully functional OpenClaw agent in under 30 minutes.

Frequently Asked Questions

How often should I back up?

Daily for most users. If your agent handles critical tasks, consider more frequent database backups (every 6 hours).
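For the 6-hour cadence, one approach is to pair the nightly full backup with lighter database-only exports in your crontab (paths are examples; note that `%` must be escaped as `\%` inside crontab entries):

```shell
# Nightly full backup, plus a memory-only export every 6 hours
0 2 * * * /home/user/scripts/openclaw-backup.sh >> /home/user/openclaw-backups/backup.log 2>&1
0 */6 * * * openclaw memory export > /home/user/openclaw-backups/memory-$(date +\%H).json
```

Using the hour in the filename means the intraday exports rotate themselves, keeping at most four per day.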

How much storage do backups need?

A typical OpenClaw installation is 50-200 MB. With 30-day retention of daily backups, budget 2-6 GB of storage.

Can I back up while OpenClaw is running?

For configuration and skills, yes. For the database, use openclaw memory export or sqlite3 .backup for consistency. Copying the database file while it is being written can result in corruption.
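A quick way to convince yourself that the `.backup` approach works while the database is in use is to try it on a throwaway database (requires the sqlite3 CLI):

```shell
# Create a tiny database, take an online backup, and read from the copy
DEMO=$(mktemp -d)
sqlite3 "$DEMO/memory.db" \
  "CREATE TABLE notes(id INTEGER, body TEXT); INSERT INTO notes VALUES (1, 'hello');"
sqlite3 "$DEMO/memory.db" ".backup '$DEMO/memory-copy.db'"
ROW=$(sqlite3 "$DEMO/memory-copy.db" "SELECT body FROM notes;")
echo "$ROW"   # hello
rm -rf "$DEMO"
```

Unlike a plain `cp`, `.backup` uses SQLite's backup API, so the copy is a consistent snapshot even if another process holds the database open.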

Do I need to back up skills if they are from the public registry?

Skills can be reinstalled, but custom configurations and locally modified skills should be backed up. The backup script in this guide handles this by excluding node_modules and compressing the rest.

What if I lose my API keys?

Regenerate them from your provider's dashboard (Anthropic, OpenAI, Google). This is why encrypting your .env backup is important — it is the only copy of these keys outside the provider.

Can OpenClaw back itself up?

Yes, by scheduling a backup script via OpenClaw cron. Just make sure the backup process does not require the agent to be stopped, or use the sqlite3 .backup method for the database.


Never miss a backup failure alert. Inbounter provides email and SMS APIs for AI agents — get instant notifications when your OpenClaw backups fail.
