SuperBuilder Team

OpenClaw SQL Toolkit: Database Automation for SQLite, PostgreSQL & MySQL



Databases sit at the center of nearly every application, yet interacting with them programmatically through AI agents has traditionally been clunky --- limited to simple SELECT queries or dangerous, unguarded write access. The OpenClaw SQL Toolkit changes this. It gives your AI agent structured, safe access to SQLite, PostgreSQL, and MySQL databases with support for everything from simple lookups to complex joins, window functions, CTEs, migration scripts, and query performance analysis via EXPLAIN.

This is not a toy SQL wrapper. It is a full-featured database interaction layer designed for production use.

OpenClaw SQL Toolkit overview showing supported databases and query types

What the SQL Toolkit Does

The skill provides your OpenClaw agent with database capabilities across three axes:

  1. Querying --- from simple lookups to joins, window functions, and CTEs

  2. Database Management --- schema changes and migration scripts

  3. Performance Analysis --- execution-plan inspection via EXPLAIN

The toolkit abstracts database-specific syntax differences where possible while still allowing database-specific features when needed (e.g., PostgreSQL's JSONB operators, MySQL's GROUP_CONCAT).
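A concrete example of the kind of dialect difference the toolkit would have to paper over is string aggregation, which each engine spells differently. The sketch below lists the three spellings and runs the SQLite variant; the table and column names are illustrative, not part of the toolkit:

```python
import sqlite3

# The same aggregation in each dialect. The toolkit is said to abstract
# these differences; the query strings below are illustrative only.
DIALECT_AGG = {
    "postgresql": "SELECT category_id, STRING_AGG(name, ', ') FROM products GROUP BY category_id",
    "mysql":      "SELECT category_id, GROUP_CONCAT(name SEPARATOR ', ') FROM products GROUP BY category_id",
    "sqlite":     "SELECT category_id, GROUP_CONCAT(name, ', ') FROM products GROUP BY category_id",
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, category_id INTEGER, name TEXT)")
conn.executemany("INSERT INTO products (category_id, name) VALUES (?, ?)",
                 [(1, "widget"), (1, "gadget"), (2, "gizmo")])
rows = conn.execute(DIALECT_AGG["sqlite"]).fetchall()
print(rows)
```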

How to Install

openclaw skill install sql-toolkit

Database Driver Dependencies

Depending on which databases you use, you may need to install additional drivers:

# PostgreSQL
npm install pg

# MySQL
npm install mysql2

# SQLite (usually bundled)
# No additional installation needed

Setup and Configuration

Basic Configuration

{
  "sql-toolkit": {
    "connections": {
      "main_db": {
        "type": "postgresql",
        "host": "localhost",
        "port": 5432,
        "database": "myapp",
        "username": "agent_user",
        "password": "secure_password",
        "ssl": true
      },
      "analytics": {
        "type": "mysql",
        "host": "analytics.internal",
        "port": 3306,
        "database": "analytics",
        "username": "readonly",
        "password": "secure_password"
      },
      "local": {
        "type": "sqlite",
        "path": "/data/app.db"
      }
    },
    "safety": {
      "read_only": false,
      "allow_ddl": false,
      "allow_delete": false,
      "max_rows_returned": 1000,
      "query_timeout_seconds": 30,
      "blocked_tables": ["users_credentials", "api_keys"],
      "require_where_on_update": true
    },
    "features": {
      "explain_analysis": true,
      "migration_generation": true,
      "schema_caching": true,
      "query_history": true
    }
  }
}

Safety Configuration in Detail

The safety settings deserve careful attention. Giving an AI agent database write access is a significant decision.

read_only --- When true, the agent can only execute SELECT queries. Start here if you are unsure.

allow_ddl --- Controls whether the agent can execute Data Definition Language statements (CREATE TABLE, ALTER TABLE, DROP TABLE). Keep this false unless the agent is specifically managing migrations.

allow_delete --- Separately controls DELETE statements. Even when read_only is false, you can prevent deletions.

blocked_tables --- Tables the agent cannot query or modify under any circumstances. Always block tables containing credentials, API keys, personal data, or anything subject to compliance restrictions.

require_where_on_update --- When true, UPDATE and DELETE statements must include a WHERE clause. This prevents an accidental UPDATE users SET active = false (with no WHERE clause) from affecting every row.
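
The toolkit presumably enforces these settings with a real SQL parser; purely as an illustration of what the rules mean, a naive regex-based guard might look like this (all function and setting names mirror the config keys but are assumptions):

```python
import re

# Hypothetical sketch of the checks implied by the safety settings.
# A production implementation would parse SQL rather than use regexes.
SAFETY = {
    "read_only": False,
    "allow_delete": False,
    "blocked_tables": {"users_credentials", "api_keys"},
    "require_where_on_update": True,
}

def check_query(sql: str, safety: dict = SAFETY) -> None:
    """Raise ValueError if the statement violates the safety settings."""
    stmt = sql.strip().rstrip(";")
    verb = stmt.split(None, 1)[0].upper()
    if safety["read_only"] and verb != "SELECT":
        raise ValueError("read_only mode: only SELECT is allowed")
    if verb == "DELETE" and not safety["allow_delete"]:
        raise ValueError("DELETE statements are disabled")
    for table in safety["blocked_tables"]:
        if re.search(rf"\b{table}\b", stmt, re.IGNORECASE):
            raise ValueError(f"table '{table}' is blocked")
    if safety["require_where_on_update"] and verb in ("UPDATE", "DELETE"):
        if not re.search(r"\bWHERE\b", stmt, re.IGNORECASE):
            raise ValueError(f"{verb} requires a WHERE clause")

check_query("SELECT * FROM orders WHERE id = 1")   # passes silently
# check_query("UPDATE users SET active = 0")       # would raise: no WHERE clause
```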

Connection Security Best Practices

  1. Create a dedicated database user for the agent with minimal permissions
  2. Use read-only replicas when the agent only needs to query data
  3. Enable SSL for all non-localhost connections
  4. Never use the root/admin database user for agent connections
  5. Rotate credentials regularly and store them in environment variables rather than config files
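
As a sketch of the last point, credentials can be resolved from the environment when the config is loaded. The ${VAR} placeholder syntax here is an assumption for illustration, not a documented toolkit feature:

```python
import json
import os

# Resolve ${VAR} placeholders in a connection config from the environment
# so secrets never live in the config file itself.
CONFIG = """
{
  "main_db": {
    "type": "postgresql",
    "host": "localhost",
    "username": "${DB_USER}",
    "password": "${DB_PASSWORD}"
  }
}
"""

def resolve_env(value):
    if isinstance(value, str) and value.startswith("${") and value.endswith("}"):
        return os.environ[value[2:-1]]
    if isinstance(value, dict):
        return {k: resolve_env(v) for k, v in value.items()}
    return value

os.environ["DB_USER"] = "agent_user"        # demo values only
os.environ["DB_PASSWORD"] = "example-only"  # demo values only
connections = resolve_env(json.loads(CONFIG))
print(connections["main_db"]["username"])  # → agent_user
```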

Database connection configuration with multiple database types

Key Features Walkthrough

1. Schema-Aware Querying

Before writing queries, the agent inspects the database schema --- tables, columns, types, relationships, indexes. This means the agent writes correct queries the first time rather than guessing at table and column names.

User: How many orders did we get last month by product category?

Agent: Let me check the schema first.
[Tool call: sql_schema_inspect]
  connection: main_db
  tables: ["orders", "products", "categories"]

Agent: I can see the relationships. Here's the query:
[Tool call: sql_query]
  SELECT c.name AS category, COUNT(o.id) AS order_count
  FROM orders o
  JOIN products p ON o.product_id = p.id
  JOIN categories c ON p.category_id = c.id
  WHERE o.created_at >= '2026-03-01' AND o.created_at < '2026-04-01'
  GROUP BY c.name
  ORDER BY order_count DESC
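
The inspect-then-query pattern is easy to reproduce outside the toolkit. This self-contained sketch uses SQLite in place of the transcript's PostgreSQL connection, with made-up sample data:

```python
import sqlite3

# Step 0: a toy schema matching the transcript's tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
  CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT);
  CREATE TABLE products (id INTEGER PRIMARY KEY, category_id INTEGER REFERENCES categories(id), name TEXT);
  CREATE TABLE orders (id INTEGER PRIMARY KEY, product_id INTEGER REFERENCES products(id), created_at TEXT);
  INSERT INTO categories VALUES (1, 'Hardware'), (2, 'Software');
  INSERT INTO products VALUES (1, 1, 'widget'), (2, 2, 'app');
  INSERT INTO orders VALUES (1, 1, '2026-03-05'), (2, 1, '2026-03-20'), (3, 2, '2026-03-09');
""")

# Step 1: inspect the schema before writing the query.
columns = [row[1] for row in conn.execute("PRAGMA table_info(orders)")]
print(columns)  # → ['id', 'product_id', 'created_at']

# Step 2: write the join knowing the real column names.
rows = conn.execute("""
  SELECT c.name AS category, COUNT(o.id) AS order_count
  FROM orders o
  JOIN products p ON o.product_id = p.id
  JOIN categories c ON p.category_id = c.id
  WHERE o.created_at >= '2026-03-01' AND o.created_at < '2026-04-01'
  GROUP BY c.name
  ORDER BY order_count DESC
""").fetchall()
print(rows)  # → [('Hardware', 2), ('Software', 1)]
```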

2. Window Functions and CTEs

The toolkit handles advanced SQL constructs that many AI tools struggle with:

-- CTE with window function: ranking customers by monthly spend
WITH monthly_spend AS (
  SELECT
    customer_id,
    DATE_TRUNC('month', order_date) AS month,
    SUM(total) AS spend
  FROM orders
  GROUP BY customer_id, DATE_TRUNC('month', order_date)
)
SELECT
  customer_id,
  month,
  spend,
  RANK() OVER (PARTITION BY month ORDER BY spend DESC) AS rank,
  LAG(spend) OVER (PARTITION BY customer_id ORDER BY month) AS prev_month_spend
FROM monthly_spend
WHERE month >= '2026-01-01'
ORDER BY month, rank

The agent generates these naturally when the question demands it, rather than defaulting to simpler (and often incorrect) approaches.
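
The query above is PostgreSQL (DATE_TRUNC is not portable). The same CTE-plus-window structure runs on SQLite 3.25+ if strftime is swapped in for the month bucket, as this runnable sketch with made-up data shows:

```python
import sqlite3

# SQLite version of the monthly-spend ranking query: same CTE, RANK,
# and LAG structure, with strftime('%Y-%m', ...) replacing DATE_TRUNC.
conn = sqlite3.connect(":memory:")
conn.executescript("""
  CREATE TABLE orders (customer_id INTEGER, order_date TEXT, total REAL);
  INSERT INTO orders VALUES
    (1, '2026-01-05', 100), (1, '2026-02-03', 300),
    (2, '2026-01-10', 250), (2, '2026-02-14', 150);
""")
rows = conn.execute("""
  WITH monthly_spend AS (
    SELECT customer_id,
           strftime('%Y-%m', order_date) AS month,
           SUM(total) AS spend
    FROM orders
    GROUP BY customer_id, strftime('%Y-%m', order_date)
  )
  SELECT customer_id, month, spend,
         RANK() OVER (PARTITION BY month ORDER BY spend DESC) AS spend_rank,
         LAG(spend) OVER (PARTITION BY customer_id ORDER BY month) AS prev_month_spend
  FROM monthly_spend
  ORDER BY month, spend_rank
""").fetchall()
for row in rows:
    print(row)
# (2, '2026-01', 250.0, 1, None)
# (1, '2026-01', 100.0, 2, None)
# (1, '2026-02', 300.0, 1, 100.0)
# (2, '2026-02', 150.0, 2, 250.0)
```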

3. EXPLAIN Analysis

When a query runs slowly or you want to understand execution plans:

User: Why is the orders report taking so long?

Agent: Let me analyze the query plan.
[Tool call: sql_explain]
  query: "SELECT ... (the slow query)"
  mode: "analyze"

Agent: The issue is a sequential scan on the orders table. There's no index on the created_at column, which forces a full table scan of 2.3M rows. I recommend:

CREATE INDEX idx_orders_created_at ON orders (created_at);

This should reduce the query time from ~4 seconds to ~50ms.
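
The before/after effect of that recommendation can be seen with SQLite's EXPLAIN QUERY PLAN (the exact plan wording varies by SQLite version, so no literal output is shown):

```python
import sqlite3

# Compare the query plan before and after adding the recommended index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, created_at TEXT)")

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail).
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE created_at >= '2026-03-01'"
before = plan(query)  # reports a full scan of orders
conn.execute("CREATE INDEX idx_orders_created_at ON orders (created_at)")
after = plan(query)   # reports a search using idx_orders_created_at
print(before)
print(after)
```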

4. Migration Script Generation

The agent can generate database migration scripts:

User: I need to add a status field to the orders table with values 
pending, processing, shipped, delivered.

Agent: Here's the migration:
[Tool call: sql_generate_migration]

-- Migration: add_status_to_orders
-- Up
ALTER TABLE orders ADD COLUMN status VARCHAR(20) 
  DEFAULT 'pending' 
  CHECK (status IN ('pending', 'processing', 'shipped', 'delivered'));

CREATE INDEX idx_orders_status ON orders (status);

-- Down
DROP INDEX idx_orders_status;
ALTER TABLE orders DROP COLUMN status;
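
The up-migration above can be sanity-checked against SQLite, which accepts ADD COLUMN with a DEFAULT and an inline CHECK constraint:

```python
import sqlite3

# Apply the article's up-migration and verify the CHECK constraint
# rejects values outside the allowed set.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
conn.executescript("""
  ALTER TABLE orders ADD COLUMN status VARCHAR(20)
    DEFAULT 'pending'
    CHECK (status IN ('pending', 'processing', 'shipped', 'delivered'));
  CREATE INDEX idx_orders_status ON orders (status);
""")

conn.execute("INSERT INTO orders (status) VALUES ('shipped')")  # allowed
rejected = False
try:
    conn.execute("INSERT INTO orders (status) VALUES ('bogus')")  # violates CHECK
except sqlite3.IntegrityError:
    rejected = True
print("CHECK enforced:", rejected)
```

One portability note on the down-migration: MySQL spells index removal as DROP INDEX idx_orders_status ON orders, while PostgreSQL and SQLite use the bare DROP INDEX form shown above.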

5. Cross-Database Queries

With multiple connections configured, the agent can pull data from different databases and combine results in its response. Query PostgreSQL for order data, MySQL for analytics, and SQLite for local configuration --- all in one conversation.
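
Since the engines cannot join each other's tables directly, the combining happens in the layer above the connections. A minimal sketch of that pattern, with two in-memory SQLite databases standing in for separate servers:

```python
import sqlite3

# Two independent connections, combined in application code the way an
# agent would combine results in its reply.
orders_db = sqlite3.connect(":memory:")
orders_db.execute("CREATE TABLE orders (product TEXT, qty INTEGER)")
orders_db.executemany("INSERT INTO orders VALUES (?, ?)", [("widget", 3), ("gizmo", 1)])

analytics_db = sqlite3.connect(":memory:")
analytics_db.execute("CREATE TABLE page_views (product TEXT, views INTEGER)")
analytics_db.executemany("INSERT INTO page_views VALUES (?, ?)", [("widget", 120), ("gizmo", 45)])

# Join across databases in Python: fetch one side into a dict, look up the other.
views = dict(analytics_db.execute("SELECT product, views FROM page_views"))
combined = [
    (product, qty, views.get(product, 0))
    for product, qty in orders_db.execute("SELECT product, qty FROM orders")
]
print(combined)  # → [('widget', 3, 120), ('gizmo', 1, 45)]
```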

EXPLAIN analysis output showing query plan visualization

Real-World Use Cases

Business Intelligence

A product manager asks natural language questions about their data: "What's our customer retention rate by cohort?" The agent writes the appropriate cohort analysis query, executes it, and presents the results in a readable format. No SQL knowledge required from the user.

Database Administration

A DBA uses the agent to audit database health --- checking for missing indexes, unused tables, slow queries, and storage utilization. The agent inspects schemas, runs diagnostic queries, and generates a health report.

Data Migration

During a system migration, the agent helps transfer data between databases. It inspects the source schema, generates compatible INSERT statements for the target, handles type conversions, and validates row counts after migration.
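
The copy-then-validate step of such a migration reduces to a small pattern: read from the source connection, insert into the target, compare row counts. A toy sketch with two SQLite databases:

```python
import sqlite3

# Migrate rows between two connections and validate the count afterward.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE users (id INTEGER, email TEXT)")
source.executemany("INSERT INTO users VALUES (?, ?)",
                   [(1, "a@example.com"), (2, "b@example.com")])

target = sqlite3.connect(":memory:")
target.execute("CREATE TABLE users (id INTEGER, email TEXT)")
# executemany accepts any iterable of rows, including a cursor.
target.executemany("INSERT INTO users VALUES (?, ?)",
                   source.execute("SELECT id, email FROM users"))
target.commit()

src_count = source.execute("SELECT COUNT(*) FROM users").fetchone()[0]
dst_count = target.execute("SELECT COUNT(*) FROM users").fetchone()[0]
assert src_count == dst_count, "row counts diverged after migration"
print(f"migrated {dst_count} rows")
```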

Incident Response

When an application issue occurs, the agent quickly queries production databases to identify the scope of impact. "How many users are affected by the billing calculation bug?" gets answered in seconds rather than minutes of manual query writing.

Report Automation

Combined with communication skills, the agent can generate regular reports and distribute them. Query the database for weekly metrics, format the results, and send them via Inbounter to stakeholders who need the data in their inbox every Monday morning.

Business intelligence query results formatted as a dashboard

Pros and Cons

Pros

  - Schema-aware query generation across SQLite, PostgreSQL, and MySQL
  - Advanced SQL support: joins, window functions, CTEs, and EXPLAIN analysis
  - Granular safety controls: read-only mode, blocked tables, mandatory WHERE clauses
  - Migration script generation with up and down steps

Cons

  - Giving an AI agent database access is inherently high-stakes; the safety features reduce but cannot eliminate risk
  - Relational databases only --- no MongoDB or other NoSQL support
  - Write access requires careful per-connection configuration and a dedicated database user

Verdict and Rating

Rating: 4 / 5

The OpenClaw SQL Toolkit is one of the most practically useful skills in the ecosystem. The combination of schema awareness, advanced SQL support, safety controls, and EXPLAIN analysis makes it suitable for real production use rather than just toy demos.

The rating reflects two realities: the skill is excellent at what it does, but giving an AI agent database access is an inherently high-stakes decision. The safety features are well-designed but cannot eliminate all risk. Use read-only mode with a dedicated database user whenever possible, and reserve write access for carefully controlled scenarios.

For workflows that involve querying data and then communicating results, the SQL Toolkit pairs naturally with Inbounter for emailing reports and the Slack Integration for posting results in team channels.

Alternatives

Rating card with final score

FAQ

Q: Can the agent access cloud-hosted databases (RDS, Cloud SQL, Azure)?
A: Yes. The connection configuration supports any PostgreSQL or MySQL instance accessible over the network. Ensure the agent's host has network access to the database (security groups, VPC peering, etc.) and use SSL.

Q: How does the toolkit handle sensitive data in query results?
A: The blocked_tables setting prevents querying tables with sensitive data. For column-level control, create a database view that excludes sensitive columns and point the agent at the view instead of the base table.
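
The view pattern from that answer, shown concretely in SQLite with made-up table and column names:

```python
import sqlite3

# Expose a view without the sensitive column; the agent queries the view.
conn = sqlite3.connect(":memory:")
conn.executescript("""
  CREATE TABLE users (id INTEGER, email TEXT, password_hash TEXT);
  INSERT INTO users VALUES (1, 'a@example.com', 'xxxx');
  CREATE VIEW users_safe AS SELECT id, email FROM users;
""")
cols = [d[0] for d in conn.execute("SELECT * FROM users_safe").description]
print(cols)  # → ['id', 'email'] -- password_hash is not reachable via the view
```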

Q: Can I use this with database replicas for read-only access?
A: Yes, and this is the recommended approach. Point the agent at a read replica to eliminate any risk of accidental write operations while still getting real-time data.

Q: Does the toolkit support NoSQL databases like MongoDB?
A: No. The SQL Toolkit is specifically designed for relational databases (SQLite, PostgreSQL, MySQL). For NoSQL, look for dedicated MongoDB or DynamoDB skills on ClawHub.

Q: Can I automate regular database reports and send them via email?
A: Yes. Configure your agent to run specific queries on a schedule, format the results, and send them via Inbounter's email API. This is a common pattern for weekly metrics reports, daily health checks, or real-time alerting when query results cross thresholds.


Continue exploring: Frontend Design Skill, Coding Agent Skill, and Capability Evolver.
