Most developers write SQL, but few can read what the database actually does with it. The EXPLAIN plan is the database's answer to that question: a detailed breakdown of how it will (or did) execute your query. It's the first tool to reach for when a query is slow, and it replaces guessing with evidence.
This guide covers both MySQL/MariaDB and PostgreSQL EXPLAIN plans, since the syntax and output differ significantly.
What EXPLAIN Actually Shows You
When you run EXPLAIN SELECT ..., you're asking the database's query optimizer: "What's your plan for executing this query?" The optimizer has already analyzed your query, the available indexes, table statistics, and data distribution — and it's telling you what it decided to do.
Key questions a plan answers:
- Is it using an index? Which one?
- How many rows does it expect to scan?
- What join algorithm will it use?
- Where will it sort or materialize temporary data?
MySQL / MariaDB EXPLAIN
Basic EXPLAIN
EXPLAIN SELECT o.id, o.total, u.email
FROM orders o
JOIN users u ON u.id = o.user_id
WHERE o.status = 'pending'
AND o.created_at > '2025-01-01'
ORDER BY o.created_at DESC
LIMIT 50;
Typical output:
+----+-------------+-------+--------+---------------------------+------------+---------+-----------+------+-----------------------------+
| id | select_type | table | type   | possible_keys             | key        | key_len | ref       | rows | Extra                       |
+----+-------------+-------+--------+---------------------------+------------+---------+-----------+------+-----------------------------+
|  1 | SIMPLE      | o     | range  | idx_status,idx_created_at | idx_status | 302     | NULL      | 2847 | Using where; Using filesort |
|  1 | SIMPLE      | u     | eq_ref | PRIMARY                   | PRIMARY    | 4       | o.user_id | 1    | NULL                        |
+----+-------------+-------+--------+---------------------------+------------+---------+-----------+------+-----------------------------+
Understanding Each Column
type — The most important column: the access method used, listed here from best to worst:
system → Single row (constant table)
const → Single row by primary key or unique index
eq_ref → One row from this table per row of previous (PK or unique join)
ref → Multiple rows matching index lookup
range → Index scan over a range (BETWEEN, >, <, IN)
index → Full index scan (scans entire index)
ALL → Full table scan — usually a problem
ALL (full table scan) on a large table is almost always worth investigating. eq_ref and const are excellent — they mean the optimizer found exactly the rows it needs.
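A quick way to internalize these access types is to trigger each one deliberately. A sketch against the orders/users schema above; the indexes named in the comments are assumptions:

```sql
EXPLAIN SELECT * FROM users WHERE id = 42;
-- type: const (single row by primary key)

EXPLAIN SELECT * FROM orders WHERE user_id = 42;
-- type: ref (assumes an index on orders(user_id))

EXPLAIN SELECT * FROM orders WHERE created_at > '2025-01-01';
-- type: range (assumes an index on orders(created_at))

EXPLAIN SELECT * FROM orders WHERE total > 100;
-- type: ALL (no index on total, so a full table scan)
```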
key — Which index is actually being used. NULL means no index.
rows — Estimated number of rows examined, derived from table statistics — an estimate, not a guarantee. Examining many rows to return only a few means the chosen access path is not selective enough for this query.
Extra — Additional information. Watch for:
Using filesort → Must sort in memory or on disk (expensive)
Using temporary → Creates a temporary table (expensive, especially for large datasets)
Using where → Filtering applied after index scan (normal and expected)
Using index → Index covers all needed data (very efficient)
Impossible WHERE → Query can never return rows (check your conditions)
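Using index (a covering index) is often worth engineering for. A sketch, assuming the orders table above; idx_orders_covering is a hypothetical index:

```sql
-- Every column the query touches lives in the index,
-- so the table rows themselves are never read
CREATE INDEX idx_orders_covering ON orders (status, created_at, total);

EXPLAIN SELECT created_at, total
FROM orders
WHERE status = 'pending' AND created_at > '2025-01-01';
-- Extra should now include "Using index"
```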
Using EXPLAIN ANALYZE
EXPLAIN shows the plan. EXPLAIN ANALYZE actually runs the query and reports actual row counts and timings alongside the estimates:
-- MySQL 8.0.18+
EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'pending';
-- MariaDB's equivalent is the ANALYZE statement (10.1+)
ANALYZE SELECT * FROM orders WHERE status = 'pending';
-- Both also support FORMAT=JSON, which adds cost estimates without executing
EXPLAIN FORMAT=JSON SELECT * FROM orders WHERE status = 'pending';
The JSON format in MySQL provides cost estimates:
{
  "query_block": {
    "cost_info": {
      "query_cost": "1847.25"
    },
    "table": {
      "table_name": "orders",
      "access_type": "range",
      "key": "idx_status_created",
      "used_key_parts": ["status", "created_at"],
      "rows_examined_per_scan": 2847,
      "rows_produced_per_join": 2847,
      "filtered": "100.00",
      "cost_info": {
        "read_cost": "284.70",
        "eval_cost": "284.70",
        "prefix_cost": "1847.25"
      }
    }
  }
}
PostgreSQL EXPLAIN
PostgreSQL's EXPLAIN is more detailed and uses a tree structure:
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT o.id, o.total, u.email
FROM orders o
JOIN users u ON u.id = o.user_id
WHERE o.status = 'pending'
AND o.created_at > '2025-01-01'
ORDER BY o.created_at DESC
LIMIT 50;
Output:
Limit  (cost=1247.83..1247.96 rows=50 width=52) (actual time=18.432..18.441 rows=50 loops=1)
  ->  Sort  (cost=1247.83..1254.54 rows=2684 width=52) (actual time=18.430..18.434 rows=50 loops=1)
        Sort Key: o.created_at DESC
        Sort Method: top-N heapsort  Memory: 34kB
        ->  Hash Join  (cost=89.14..1189.08 rows=2684 width=52) (actual time=2.341..15.847 rows=2706 loops=1)
              Hash Cond: (o.user_id = u.id)
              Buffers: shared hit=743 read=124
              ->  Bitmap Heap Scan on orders o  (cost=57.30..1117.66 rows=2684 width=32) (actual time=1.108..12.462 rows=2706 loops=1)
                    Recheck Cond: ((status = 'pending') AND (created_at > '2025-01-01'))
                    Heap Blocks: exact=867
                    Buffers: shared hit=624 read=124
                    ->  Bitmap Index Scan on idx_orders_status_created  (cost=0.00..56.63 rows=2684 width=0) (actual time=0.892..0.892 rows=2706 loops=1)
                          Index Cond: ((status = 'pending') AND (created_at > '2025-01-01'))
              ->  Hash  (cost=22.40..22.40 rows=740 width=24) (actual time=1.198..1.198 rows=740 loops=1)
                    Buckets: 1024  Batches: 1  Memory Usage: 47kB
                    Buffers: shared hit=119
                    ->  Seq Scan on users u  (cost=0.00..22.40 rows=740 width=24) (actual time=0.012..0.682 rows=740 loops=1)
Planning Time: 0.847 ms
Execution Time: 18.612 ms
Reading the PostgreSQL Plan Tree
Read from the inside out (most indented first):
- Bitmap Index Scan on idx_orders_status_created — uses our composite index, fast
- Bitmap Heap Scan — fetches actual rows from the table using the bitmap
- Seq Scan on users — small table (740 rows), full scan is fine
- Hash Join — builds a hash table from users, probes with orders rows
- Sort — sorts results for ORDER BY
- Limit — takes first 50
cost=X..Y means estimated startup cost (X) and total cost (Y), in the planner's abstract units (1.0 is one sequential page read by default). Costs compare alternative plans for the same query — they are not milliseconds.
actual time=X..Y means real startup time and completion time in milliseconds. The gap between estimated and actual rows is important:
Bitmap Heap Scan estimated rows=2684, actual rows=2706 → Good estimate
Seq Scan estimated rows=740, actual rows=740 → Perfect
If you see:
estimated rows=5, actual rows=50000 → Bad statistics, needs ANALYZE
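Refreshing statistics is a single command, and you can check when a table was last analyzed via the pg_stat_user_tables statistics view:

```sql
-- Recompute planner statistics for the table
ANALYZE orders;

-- When were stats last gathered, manually or by autovacuum?
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'orders';
```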
Buffers: shared hit=743 read=124 means 743 blocks were found in the buffer cache and 124 required physical reads. Consistently high read counts for frequently-accessed tables suggest the working set doesn't fit in shared_buffers.
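To tell whether those physical reads are a one-off (cold cache) or systemic, check the database-wide cache hit ratio from pg_stat_database:

```sql
-- Fraction of block requests served from the buffer cache;
-- sustained values well below ~0.99 on an OLTP workload
-- suggest shared_buffers is undersized
SELECT blks_hit::float / NULLIF(blks_hit + blks_read, 0) AS cache_hit_ratio
FROM pg_stat_database
WHERE datname = current_database();
```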
Identifying Problem Patterns
Sequential scan on large table:
-- This shows the problem
EXPLAIN SELECT * FROM events WHERE user_id = 42;
-- Output: Seq Scan on events (cost=0.00..458392.00 rows=247 width=84)
-- Filter: (user_id = 42)
-- Rows Removed by Filter: 5000000
-- Fix: add an index
CREATE INDEX CONCURRENTLY idx_events_user_id ON events (user_id);
Sort without index:
EXPLAIN SELECT * FROM orders ORDER BY created_at DESC LIMIT 10;
-- Without index: Sort (actual time=4892.111..4892.123 rows=10 loops=1)
-- Sort Method: external merge Disk: 47264kB ← sorting on disk!
-- Fix:
CREATE INDEX CONCURRENTLY idx_orders_created_at ON orders (created_at DESC);
-- With index: Index Scan Backward on idx_orders_created_at
-- (actual time=0.041..0.062 rows=10 loops=1) ← instant
Hash join with large hash table:
-- Problematic join when the hash table doesn't fit in work_mem:
-- Hash  ... Buckets: 65536  Batches: 8  Memory Usage: 4097kB
-- Batches > 1 means the hash spilled to disk
-- Increase work_mem for the session:
SET work_mem = '256MB';
EXPLAIN ANALYZE SELECT ...; -- Re-run; Batches: 1 means it now fits in memory
Composite Index Strategy
Understanding EXPLAIN plans helps you design better indexes:
-- Query pattern:
SELECT * FROM orders
WHERE status = 'pending' -- equality condition
AND created_at > '2025-01-01' -- range condition
ORDER BY created_at DESC
LIMIT 50;
-- Index design: equality columns first, then range column
CREATE INDEX idx_orders_status_created
ON orders (status, created_at DESC);
-- Why this order?
-- 1. status = 'pending' narrows rows to a small subset (a selective equality filter)
-- 2. created_at is already sorted in the index for ORDER BY
-- 3. LIMIT can stop as soon as 50 rows are found
-- Result: no sort needed, no full table scan
The rule of thumb: equality conditions first, then range conditions, then ORDER BY columns — all in the same index.
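To see why the order matters, consider a hypothetical index with the columns reversed:

```sql
-- Range column first: a poor fit for this query
CREATE INDEX idx_orders_created_status
ON orders (created_at DESC, status);

-- The scan must now walk the entire created_at > '2025-01-01' range
-- and check status row by row; once a range condition is hit, later
-- index columns can no longer narrow the scan
EXPLAIN SELECT * FROM orders
WHERE status = 'pending' AND created_at > '2025-01-01'
ORDER BY created_at DESC
LIMIT 50;
```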
Index Selectivity and Cardinality
-- Check index selectivity (fraction of unique values)
SELECT
attname,
n_distinct,
correlation
FROM pg_stats
WHERE tablename = 'orders'
AND attname IN ('status', 'user_id', 'created_at');
-- n_distinct > 0: estimated distinct value count
-- n_distinct < 0: fraction of distinct values (-1 = all unique)
-- correlation: 1.0 = physically stored in order, 0 = random
Low cardinality columns (like status with 5 possible values) are poor standalone index candidates for large tables. An index on status where 40% of rows have status='pending' means the optimizer will often skip the index and do a full table scan — correctly, because fetching 40% of a large table via index lookups is slower than a sequential scan.
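In PostgreSQL, a partial index is often the better answer for a low-cardinality filter; a sketch, assuming 'pending' rows are the minority that queries actually target:

```sql
-- Index only the pending rows; the index stays small
-- no matter how large the table grows
CREATE INDEX idx_orders_pending
ON orders (created_at DESC)
WHERE status = 'pending';

-- Queries must repeat the predicate for the planner to consider it
EXPLAIN SELECT * FROM orders
WHERE status = 'pending'
ORDER BY created_at DESC
LIMIT 50;
```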
Using EXPLAIN for Index Verification
-- Check if your new index is being used
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders
WHERE user_id = 42 AND status = 'pending'
ORDER BY created_at DESC;
-- Before index: Seq Scan, actual time=245ms
-- After index: Index Scan, actual time=0.4ms
-- The optimizer might still choose a different index
-- Force a specific index to test (MySQL):
SELECT * FROM orders USE INDEX (idx_user_status_created)
WHERE user_id = 42 AND status = 'pending';
-- PostgreSQL: discourage sequential scans to test an index
SET enable_seqscan = OFF;
EXPLAIN SELECT ...; -- Seq scans are now heavily penalized, so a usable index wins
SET enable_seqscan = ON; -- Reset
EXPLAIN for UPDATE and DELETE
Don't forget that writes also execute plans:
-- Check the plan for a bulk update
EXPLAIN UPDATE orders
SET status = 'expired'
WHERE status = 'pending'
AND created_at < NOW() - INTERVAL '30 days';
-- If this scans 5M rows to find 1000, add the index:
-- CREATE INDEX ON orders (status, created_at);
-- Same composite index design as SELECT queries
Once you can read EXPLAIN output, you stop guessing about query performance. The plan tells you exactly what the database is doing and where the cost is. Combined with query timing and index analysis, it's all you need to diagnose and fix almost any slow query.
Building something that needs to scale? We help teams architect systems that grow with their business. scopeforged.com