Database queries are the most common source of performance problems in web applications. A single slow query can make an entire page feel sluggish. A thousand unnecessary queries can bring a server to its knees. Here's how to identify and fix database performance issues.
Identifying Performance Bottlenecks
Before optimizing, you need to know what's slow. Don't guess; measure.
Enable Query Logging
In development, log all queries to see what your application actually executes:
// In AppServiceProvider's boot() method (development only).
// Uses the DB and Log facades (Illuminate\Support\Facades\DB, Illuminate\Support\Facades\Log).
if (app()->environment('local')) {
    DB::listen(function ($query) {
        Log::info($query->sql, [
            'bindings' => $query->bindings,
            'time' => $query->time, // duration in milliseconds
        ]);
    });
}
Laravel Debugbar is even better; it shows queries, timing, and memory usage in a toolbar.
Slow Query Logs
Configure your database to log slow queries. In MySQL:
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1
This logs any query taking more than 1 second. Review this log regularly in production.
Monitoring Tools
For production, use monitoring tools that track query performance over time. Services like Scout APM, New Relic, or Laravel Telescope show you which queries are slow and how often they run.
Query Optimization Fundamentals
Understanding EXPLAIN
Before optimizing a query, understand what the database is doing. Use EXPLAIN:
EXPLAIN SELECT * FROM projects WHERE client_id = 123 AND status = 'active';
Key things to look for:
- type: "ALL" means a full table scan (bad for large tables)
- rows: Estimated rows examined (lower is better)
- key: Which index is being used (NULL means none)
- Extra: "Using filesort" or "Using temporary" indicate potential issues
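You can also pull the plan without leaving PHP. Recent Laravel releases (8.12 and later, if memory serves) expose an explain() helper on the query builder; a quick sketch, assuming a Project model mapped to the projects table:
// Roughly the same plan as the raw EXPLAIN above
Project::where('client_id', 123)
    ->where('status', 'active')
    ->explain()
    ->dd();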
Index Strategies
Indexes are the most important optimization tool. They let the database find rows without scanning the entire table.
Create indexes on columns used in:
- WHERE clauses
- JOIN conditions
- ORDER BY clauses
// In a migration
Schema::table('projects', function (Blueprint $table) {
    $table->index('client_id');
    $table->index(['status', 'created_at']); // Composite index
});
Composite indexes matter for multi-column queries. An index on (status, created_at) helps queries that filter by status or by both status and created_at, but not queries that only filter by created_at.
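To make the leftmost-prefix rule concrete, here is how the three cases play out against that (status, created_at) index (a sketch; the actual plan depends on your data and the optimizer):
// Can use the index: filters on the leftmost column
Project::where('status', 'active')->get();

// Can use the index: filters on both columns in index order
Project::where('status', 'active')
    ->where('created_at', '>', now()->subDays(30))
    ->get();

// Cannot use the index: skips the leftmost column
Project::where('created_at', '>', now()->subDays(30))->get();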
When Indexes Hurt
Indexes aren't free. They:
- Take up disk space
- Slow down INSERT, UPDATE, and DELETE operations
- Need to be maintained
Don't index everything. Index columns that are frequently queried and have high selectivity (many distinct values).
Avoiding N+1 Queries
The N+1 problem is the most common performance issue in ORM-based applications.
// Bad: N+1 queries (1 for projects, N for clients)
$projects = Project::all();
foreach ($projects as $project) {
    echo $project->client->name; // Query executed for each project
}

// Good: 2 queries total
$projects = Project::with('client')->get();
foreach ($projects as $project) {
    echo $project->client->name; // No additional query
}
Laravel can flag lazy loading, the root cause of N+1 queries, in development:
// In AppServiceProvider
Model::preventLazyLoading(!app()->isProduction());
This throws an exception when lazy loading occurs, forcing you to add eager loading.
Nested Eager Loading
Load multiple levels of relationships:
// Load projects with their clients and client's users
$projects = Project::with('client.users')->get();
// Load multiple relationships
$projects = Project::with(['client', 'tasks', 'tasks.assignee'])->get();
Constrained Eager Loading
Load only related records you need:
$projects = Project::with(['tasks' => function ($query) {
    $query->where('status', 'pending')
        ->orderBy('due_date');
}])->get();
Eloquent-Specific Optimizations
Select Only What You Need
Don't load columns you won't use:
// Bad: Loads all columns
$projects = Project::all();
// Better: Only loads id and name
$projects = Project::select('id', 'name')->get();
This matters especially with TEXT or BLOB columns that hold large payloads.
Use Chunking for Large Datasets
Don't load millions of records into memory:
// Bad: Loads all records at once
Project::all()->each(function ($project) {
// Process
});
// Good: Processes in chunks of 1000
Project::chunk(1000, function ($projects) {
foreach ($projects as $project) {
// Process
}
});
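One caveat: if the callback updates the column you are filtering on, chunk() can silently skip rows as the result pages shift underneath it. chunkById() pages by primary key instead and avoids the problem; a sketch, reusing the status column from earlier examples:
// Safe even though the filter column changes during iteration
Project::where('status', 'pending')->chunkById(1000, function ($projects) {
    foreach ($projects as $project) {
        $project->update(['status' => 'archived']);
    }
});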
For even better memory usage with very large datasets:
// Stream rows with a database cursor: one model hydrated at a time
foreach (Project::cursor() as $project) {
    // Process one at a time
}
Query Scopes for Reusability
Encapsulate common query logic in scopes:
// In Project model
public function scopeActive($query)
{
    return $query->where('status', 'active');
}

public function scopeOverdue($query)
{
    return $query->where('due_date', '<', now())
        ->where('status', '!=', 'completed');
}
// Usage
$projects = Project::active()->overdue()->get();
Caching Strategies
Query Caching
Cache expensive queries:
$stats = Cache::remember('dashboard:stats', 3600, function () {
    return [
        'projects' => Project::count(),
        'revenue' => Invoice::paid()->sum('amount'),
        'tasks' => Task::pending()->count(),
    ];
});
Invalidate caches when underlying data changes:
// In ProjectObserver
public function created(Project $project)
{
    Cache::forget('dashboard:stats');
}
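The observer only fires if it is registered; one common place is a service provider (assuming the ProjectObserver above):
// In AppServiceProvider::boot()
Project::observe(ProjectObserver::class);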
Model Caching
Cache frequently accessed models:
// In the Project model. Note: don't name this client(); that would
// shadow the Eloquent relationship and break eager loading.
public function cachedClient(): ?Client
{
    return Cache::remember("client:{$this->client_id}", 3600, function () {
        return Client::find($this->client_id);
    });
}
Redis for High-Performance Caching
Redis outperforms file or database caching for high-traffic applications. Configure in .env:
CACHE_DRIVER=redis
Redis also supports cache tags for granular invalidation:
Cache::tags(['projects', 'client:123'])->put('key', $value, 3600);
Cache::tags(['client:123'])->flush(); // Clears all caches for this client
Database Architecture Considerations
Read Replicas
For read-heavy applications, direct read queries to replicas:
// config/database.php
'mysql' => [
    'read' => [
        'host' => ['replica1.example.com', 'replica2.example.com'],
    ],
    'write' => [
        'host' => 'primary.example.com',
    ],
    // ... other config
],
Laravel automatically uses read connections for SELECT queries.
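One caveat with replicas is replication lag: a row you just wrote may not have reached the replica when you read it back. Laravel's sticky option keeps a request's reads on the write connection once that request has written something; a one-line addition to the config above:
// Inside the same 'mysql' connection array
'sticky' => true,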
Table Partitioning
For very large tables, partitioning can improve query performance by limiting which data the database needs to scan.
Consider partitioning when:
- Tables have millions of rows
- Queries consistently filter by a specific column (like date)
- You regularly delete old data
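Partitioning is defined with raw DDL rather than schema builder methods. A minimal sketch of range partitioning by year, run from a migration; the activity_logs table and year boundaries are hypothetical, and note that MySQL requires the partition key to be part of every unique key, including the primary key:
// In a migration's up() method (DB facade: Illuminate\Support\Facades\DB)
DB::statement("
    ALTER TABLE activity_logs
    PARTITION BY RANGE (YEAR(created_at)) (
        PARTITION p2023 VALUES LESS THAN (2024),
        PARTITION p2024 VALUES LESS THAN (2025),
        PARTITION pmax VALUES LESS THAN MAXVALUE
    )
");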
Denormalization
Sometimes, normalized data isn't the fastest to query. Strategic denormalization trades write complexity for read performance.
Instead of joining to count tasks:
// Add tasks_count column to projects table
$project->increment('tasks_count'); // When adding task
$project->decrement('tasks_count'); // When removing task
Laravel's withCount can do this at query time, but a denormalized column is faster when counts are needed frequently.
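For comparison, the query-time version looks like this; withCount computes a tasks_count attribute via a subquery on every request, while the denormalized column is a plain read:
// Computed per query: one subquery per counted relationship
$projects = Project::withCount('tasks')->get();

echo $projects->first()->tasks_count;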
Monitoring and Profiling Tools
Laravel Telescope
Telescope shows queries, exceptions, and more in a dashboard. Essential for development and useful in production (with appropriate authentication).
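In production, access usually runs through Telescope's gate: the TelescopeServiceProvider that ships with the package includes a gate() method defining the viewTelescope ability, which you can tighten (the email check below is a placeholder):
// In app/Providers/TelescopeServiceProvider.php
protected function gate(): void
{
    Gate::define('viewTelescope', function ($user) {
        // Placeholder: restrict to specific accounts
        return in_array($user->email, ['admin@example.com']);
    });
}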
Database-Specific Tools
MySQL: Use SHOW PROCESSLIST to see running queries. Use performance_schema for detailed analysis.
PostgreSQL: Use pg_stat_statements to track query statistics over time.
APM Tools
Application Performance Monitoring tools like Scout, New Relic, or Datadog provide production query monitoring with historical trends and alerting.
Conclusion
Database optimization is iterative. Start by measuring to find the actual slow spots; they're often not where you'd expect. Apply the appropriate optimization: usually indexing, eager loading, or caching.
Resist premature optimization. Not every query needs to be optimized. Focus on queries that run frequently or take significant time. A query that runs once a day and takes 2 seconds might be fine. A query that runs on every page load and takes 100ms is worth optimizing.
The goal isn't the fastest possible queries. It's queries fast enough that users don't notice them. Most pages should load in under 200ms of database time. Achieve that, and your database is probably fast enough.