Caching Strategies for Web Applications

Reverend Philip Nov 26, 2025 10 min read

Master caching at every layer. From browser caching to Redis, learn strategies that dramatically improve application performance.

Caching is one of the most effective techniques for improving application performance. By storing computed results or frequently accessed data closer to where it's needed, you can dramatically reduce response times and database load. This guide covers caching strategies at every layer of your application.

The Caching Hierarchy

Think of caching as layers, from closest to the user to closest to the data source:

  1. Browser cache - On user's device
  2. CDN cache - Edge servers worldwide
  3. Application cache - In-memory (Redis, Memcached)
  4. Database cache - Query results, buffer pools
  5. Opcode cache - Compiled PHP/Python

Each layer serves different purposes and has different trade-offs.
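To make the layering concrete, here's a sketch of a read-through chain in JavaScript - each layer is tried in order from closest to the user, a hit short-circuits the rest, and faster layers that missed are backfilled on the way out. The Map-based layers and origin function are stand-ins for illustration, not a real API:

```javascript
// Sketch: read-through across cache layers, closest-to-user first.
// Each "layer" is anything with get/set; the origin is the data source.
function layeredGet(key, layers, origin) {
    const missed = [];
    for (const layer of layers) {
        const value = layer.get(key);
        if (value !== undefined) {
            // Backfill the faster layers that missed this key.
            for (const m of missed) m.set(key, value);
            return value;
        }
        missed.push(layer);
    }
    // Every layer missed: hit the origin and populate all layers.
    const value = origin(key);
    for (const m of missed) m.set(key, value);
    return value;
}

// Usage: two in-memory Maps stand in for browser and application caches.
const browser = new Map();
const appCache = new Map();
const db = (key) => `row-for-${key}`; // stands in for the database

layeredGet('user:1', [browser, appCache], db); // misses everywhere, hits db
layeredGet('user:1', [browser, appCache], db); // served from the first layer
```

Real layers differ wildly in mechanism (headers, edge servers, Redis), but the read path follows this same shape.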

Browser Caching

The fastest request is one that never leaves the browser.

Cache-Control Headers

The Cache-Control header is your primary tool for instructing browsers how to cache responses. You can specify whether content is public (cacheable by CDNs) or private (user-specific), and how long it should be stored.

// Laravel - Cache for 1 hour
return response($content)
    ->header('Cache-Control', 'public, max-age=3600');

// Private content (don't cache in CDN)
return response($content)
    ->header('Cache-Control', 'private, max-age=3600');

// Never cache
return response($content)
    ->header('Cache-Control', 'no-store');

Note the distinction between public and private - use private for user-specific data like dashboards or account pages to prevent CDNs from serving one user's data to another.

ETag for Conditional Requests

ETags allow browsers to validate whether their cached copy is still fresh without downloading the entire resource. The browser sends the ETag back on subsequent requests, and your server can respond with a lightweight 304 status if nothing has changed.

$etag = '"' . md5($content) . '"'; // ETags are quoted strings per the HTTP spec
$response = response($content)
    ->header('ETag', $etag);

// On subsequent requests, browser sends If-None-Match
if (request()->header('If-None-Match') === $etag) {
    return response('', 304); // Not Modified
}

This pattern significantly reduces bandwidth for resources that change infrequently but need freshness validation.

Service Workers for Offline Caching

Service workers give you programmatic control over caching, enabling offline functionality and sophisticated caching strategies. They intercept network requests and can serve cached content when the network is unavailable.

// service-worker.js
const CACHE_NAME = 'v1';
const URLS_TO_CACHE = ['/css/app.css', '/js/app.js', '/offline.html'];

self.addEventListener('install', (event) => {
    event.waitUntil(
        caches.open(CACHE_NAME)
            .then(cache => cache.addAll(URLS_TO_CACHE))
    );
});

self.addEventListener('fetch', (event) => {
    event.respondWith(
        caches.match(event.request)
            .then(response => response || fetch(event.request))
    );
});

The install event pre-caches essential assets, while the fetch event implements a cache-first strategy. You'll want to version your cache name (like v1) so you can invalidate old caches when deploying updates.

CDN Caching

CDNs cache content at edge locations worldwide, reducing latency for global users.

What to Cache at the CDN

  • Static assets (CSS, JS, images, fonts)
  • Public pages that change infrequently
  • API responses for public data
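One way to encode that policy is a small helper that maps request paths to Cache-Control values, so every response gets a deliberate caching decision. The path patterns and TTLs below are illustrative assumptions, not a standard:

```javascript
// Sketch: pick a Cache-Control header per path category.
// Patterns and TTLs are illustrative; tune them to your own content.
function cachePolicy(path) {
    if (/\.(css|js|png|jpg|woff2?)$/.test(path)) {
        // Static assets: long-lived, immutable if filenames are versioned.
        return 'public, max-age=31536000, immutable';
    }
    if (path.startsWith('/api/public/')) {
        // Public API data: short shared TTL at the edge.
        return 'public, max-age=60';
    }
    if (path.startsWith('/account')) {
        // User-specific pages: never cache at the CDN.
        return 'private, max-age=0';
    }
    // Default: cache briefly.
    return 'public, max-age=300';
}

cachePolicy('/css/app.v1234.css'); // 'public, max-age=31536000, immutable'
```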

CDN Cache Headers

CDNs respect standard cache headers, but many also support CDN-specific headers that let you set different TTLs for the CDN versus the browser. The stale-while-revalidate directive is particularly useful - it allows the CDN to serve stale content while fetching a fresh copy in the background.

// Cache publicly for 1 day, allow CDN to serve stale while revalidating
return response($content)->withHeaders([
    'Cache-Control' => 'public, max-age=86400, stale-while-revalidate=3600',
    'CDN-Cache-Control' => 'max-age=604800', // CDN-specific: 1 week
]);

Cache Invalidation

When content changes, you need to purge it from the CDN. Most CDN providers offer APIs for purging specific URLs or groups of content tagged during caching.

// Purge specific URLs (Cloudflare example)
$client->purgeCache([
    'files' => ['https://example.com/css/app.css']
]);

// Purge by tag (if supported)
$client->purgeCache(['tags' => ['products']]);

Purging by tag is more maintainable for large sites - tag your product pages with products and you can invalidate them all with a single API call.

Cache Busting for Assets

For static assets, cache busting through filename versioning is often more reliable than purging. When the file content changes, the filename changes, so browsers and CDNs fetch the new version automatically.

<!-- Version in filename -->
<link href="/css/app.v1234.css" rel="stylesheet">

<!-- Query string (less reliable) -->
<link href="/css/app.css?v=1234" rel="stylesheet">

<!-- Laravel Mix versioning -->
<link href="{{ mix('css/app.css') }}" rel="stylesheet">

Query string versioning works but some CDNs ignore query strings by default. Filename versioning (handled automatically by build tools like Mix or Vite) is the most reliable approach.

Application-Level Caching

This is where Redis and Memcached shine: storing computed data in memory.

Basic Cache Usage (Laravel)

Laravel provides a clean, unified API for caching regardless of your backend. The remember method is particularly useful - it handles the common pattern of checking the cache, computing on miss, and storing the result.

// Store value
Cache::put('user:123:profile', $profile, 3600);

// Retrieve value
$profile = Cache::get('user:123:profile');

// Remember pattern (get from cache or compute)
$profile = Cache::remember('user:123:profile', 3600, function () {
    return $this->computeExpensiveProfile(123);
});

// Forever cache (until manually invalidated)
Cache::forever('settings:global', $settings);

// Invalidate
Cache::forget('user:123:profile');

Be thoughtful about TTLs - longer TTLs mean better performance but potentially staler data. Match the TTL to how frequently the underlying data changes.

Cache Tags for Group Invalidation

Tags let you group related cache entries so you can invalidate them together. This is invaluable when a change affects multiple cached items - like updating a product that appears on several listing pages.

// Cache with tags
Cache::tags(['products', 'featured'])->put('featured-products', $products, 3600);

// Invalidate all caches with tag
Cache::tags(['products'])->flush();

Note that cache tags require a backend that supports them (Redis or Memcached, not the file or database drivers).

Distributed Locking

When multiple requests hit an empty cache simultaneously, they might all try to rebuild it at once - a phenomenon called cache stampede. Distributed locking ensures only one process rebuilds while the others wait.

// Atomic cache get/set with lock
$value = Cache::lock('rebuild-report')->block(10, function () {
    return Cache::remember('expensive-report', 3600, function () {
        return $this->generateExpensiveReport();
    });
});

The block(10) call waits up to 10 seconds for the lock, throwing a LockTimeoutException if it can't acquire one in time. This is essential for expensive computations that multiple users might request simultaneously.

Common Caching Patterns

Cache-Aside (Lazy Loading):

The cache-aside pattern is the most common approach. Your application checks the cache first, falls back to the database on miss, and populates the cache for future requests.

public function getProduct($id)
{
    $key = "product:{$id}";

    if ($cached = Cache::get($key)) {
        return $cached;
    }

    $product = Product::find($id);
    Cache::put($key, $product, 3600);

    return $product;
}

This pattern works well when reads vastly outnumber writes, which is the case for most web applications.

Write-Through:

With write-through caching, you update the cache whenever you update the database. This ensures the cache is always fresh but adds write latency.

public function updateProduct($id, $data)
{
    $product = Product::findOrFail($id);
    $product->update($data);

    // Update cache immediately after database
    Cache::put("product:{$id}", $product, 3600);

    return $product;
}

Write-Behind (Async):

Write-behind caching prioritizes write speed by updating the cache immediately and queueing the database write for later. Use this carefully - you risk data loss if the queue fails before the database is updated.

public function updateProduct($id, $data)
{
    // Update cache immediately
    Cache::put("product:{$id}", $data, 3600);

    // Queue database write
    UpdateProductJob::dispatch($id, $data);
}

This pattern is suitable for non-critical data like analytics counters or user preferences where occasional loss is acceptable.

Database Query Caching

Query Result Caching

Caching query results is straightforward - wrap your query in Cache::remember and choose an appropriate key and TTL.

// Cache query results
$users = Cache::remember('active-users', 3600, function () {
    return User::where('active', true)->get();
});

Avoid Caching Queries Directly

Don't use the SQL query itself as a cache key - it couples your cache to implementation details and makes invalidation difficult. Instead, use semantic keys that describe the business concept.

// Less ideal - coupling to query structure
$key = md5($sql);

// Better - cache by business concept
$key = "user:{$id}:dashboard_stats";

Database-Level Caching

MySQL and PostgreSQL handle their own query result caching through buffer pools. Generally, you should let the database manage this layer and focus your caching efforts at the application level, where you have more control.

-- MySQL query cache (deprecated in 8.0)
-- Use InnoDB buffer pool instead

-- PostgreSQL shared_buffers
-- Configured in postgresql.conf

Redis Data Structures for Caching

Redis offers more than key-value storage:

Hashes for Objects

Redis hashes let you store and retrieve individual fields of an object without serializing the entire thing. This is efficient when you frequently need just a subset of an object's data.

// Store object fields
Redis::hset('user:123', 'name', 'John');
Redis::hset('user:123', 'email', 'john@example.com');

// Get all fields
$user = Redis::hgetall('user:123');

// Get single field
$name = Redis::hget('user:123', 'name');

Hashes also support atomic field updates, so you can increment a counter or modify one field without touching others.

Sorted Sets for Leaderboards

Sorted sets maintain elements ordered by score, making them perfect for leaderboards, ranking systems, or any data you need to retrieve by rank.

// Add scores
Redis::zadd('leaderboard', 100, 'user:1');
Redis::zadd('leaderboard', 200, 'user:2');

// Get top 10
$top = Redis::zrevrange('leaderboard', 0, 9, ['WITHSCORES' => true]);

The beauty of sorted sets is that ranking operations are O(log N), making them efficient even with millions of entries.
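To see where that O(log N) comes from, here's a sketch of rank retrieval as a binary search over a descending score array. Redis actually maintains a skip list internally, but the logarithmic intuition carries over:

```javascript
// Sketch: the 0-based rank of a score is the number of entries with a
// strictly higher score, found by binary search on a descending array.
function rankOf(scoresDesc, score) {
    let lo = 0, hi = scoresDesc.length;
    while (lo < hi) {
        const mid = (lo + hi) >> 1;
        if (scoresDesc[mid] > score) lo = mid + 1; // still above us, go right
        else hi = mid;                             // at or below, go left
    }
    return lo; // entries with a higher score = our rank
}

rankOf([200, 150, 100, 50], 150); // rank 1 - only 200 is higher
```

Each iteration halves the search space, so even a leaderboard with millions of scores resolves a rank in a few dozen comparisons.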

Lists for Queues

Redis lists support push and pop operations from both ends, making them suitable for simple queue implementations.

// Push to queue
Redis::lpush('job-queue', json_encode($job));

// Pop from queue
$job = Redis::rpop('job-queue');

For production job queues, you'll want Laravel's queue system which handles reliability concerns like job retries and failure tracking.

Cache Invalidation Strategies

Cache invalidation is famously difficult. Here are three common approaches:

Time-Based Expiration

Time-based expiration is the simplest invalidation strategy and works well when some staleness is acceptable: you accept that data might be stale for up to N seconds and let entries expire naturally.

Cache::put('stats', $stats, 300); // 5 minutes

This works well for dashboards, analytics, and any data where perfect freshness isn't required.

Event-Based Invalidation

For data that must be immediately consistent, invalidate caches when the underlying data changes. Laravel model events are perfect for this - they fire automatically on save, delete, and other operations.

// In Product model
protected static function booted()
{
    static::saved(function ($product) {
        Cache::forget("product:{$product->id}");
        Cache::tags(['products'])->flush();
    });
}

Be careful with tag flushing in high-write scenarios - you might invalidate more caches than necessary, reducing your hit ratio.

Version-Based Keys

Version-based keys offer a middle ground - instead of deleting cache entries, you change the key so old entries naturally expire. This is useful when you can't easily enumerate all the cache keys that need invalidation.

$version = Cache::get('products:version', 1);
$key = "products:list:v{$version}";

// Invalidate by incrementing version
Cache::increment('products:version');

Old cache entries remain until they expire or are evicted, so this approach trades memory for simplicity.

Measuring Cache Effectiveness

Cache Hit Ratio

Tracking your cache hit ratio tells you whether your caching strategy is working. A low hit ratio might indicate keys are expiring too quickly, or you're caching data that's rarely requested.

$stats = Redis::info('stats');
$hits = $stats['keyspace_hits'];
$misses = $stats['keyspace_misses'];
$hitRatio = ($hits + $misses) > 0 ? $hits / ($hits + $misses) : 0;

Memory Usage

Monitoring memory usage helps you right-size your cache infrastructure and catch runaway growth before it becomes a problem.

$info = Redis::info('memory');
$usedMemory = $info['used_memory_human'];

Target: Hit ratio above 90% for frequently accessed data.

Common Pitfalls

  1. Cache stampede: Many requests rebuild same cache simultaneously. Use locks.

  2. Stale data: Always have invalidation strategy. Know your staleness tolerance.

  3. Memory pressure: Monitor memory usage. Set appropriate eviction policies.

  4. Over-caching: Don't cache data that's cheap to compute or rarely accessed.

  5. Cache key collisions: Use namespaced, descriptive keys.
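On pitfall 5, a tiny key builder helps enforce the convention - every key goes through one function, so namespaces stay consistent and collisions can't sneak in. The naming scheme here is just one possible convention:

```javascript
// Sketch: build namespaced cache keys from parts, so ('user', 12, 'profile')
// and ('userprofile', 12) can never collide and keys stay greppable.
function cacheKey(...parts) {
    return parts
        .map(p => String(p).replace(/:/g, '_')) // ':' is the separator only
        .join(':');
}

cacheKey('user', 123, 'profile'); // 'user:123:profile'
```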

Conclusion

Effective caching requires understanding your data access patterns and staleness tolerance. Start with browser and CDN caching for static content, add Redis for frequently accessed dynamic data, and always have a clear invalidation strategy. Monitor your hit ratios and adjust TTLs based on actual usage patterns.
