Efficient memory management prevents out-of-memory errors, reduces garbage collection pauses, and improves application performance. Understanding memory patterns helps you write resource-efficient code.
Memory Fundamentals
Memory Layout
Understanding how your application uses memory is the first step toward optimizing it. Most applications divide memory into three main areas, each with different characteristics and use cases.
Application Memory:
├── Stack (function calls, local variables)
│   ├── Fast allocation/deallocation
│   ├── Fixed size per thread
│   └── LIFO (Last In, First Out)
│
├── Heap (dynamic allocation)
│   ├── Objects, arrays, closures
│   ├── Managed by garbage collector
│   └── Fragmentation possible
│
└── Static/Global (constants, static vars)
    ├── Loaded at startup
    └── Lives for program lifetime
Stack memory is extremely fast because it just moves a pointer, but you can't control when data is deallocated; it happens automatically when functions return. Heap memory is more flexible but requires garbage collection in managed languages.
Common Memory Issues
Before diving into solutions, it helps to understand the problems you might encounter. Each of these issues has distinct symptoms and solutions.
Memory Leak:
- Objects referenced but no longer needed
- Unbounded caches
- Event listeners not removed
- Circular references (in some GC implementations)
Memory Bloat:
- Excessive object creation
- Large strings/arrays
- Uncompressed data in memory
- Loading entire datasets
Memory Fragmentation:
- Many small allocations/deallocations
- Objects of varying sizes
- Long-running processes
Leaks grow memory usage over time. Bloat causes immediate high memory usage. Fragmentation makes efficient allocation difficult even when total memory seems sufficient.
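The unbounded cache is the easiest of these to reproduce and to fix. A minimal JavaScript sketch contrasting an ever-growing cache with a size-capped one (keys and values are made up for illustration):

```javascript
// Leaky: the cache grows forever, one entry per unique key
const leakyCache = new Map();

function rememberLeaky(key, value) {
  leakyCache.set(key, value); // never evicted
}

// Bounded: cap the size and drop the oldest entry when full
const MAX_ENTRIES = 3;
const boundedCache = new Map();

function rememberBounded(key, value) {
  if (boundedCache.size >= MAX_ENTRIES) {
    // Map iterates in insertion order, so the first key is the oldest
    const oldestKey = boundedCache.keys().next().value;
    boundedCache.delete(oldestKey);
  }
  boundedCache.set(key, value);
}

for (let i = 0; i < 100; i++) {
  rememberLeaky(`user:${i}`, { id: i });
  rememberBounded(`user:${i}`, { id: i });
}

console.log(leakyCache.size);   // 100 — grows without bound
console.log(boundedCache.size); // 3 — stays capped
```

The bounded version uses first-in-first-out eviction for brevity; the LRU cache later in this article is the more common production policy.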
PHP Memory Management
Memory Limits and Monitoring
PHP has a configurable memory limit that prevents runaway scripts from consuming all server memory. Understanding how to monitor and work within these limits is essential for long-running processes.
This class provides a simple way to track memory usage at different points in your code. You'll find it invaluable for debugging memory issues.
// Check current memory usage
$used = memory_get_usage(true); // true = real size allocated from the system, including unused pages
$peak = memory_get_peak_usage(true);

echo "Current: " . formatBytes($used) . "\n"; // formatBytes() is an assumed helper that renders bytes as e.g. "1.5 MB"
echo "Peak: " . formatBytes($peak) . "\n";

// Memory limit
ini_set('memory_limit', '256M');
$limit = ini_get('memory_limit');

// Memory tracking in long-running processes
class MemoryMonitor
{
    public function check(string $checkpoint): void
    {
        $usage = memory_get_usage(true);
        $limit = $this->getMemoryLimit();
        $percentage = ($usage / $limit) * 100;

        if ($percentage > 80) {
            Log::warning("High memory usage at {$checkpoint}", [
                'usage' => formatBytes($usage),
                'limit' => formatBytes($limit),
                'percentage' => round($percentage, 2),
            ]);
        }
    }

    private function getMemoryLimit(): int
    {
        $limit = ini_get('memory_limit');

        if ($limit === '-1') {
            return PHP_INT_MAX; // no limit configured
        }

        return $this->parseBytes($limit);
    }

    private function parseBytes(string $value): int
    {
        // Convert php.ini shorthand like "256M" or "1G" to bytes
        $number = (int) $value;

        switch (strtoupper(substr($value, -1))) {
            case 'G': return $number * 1024 ** 3;
            case 'M': return $number * 1024 ** 2;
            case 'K': return $number * 1024;
            default:  return $number;
        }
    }
}
Monitoring memory at checkpoints helps you identify which operations consume the most memory and where leaks might be occurring. The 80% threshold gives you warning before hitting the limit.
Generators for Large Datasets
When processing large datasets, loading everything into memory at once can quickly exhaust available RAM. Generators allow you to process one item at a time, keeping memory usage constant regardless of dataset size.
Compare these two approaches: the first loads a million rows into memory, while the second processes one row at a time.
// Bad: Loads entire result set into memory
function getAllUsers(): array
{
    return User::all()->toArray(); // 1 million rows = massive memory
}

// Good: Generator yields one row at a time
function getAllUsers(): Generator
{
    foreach (User::cursor() as $user) {
        yield $user;
    }
}

// Usage
foreach (getAllUsers() as $user) {
    processUser($user); // Only one user in memory at a time
}

// Chunking for batch operations
User::chunk(1000, function ($users) {
    foreach ($users as $user) {
        processUser($user);
    }
    // Memory freed after each chunk
});

// Lazy collections
User::lazy()->each(function ($user) {
    processUser($user);
});
Laravel's cursor() method uses PHP generators under the hood, while chunk() processes records in batches. Choose based on whether you need to process items individually or in groups. Chunking is often better for operations that benefit from batch database writes.
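cursor() and chunk() are Laravel-specific, but the underlying generator pattern is the same in JavaScript. A sketch with an in-memory stand-in for the data source (the row shape here is invented for illustration):

```javascript
// Generator: yields one row at a time instead of building a full array
function* userRows(total) {
  for (let id = 1; id <= total; id++) {
    // In real code this would read from a DB cursor or stream;
    // here each row is fabricated on demand
    yield { id, name: `user-${id}` };
  }
}

// Only one row object exists per loop iteration
const seen = [];
for (const user of userRows(1_000_000)) {
  if (user.id > 3) break; // stop early — remaining rows are never created
  seen.push(user.name);
}

console.log(seen); // ['user-1', 'user-2', 'user-3']
```

Because rows are produced lazily, breaking out of the loop means the remaining 999,997 rows are never materialized at all.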
String Memory Optimization
String operations can be surprisingly memory-intensive in PHP because strings are immutable. Each concatenation creates a new string, which can lead to excessive memory usage in loops.
The following examples show progressively better approaches to building large strings.
// Bad: String concatenation creates copies
$result = '';
foreach ($items as $item) {
    $result .= $item . ','; // Creates new string each iteration
}

// Good: Use array and implode
$parts = [];
foreach ($items as $item) {
    $parts[] = $item;
}
$result = implode(',', $parts);

// Better: Use output buffering for HTML
ob_start();
foreach ($items as $item) {
    echo "<li>{$item}</li>";
}
$html = ob_get_clean();

// Stream large files instead of loading
function streamFile(string $path): void
{
    $handle = fopen($path, 'rb');

    while (!feof($handle)) {
        echo fread($handle, 8192); // 8KB chunks
        flush();
    }

    fclose($handle);
}
Output buffering and file streaming are particularly useful when generating large responses, as they avoid building the entire response in memory before sending it. The client can begin receiving data immediately.
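JavaScript has the same trade-off. A sketch of the two approaches (note that modern engines mitigate naive concatenation with rope-like string representations, so the gap is smaller than in PHP, but collect-and-join remains the idiomatic bulk approach):

```javascript
const items = ['a', 'b', 'c'];

// Concatenation in a loop: each += may create an intermediate string
let concatenated = '';
for (const item of items) {
  concatenated += item + ',';
}

// Collect parts, join once — also avoids the trailing separator
const joined = items.join(',');

console.log(concatenated); // 'a,b,c,'
console.log(joined);       // 'a,b,c'
```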
JavaScript Memory Management
Memory Leaks
JavaScript's garbage collector automatically frees memory that's no longer referenced, but it's easy to accidentally keep references alive. These patterns are common sources of memory leaks in JavaScript applications.
Each example shows a leak and its fix. The key theme is cleanup: always remove event listeners, clear timers, and limit what closures capture.
// Leak 1: Forgotten timers
class LeakyComponent {
  constructor() {
    // Timer keeps a reference to the component
    this.interval = setInterval(() => {
      this.update();
    }, 1000);
  }

  destroy() {
    // Must clear timer!
    clearInterval(this.interval);
  }
}

// Leak 2: Event listeners not removed
class LeakyEventHandler {
  constructor(element) {
    this.element = element;
    this.handler = () => this.handleClick();
    element.addEventListener('click', this.handler);
  }

  destroy() {
    // Must remove listener!
    this.element.removeEventListener('click', this.handler);
  }
}

// Leak 3: Closures holding references
function createHandler(largeData) {
  // largeData kept in closure even if not needed
  return function () {
    console.log('clicked');
    // largeData never used, but can't be garbage collected
  };
}

// Fix: Only capture what's needed
function createHandler(largeData) {
  const id = largeData.id; // Extract only needed data
  return function () {
    console.log('clicked', id);
  };
}
The key principle is to clean up after yourself: clear timers, remove event listeners, and avoid capturing more data in closures than necessary. In frameworks like React, use cleanup functions in useEffect.
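A modern way to make listener cleanup hard to forget is an AbortController: pass its signal to addEventListener, and a single abort() call detaches every listener registered with that signal. A sketch using a plain EventTarget (works in browsers and recent Node.js):

```javascript
// AbortController-based cleanup: one abort() detaches all listeners
const target = new EventTarget();
const controller = new AbortController();

let clicks = 0;
target.addEventListener('click', () => { clicks++; }, {
  signal: controller.signal, // listener is auto-removed on abort
});

target.dispatchEvent(new Event('click')); // handled
controller.abort();                       // detach everything tied to this signal
target.dispatchEvent(new Event('click')); // no longer handled

console.log(clicks); // 1
```

This scales well when a component registers many listeners: its destroy() method only needs one abort() call instead of mirroring every addEventListener with a removeEventListener.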
WeakMap and WeakSet
WeakMap and WeakSet allow you to associate data with objects without preventing those objects from being garbage collected. This is perfect for caching metadata about objects you don't own.
Unlike regular Maps, the keys in WeakMaps don't prevent garbage collection. When the key object is collected, the entry disappears automatically.
// WeakMap: Keys can be garbage collected
const metadata = new WeakMap();

function attachMetadata(obj, data) {
  metadata.set(obj, data);
}

let user = { name: 'John' };
attachMetadata(user, { lastAccessed: Date.now() });
user = null; // Object can be garbage collected
// WeakMap entry automatically removed

// Use case: Private data
const privateData = new WeakMap();

class User {
  constructor(name, password) {
    privateData.set(this, { password });
    this.name = name;
  }

  checkPassword(password) {
    return privateData.get(this).password === password;
  }
}

// When a User instance is garbage collected, its private data is too
Because entries vanish along with their keys, WeakMaps are ideal for caches and per-object private data that must not outlive the objects they describe.
Object Pooling
For applications that frequently create and destroy objects (like games or real-time visualizations), object pooling can significantly reduce garbage collection pauses by reusing objects instead of creating new ones.
This pattern pre-allocates objects and recycles them instead of letting them be garbage collected.
// Reuse objects instead of creating new ones
class ObjectPool {
  constructor(factory, initialSize = 10) {
    this.factory = factory;
    this.pool = [];
    for (let i = 0; i < initialSize; i++) {
      this.pool.push(factory());
    }
  }

  acquire() {
    if (this.pool.length > 0) {
      return this.pool.pop();
    }
    return this.factory();
  }

  release(obj) {
    // Reset object state before returning it to the pool
    if (obj.reset) {
      obj.reset();
    }
    this.pool.push(obj);
  }
}

// Usage for frequently created objects
class Particle {
  constructor() {
    this.reset();
  }

  reset() {
    this.x = 0;
    this.y = 0;
    this.velocity = { x: 0, y: 0 };
  }
}

const particlePool = new ObjectPool(() => new Particle(), 100);

function spawnParticle(x, y) {
  const particle = particlePool.acquire();
  particle.x = x;
  particle.y = y;
  return particle;
}

function despawnParticle(particle) {
  particlePool.release(particle);
}
The reset method is crucial: it ensures that acquired objects are in a clean state, preventing data from one use from leaking into the next. Object pooling is most beneficial when object creation is expensive or happens very frequently.
Caching Strategies
LRU Cache
An LRU (Least Recently Used) cache automatically evicts the oldest unused entries when it reaches capacity. This pattern is essential for bounded caching where memory is limited.
This implementation maintains both a cache map and an order array to track access recency.
class LRUCache
{
    private int $capacity;
    private array $cache = [];
    private array $order = [];

    public function __construct(int $capacity)
    {
        $this->capacity = $capacity;
    }

    public function get(string $key): mixed
    {
        if (!isset($this->cache[$key])) {
            return null;
        }

        // Move to end (most recently used)
        $this->touch($key);

        return $this->cache[$key];
    }

    public function put(string $key, mixed $value): void
    {
        if (isset($this->cache[$key])) {
            $this->cache[$key] = $value;
            $this->touch($key);
            return;
        }

        // Evict if at capacity
        if (count($this->cache) >= $this->capacity) {
            $oldest = array_shift($this->order);
            unset($this->cache[$oldest]);
        }

        $this->cache[$key] = $value;
        $this->order[] = $key;
    }

    private function touch(string $key): void
    {
        $index = array_search($key, $this->order);

        if ($index !== false) {
            unset($this->order[$index]);
            $this->order = array_values($this->order);
            $this->order[] = $key;
        }
    }
}

// Usage
$cache = new LRUCache(1000);
$cache->put('user:123', $userData);
$user = $cache->get('user:123');
The order array tracks access order, while the cache array stores the actual values. When capacity is reached, the least recently used entry is evicted, so the cache never grows unbounded. Note that touch() is O(n) in this simple implementation; production-grade LRU caches pair a hash map with a doubly linked list to make every operation O(1).
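In JavaScript the same policy falls out of Map's guaranteed insertion-order iteration: deleting and re-inserting a key moves it to the end, so the first key is always the least recently used. A sketch:

```javascript
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map(); // iteration order = insertion order
  }

  get(key) {
    if (!this.map.has(key)) return null;
    const value = this.map.get(key);
    // Re-insert so the key moves to the "most recent" end
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  put(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    if (this.map.size >= this.capacity) {
      // First key in iteration order is the least recently used
      this.map.delete(this.map.keys().next().value);
    }
    this.map.set(key, value);
  }
}

const cache = new LRUCache(2);
cache.put('a', 1);
cache.put('b', 2);
cache.get('a');    // touch 'a' — now 'b' is least recently used
cache.put('c', 3); // evicts 'b'

console.log(cache.get('b')); // null
console.log(cache.get('a')); // 1
```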
Weak Reference Cache
PHP 7.4 introduced WeakReference (with WeakMap following in PHP 8.0), allowing you to cache objects without preventing them from being garbage collected when memory is needed elsewhere.
This cache pattern is useful when you want to cache expensive objects but don't want to prevent garbage collection.
// PHP 7.4+ WeakReference
class WeakCache
{
    private array $cache = [];

    public function get(string $key): ?object
    {
        if (!isset($this->cache[$key])) {
            return null;
        }

        $ref = $this->cache[$key];
        $value = $ref->get();

        if ($value === null) {
            // Object was garbage collected
            unset($this->cache[$key]);
            return null;
        }

        return $value;
    }

    public function set(string $key, object $value): void
    {
        $this->cache[$key] = WeakReference::create($value);
    }
}

// Cached objects can still be garbage collected when not referenced elsewhere
This pattern is useful for expensive-to-compute objects that you'd like to cache if memory permits, but can afford to recompute if needed. The cache acts as a hint rather than a guarantee.
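JavaScript gained the equivalent primitive in ES2021. A sketch of the same cache shape using WeakRef, whose deref() returns undefined once the target has been collected (as with the PHP version, collection timing is up to the garbage collector):

```javascript
// WeakRef-based cache: entries don't keep their objects alive
class WeakCache {
  constructor() {
    this.cache = new Map(); // key -> WeakRef
  }

  set(key, obj) {
    this.cache.set(key, new WeakRef(obj));
  }

  get(key) {
    const ref = this.cache.get(key);
    if (!ref) return null;
    const value = ref.deref(); // undefined if already collected
    if (value === undefined) {
      this.cache.delete(key); // drop the stale entry
      return null;
    }
    return value;
  }
}

const cache = new WeakCache();
const report = { rows: [1, 2, 3] };
cache.set('report', report);

console.log(cache.get('report') === report); // true while still referenced
// Once `report` becomes unreachable, the GC may collect it and
// cache.get('report') will start returning null — a hint, not a guarantee
```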
Memory Profiling
PHP Profiling
Profiling helps you identify where memory is being consumed. Xdebug provides detailed profiling, but you can also instrument your code manually for production monitoring.
This profiler class captures memory snapshots at named checkpoints, allowing you to see how memory usage changes through your code.
// Xdebug memory profiling
// php.ini: xdebug.mode=profile, xdebug.output_dir=/tmp/xdebug

// Manual profiling
class MemoryProfiler
{
    private array $checkpoints = [];

    public function checkpoint(string $name): void
    {
        $this->checkpoints[$name] = [
            'memory' => memory_get_usage(true),
            'peak' => memory_get_peak_usage(true),
            'time' => microtime(true),
        ];
    }

    public function report(): array
    {
        $report = [];
        $previous = null;

        foreach ($this->checkpoints as $name => $data) {
            $report[$name] = [
                'memory' => formatBytes($data['memory']),
                'peak' => formatBytes($data['peak']),
            ];

            if ($previous) {
                $report[$name]['delta'] = formatBytes(
                    $data['memory'] - $previous['memory']
                );
            }

            $previous = $data;
        }

        return $report;
    }
}

// Usage
$profiler = new MemoryProfiler();
$profiler->checkpoint('start');

$data = loadLargeDataset();
$profiler->checkpoint('after_load');

processData($data);
$profiler->checkpoint('after_process');

print_r($profiler->report());
The delta between checkpoints tells you how much memory each operation consumed, helping you identify the most expensive operations. This is lightweight enough to use selectively in production.
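A rough Node.js counterpart can be built on process.memoryUsage(), whose heapUsed field reports bytes currently on the JS heap (this sketch assumes a Node environment; browsers have no equivalent API):

```javascript
// Checkpoint-based memory profiler for Node.js
class MemoryProfiler {
  constructor() {
    this.checkpoints = []; // [{ name, heapUsed, time }]
  }

  checkpoint(name) {
    this.checkpoints.push({
      name,
      heapUsed: process.memoryUsage().heapUsed, // bytes on the JS heap
      time: Date.now(),
    });
  }

  report() {
    return this.checkpoints.map((cp, i) => ({
      name: cp.name,
      heapUsedMB: (cp.heapUsed / 1024 / 1024).toFixed(2),
      // Delta vs the previous checkpoint, in bytes
      delta: i > 0 ? cp.heapUsed - this.checkpoints[i - 1].heapUsed : 0,
    }));
  }
}

const profiler = new MemoryProfiler();
profiler.checkpoint('start');

const data = new Array(100_000).fill(0).map((_, i) => ({ i })); // allocate
profiler.checkpoint('after_alloc');

const report = profiler.report();
console.log(report[1].name); // 'after_alloc'
// The delta for 'after_alloc' should be positive, though GC activity
// between checkpoints can make raw deltas noisy
```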
JavaScript Profiling
Chrome DevTools provides powerful memory profiling capabilities. You can also use the Performance API for programmatic measurement.
The heap snapshot workflow is the most effective way to find memory leaks in JavaScript applications.
// Chrome DevTools Memory tab

// Programmatic memory measurement (performance.memory is
// non-standard and Chrome-only; expect undefined elsewhere)
console.log('Memory:', performance.memory);
// {
//   jsHeapSizeLimit: 2172649472,
//   totalJSHeapSize: 19619925,
//   usedJSHeapSize: 16840001
// }

// Mark points for the Performance timeline
performance.mark('start-operation');
await heavyOperation(); // requires an async context or top-level await
performance.mark('end-operation');
performance.measure('operation', 'start-operation', 'end-operation');

// Heap snapshot comparison:
// 1. Take snapshot before
// 2. Perform action
// 3. Take snapshot after
// 4. Compare in DevTools
The heap snapshot comparison workflow is particularly useful for finding memory leaks: if memory increases after an action that should be temporary, you have a leak. Compare the snapshots to see what objects were retained.
Best Practices
General Guidelines
These principles apply across languages and help you build memory-efficient applications from the start.
1. Avoid premature optimization
   - Profile first, optimize second
   - Focus on actual bottlenecks

2. Prefer streaming over loading
   - Process data incrementally
   - Use generators/cursors

3. Release references early
   - Set to null when done
   - Unsubscribe from events

4. Use appropriate data structures
   - WeakMap for metadata
   - Object pools for frequent allocations

5. Monitor production memory
   - Track memory metrics
   - Alert on unusual patterns
The first point is critical: don't optimize memory usage until you've measured and found an actual problem. Premature optimization often makes code harder to maintain without meaningful benefits.
Code Review Checklist
Use this checklist during code reviews to catch memory issues before they reach production.
## Memory Review Checklist
- [ ] Large datasets processed in chunks/streams?
- [ ] Event listeners cleaned up on destroy?
- [ ] Timers cleared when no longer needed?
- [ ] Closures not capturing unnecessary data?
- [ ] Caches have size limits?
- [ ] Temporary variables set to null after use?
- [ ] No unbounded array growth?
- [ ] Strings built efficiently (not concatenation loops)?
Having this checklist as part of your review process catches common issues before they become production problems.
Conclusion
Memory management requires understanding how your language allocates and frees memory. Use generators and chunking for large datasets, clean up event listeners and timers, and implement bounded caches with eviction policies. Profile memory usage to find actual problems rather than optimizing prematurely. WeakReferences and object pooling help in specific scenarios. Regular monitoring in production catches memory leaks before they cause outages.